This bug is for investigating the issues we are seeing when we run Browsertime on desktop. The biggest issue is that, for a given page, Raptor shows Firefox as faster while Browsertime shows Chrome as faster (and vice versa).
The other issue is that the average results don't show a consistent increase or decrease that would be indicative of a change in overhead - a consistent offset would be expected, and perfectly reasonable, given that we are using two different tools. This leads to the question of which tool is reporting the correct values; we can't tell at this stage.
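For reference, a rough sketch of the kind of comparison we're doing here (the data layout and numbers are hypothetical, this isn't the actual analysis script):

```python
# Hypothetical sketch: check whether Browsertime shows a consistent
# offset relative to Raptor across pages. Input layout is assumed,
# not the real perfherder data format.
from statistics import mean

raptor_results = {"amazon": [520, 530, 525], "google": [310, 305, 315]}
browsertime_results = {"amazon": [560, 555, 565], "google": [290, 295, 300]}

for page in raptor_results:
    diff = mean(browsertime_results[page]) - mean(raptor_results[page])
    print(f"{page}: browsertime - raptor = {diff:+.1f} ms")

# A consistent sign/magnitude across pages would point at tool
# overhead; what we actually see is the sign flipping per page.
```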
:nalexander has a theory that the additional processes running for Browsertime (on top of existing processes like mitmproxy) add enough load to skew the measurements. This would explain why the mobile tests give good results, since they offload many, if not all, of those processes to a host machine.
The ideal solution would be to run those processes on a host machine for desktop as well. That said, if the theory is true, this would be a large amount of work, so we first need to determine whether it's actually the case - that's the purpose of this bug.
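One cheap way to sanity-check the theory before committing to the host-machine work would be to log the CPU load of the helper processes during a run. A rough sketch (assumes psutil is available and that matching on process names like these is good enough):

```python
# Hypothetical sketch: sample CPU usage of browsertime-related helper
# processes during a test run. The process-name patterns are assumptions.
import time
import psutil

WATCHED = ("node", "mitmdump", "mitmproxy", "chromedriver", "geckodriver")

def sample_load(duration=60, interval=5):
    for _ in range(int(duration / interval)):
        totals = {}
        for proc in psutil.process_iter(["name", "cpu_percent"]):
            name = (proc.info["name"] or "").lower()
            if any(w in name for w in WATCHED):
                totals[name] = totals.get(name, 0.0) + (proc.info["cpu_percent"] or 0.0)
        print(totals)
        time.sleep(interval)

sample_load()
```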
What we've tried so far to test this theory (note that we need Chrome on the machine to be able to tell whether the issue is properly resolved):
- Testing on large vs. xlarge Linux instances. Not fruitful: no drastic changes in variance, and we don't have Chrome on these instances.
- Testing on Mac Minis vs. MBPs. Some changes in metrics were found; they aren't hugely significant, but they suggest we may be onto something (see the sketch after this list for how we'd judge significance).
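For deciding whether differences like the Mini vs. MBP one are real rather than noise, something along these lines is what I have in mind (the replicate values are made up, and using scipy here is an assumption):

```python
# Hypothetical sketch: test whether two sets of page-load replicates
# differ significantly. The numbers below are placeholders.
from scipy.stats import mannwhitneyu

macmini = [512, 498, 530, 505, 521, 515, 509]
mbp = [478, 470, 492, 485, 466, 480, 488]

stat, p = mannwhitneyu(macmini, mbp, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")  # p < 0.05 would suggest a real difference
```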
The next step is to get Chrome on the two MBP machines and then redo the second test. It should be installed on them shortly (for another reason): bug 1607708.
If, after retrying with Chrome, we still don't find anything (quite possible, because the Mini and MBP machines aren't very different in terms of specs), then we will have to set up something locally (host machine + target machine) to test the change in metrics.