Test setup during raptor-webext tests is too long for power-tests
Categories
(Testing :: Raptor, defect, P3)
Tracking
(Not tracked)
People
(Reporter: sparky, Unassigned)
Details
I've noticed that there is a lot of test setup being done in raptor-webext tests after __run_test_cold or __run_test_warm is called. This isn't problematic for standard tests, but it's a big issue for resource-measurement tests like power tests, because we're picking up power usage that isn't related to the browser.
We should see whether (i) power-test metrics at a per-page-cycle granularity are useful, or (ii) moving the longest-running test setup code outside of the test run methods would help.
Comment 1•5 years ago
Are we starting the timer for post-startup-delay before or after those setup steps? My feeling is that we do the former, which should indeed be changed.
Comment 2•5 years ago
Also, do you have a Gecko profile of such a run?
Reporter
Comment 3•5 years ago
We start it before those setup steps. What I think we need to do here is something similar to how CPU and memory recording are implemented, which use a message from the control server to finish the recording: https://dxr.mozilla.org/mozilla-central/source/testing/raptor/raptor/webextension/base.py#109
The points where those resource gatherers start also need to be improved (because I think they are affected by the delay), but they are still better than where power-usage recording is initialized: https://dxr.mozilla.org/mozilla-central/source/testing/raptor/raptor/webextension/android.py#327
So there are two things we can do here: either fix this by implementing a new control-server message that starts the resource measurements, or implement all of this resource recording in browsertime and start running power tests from there, since we'll have to do that in any case.
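A minimal sketch of the first option, i.e. letting a control-server message (rather than browser launch) trigger the start of a resource measurement. All names here (MeasurementDispatcher, the "start_power" message type) are illustrative assumptions, not the actual raptor control-server API:

```python
# Hypothetical sketch: dispatch control-server messages to
# resource-measurement callbacks, loosely modeled on how the
# webext control server hands off messages in base.py.
# None of these names are the real raptor API.

class MeasurementDispatcher:
    def __init__(self):
        self._handlers = {}
        self.log = []

    def register(self, message_type, handler):
        # Map a control-server message type to a callback.
        self._handlers[message_type] = handler

    def dispatch(self, message_type):
        # Called when the webext posts a message. Starting the
        # measurement here, after test setup has finished, keeps
        # setup-related power draw out of the recording.
        handler = self._handlers.get(message_type)
        if handler is None:
            return False
        handler()
        return True


def demo():
    dispatcher = MeasurementDispatcher()
    dispatcher.register(
        "start_power",
        lambda: dispatcher.log.append("power recording started"),
    )
    # ...long-running test setup happens here, before the message...
    dispatcher.dispatch("start_power")
    return dispatcher.log
```

The key design point is the same as for the existing CPU/memory messages: the measurement window is bounded by explicit messages from the test, not by harness startup, so setup cost never lands inside the recording.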
Comment 4•5 years ago
A benefit of implementing it for the webext would be that we have better performance numbers to compare against when moving to Browsertime.
Updated•2 years ago