Closed Bug 1409981 Opened 3 years ago Closed 3 years ago

2.02 - 2.06% JS (windows7-32) regression on push b63029f0a1c2 (Thu Oct 5 2017)


(Testing :: Marionette, defect)




(Reporter: igoldan, Unassigned)



(Keywords: perf, regression)

We have detected an AWSY regression from push:

Since you authored one of the patches included in that push, we need your help to address this regression.


  2%  JS summary windows7-32 opt      82,093,963.51 -> 83,785,201.08
  2%  JS summary windows7-32 pgo      81,908,758.16 -> 83,565,147.16
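For reference, the quoted percentages can be reproduced from the raw byte counts above. A quick sanity check (illustrative script, not part of the alert tooling):

```javascript
// Sanity-check the regression percentages from the raw JS summary
// values quoted in the alert (bytes, windows7-32).
const results = [
  { name: "opt", base: 82093963.51, next: 83785201.08 },
  { name: "pgo", base: 81908758.16, next: 83565147.16 },
];

for (const { name, base, next } of results) {
  const pct = ((next / base - 1) * 100).toFixed(2);
  console.log(`${name}: +${pct}%`); // opt: +2.06%, pgo: +2.02%
}
```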

You can find links to graphs and comparison views for each of the above tests at:

On the page above you can see an alert for each affected platform as well as a link to a graph showing the history of scores for this test. There is also a link to a treeherder page showing the jobs in a pushlog format.

To learn more about the regressing test(s), please see:
Component: Untriaged → Marionette
Product: Firefox → Testing
:whimboo I don't see bug 1375660 modifying actual Firefox code, yet the changes to Marionette still caused these regressions.
Considering these changes to the way we do our tests, I guess we should accept these results as the new baseline for JS summaries. Am I right, or were these results unexpected?
Flags: needinfo?(hskupin)
There is certainly the risk of a possible performance regression. The code as included is used by the Selenium project, and we hadn't updated it for a very long time, so we pulled in all the changes from the last 4 years to make Marionette more stable in interacting with elements. On the other hand, we are working on finally getting rid of those atoms; that work is covered by bug 1354203. It's a step-by-step process, and it might still take a while to get this work done.

Lately we got one atom removed which was done via bug 1275273 by the following two changesets:

Could someone please check if that improved our situation a bit?

On the other hand, I'm not sure how and when this data is collected. Talos tests should not make use of Marionette at all, and as such those atoms should not be loaded or used. When I check the subtest page from the above Talos alert I see:

Base: 022f5922d176 (autoland)
New:  876b36faeae2 (autoland)

So couldn't this also have been caused by the changes on bug 1405624?
Flags: needinfo?(hskupin)
Alert positions are almost always inaccurate due to scheduling and in-tree oddities. In this case we needed to backfill 10 revisions because we hadn't collected data for them. If you look at the graph, you can see where the alert was placed and the large volume of data between the regression and the alert (that is all backfilled data, not a bug in the alert code).

As for Marionette + Talos: you are correct, we have no Marionette code in Talos. This is the AWSY benchmark, and we do use Marionette there. In fact we get alerts from 7 different frameworks and sheriff 5 of them; Talos is just one of those, though it makes up about half of the total metrics.
I see. So I had a look at AWSY, and it looks like there is only a single command (`findElement`) which would require the atoms. But currently we always import atoms.js in several JavaScript modules of Marionette. This might indeed be an overhead, which we could reduce by using `defineLazyGetter` specifically for this module.
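The lazy-import idea can be sketched in plain JavaScript. This is a minimal stand-in for Firefox's `XPCOMUtils.defineLazyGetter`, not the actual Marionette code; `loadAtoms` is a hypothetical placeholder for importing atoms.js:

```javascript
// Minimal sketch of the lazy-getter pattern: the expensive module is
// only evaluated on first property access, and only once.
let loads = 0;
function loadAtoms() {
  // Hypothetical stand-in for importing the atoms.js module.
  loads++;
  return { findElement: strategy => `found via ${strategy}` };
}

function defineLazyGetter(obj, name, factory) {
  Object.defineProperty(obj, name, {
    configurable: true,
    get() {
      const value = factory();
      // Replace the getter with the computed value so the factory
      // runs at most once.
      Object.defineProperty(obj, name, { value, configurable: true });
      return value;
    },
  });
}

const scope = {};
defineLazyGetter(scope, "atoms", loadAtoms);
console.log(loads);                               // 0: nothing loaded yet
console.log(scope.atoms.findElement("css selector"));
console.log(loads);                               // 1: loaded on first access
scope.atoms;                                      // no reload on later access
console.log(loads);                               // still 1
```

Modules that never call an atoms-backed command would then never pay the cost of loading atoms.js at all.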

The bump in memory usage would only be visible once a command is executed which itself makes use of a Selenium atom. In the case of AWSY that would still apply at least to `driver.js`, but not to any of the `listener.js` framescripts which get injected for new tabs.

Joel, I could check whether that works, but could you please tell me how to confirm improvements for the AWSY Talos job?

Also, what would be an acceptable increase? And did the removal of `isElementSelect`, as asked about in my last comment, actually make a difference? We would really like to keep the current atoms in Marionette.
Flags: needinfo?(jmaher)
In this case you are only seeing Win7 AWSY regressions, so: |./mach try -p win32 -u awsy -t none --rebuild 3|

If we are changing the tool, this isn't a regression that end users of Firefox are going to see. Possibly end users of Marionette will see more resources consumed, but this test targets Firefox, not Marionette users.
Flags: needinfo?(jmaher)
Joel, so you are saying it would be fine to keep the new values as the baseline? Just asking because I don't want to spend time on investigation and fixes which aren't actually necessary, given that this only occurs when Marionette is active.
Flags: needinfo?(jmaher)
Correct; feel free to close this as WONTFIX, as this is a tooling change.
Flags: needinfo?(jmaher)
Unfortunately, instrumenting the browser for automation necessarily
imposes a certain overhead, and I think we need to accept these new
numbers as our baseline for as long as we rely on the Selenium atoms.

There is ongoing work (bug 1354203) to wean us off the atoms, and this
will hopefully remove the performance problem in the long term.
Great to see agreement here. Going to WONTFIX this bug then. Thanks to everyone involved.
Closed: 3 years ago
Resolution: --- → WONTFIX