Closed Bug 1473149 Opened 2 years ago Closed 2 years ago

Add a dynamic-atom state to DOMString

Categories

(Core :: DOM: Core & HTML, enhancement, P2)


Tracking


RESOLVED FIXED
mozilla63
Tracking Status
firefox63 --- fixed

People

(Reporter: bzbarsky, Assigned: bzbarsky)

References

Details

Attachments

(2 files, 2 obsolete files)

See bug 1447951 comment 15.

The patch coming up makes current tip perform like the "control" build there, whereas currently it performs like the "test" build.
Nick, do you feel comfortable reviewing this?  If not, I definitely want your review for the bits that actually touch atoms and will get someone else for the dom/xpconnect integration bits.
Attachment #8989556 - Flags: review?(n.nethercote)
Please check a testcase for this into perf-reftest or perf-reftest-singletons if you can.
Not trivial to do, since I had to disable the LICM optimizations we usually apply to getAttribute to measure this.

I could try to write up something that defeats those optimizations somehow, but it would be fragile, in the sense that changing JIT heuristics could make the optimization reappear...
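To make the fragility concrete, here is a hypothetical sketch (not the actual Talos test) of the kind of trick involved: vary getAttribute's argument each iteration so the JIT cannot hoist the call out of the loop via LICM. A heuristics change could still defeat this. `el` is stubbed as a plain object so the sketch runs outside a browser; the real test would use a DOM element.

```javascript
// Hypothetical sketch: defeat loop-invariant code motion (LICM) by making
// getAttribute's argument vary per iteration. "el" is a stub standing in
// for a DOM element so this runs outside a browser.
const el = {
  attrs: { id: "x", class: "y" },
  getAttribute(name) {
    return Object.prototype.hasOwnProperty.call(this.attrs, name)
      ? this.attrs[name]
      : null;
  },
};
const names = ["id", "class"];
let sink = null; // keep the result observable so the loop isn't dead code
for (let i = 0; i < 100000; ++i) {
  sink = el.getAttribute(names[i & 1]); // alternating argument blocks hoisting
}
console.log(sink); // "y" (the final lookup is "class")
```

Even so, nothing stops a future optimizer from unrolling the loop or proving the alternation invariant, which is exactly the fragility described above.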
Fair enough. I'd just like us to think about ways that we can add regression tests for stuff like this. It won't always be doable.
Comment on attachment 8989556 [details] [diff] [review]
Add an external string variant that keeps a DynamicAtom alive

Review of attachment 8989556 [details] [diff] [review]:
-----------------------------------------------------------------

I am mostly, but not totally, confident in my r+ here. I'll let you decide whether to get another pair of eyes on this :)

::: js/xpconnect/src/XPCConvert.cpp
@@ +583,5 @@
>              // compiled into libxul, such as the string "undefined" above.
>              const char16_t* chars = JS_GetTwoByteExternalStringChars(str);
>              ws->AssignLiteral(chars, length);
>          } else {
> +            // We don't bother cheking for a dynamic-atom external string,

Typo: cheking
Attachment #8989556 - Flags: review?(n.nethercote) → review+
And thank you for fixing the problem I introduced!
Bobby, is this all that's involved in adding tests to this dir?
Attachment #8989664 - Flags: review?(bobbyholley)
> Typo: cheking

Fixed, good catch.
Boris, this is in the wrong component; I made a guess at the correct one.
Component: Audio/Video: Recording → XUL
Component: XUL → DOM
Priority: -- → P2
Comment on attachment 8989664 [details] [diff] [review]
Performance tests

You probably also want to add this to the manifest file. But I'm not sure if that's enough to make this appear on perf alerts, or whether there's an additional step - rwood?

Also for rwood - how well does the infra for these tests scale? Is it ok to add 7 test files when we want to test an optimization like this? If not, how do we get there?
Attachment #8989664 - Flags: review?(bobbyholley) → review?(rwood)
This should have been in DOM all along.  I have no clue how it ended up in Audio/Video...
Comment on attachment 8989664 [details] [diff] [review]
Performance tests

Review of attachment 8989664 [details] [diff] [review]:
-----------------------------------------------------------------

It's great that you're adding more perf-reftests!

To add these to the perf-reftest-singletons suite, you need to also update the manifest file:

https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/perf-reftest-singletons/perf_reftest_singletons.manifest

Then if you run them locally first (./mach talos-test --suite perf-reftest-singletons) you'll see the tests time out in their current form, because they're missing the window.onload wrapper (see below; the same goes for each of the 7 new tests).

I added that to each test, and updated the perf-reftest-singletons manifest and they ran locally just fine (I'm on OSX).

Note: Do you just want these as singletons? Or do you have another page to compare each with? If so, add them to the perf-reftest manifest here:

https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/perf-reftest/perf_reftest.manifest

i.e. in that manifest's first line, bloom-basic is compared to bloom-basic-ref, and the actual test result is the comparison.

Either way please update and push to try and test talos perf-reftest-singletons (or perf-reftest if you add them there too) on all platforms, and that will also give us an idea of the total job duration.

It *should* be fine just adding all these new pages to the existing suite(s) as it looks like the total job time right now is around 10 minutes. Feel free to r? me again when they're ready. Thanks! :)

::: testing/talos/talos/tests/perf-reftest-singletons/id-getter-1.html
@@ +1,3 @@
> +<!doctype html>
> +<script src="util.js"></script>
> +<script>

window.onload = function() {

@@ +9,5 @@
> +  perf_start();
> +  for (var i = 0; i < count; ++i) {
> +    getter.call(el);
> +  }
> +  perf_finish();

};

@@ +10,5 @@
> +  for (var i = 0; i < count; ++i) {
> +    getter.call(el);
> +  }
> +  perf_finish();
> +</script>

<body></body>
</html>
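Assembled from the review hunks above, the suggested shape for each test page is roughly the following (the setup of `el`, `getter`, and `count` is elided in the quoted diff and omitted here too):

```html
<!doctype html>
<script src="util.js"></script>
<script>
window.onload = function() {
  // ... setup of el, getter, and count (elided in the quoted diff) ...
  perf_start();
  for (var i = 0; i < count; ++i) {
    getter.call(el);
  }
  perf_finish();
};
</script>
<body></body>
</html>
```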
Attachment #8989664 - Flags: review?(rwood)
Correction: to run locally, the syntax is ./mach talos-test --suite perf-reftest-singletons-e10s
> Note: Do you just want these as singletons?

Yep, I do. I don't have a useful reference for them; I just want alerts if they regress.
And thank you for describing how to set this up correctly and test it!
Attachment #8990429 - Flags: review?(rwood)
Attachment #8989556 - Attachment is obsolete: true
(In reply to Boris Zbarsky [:bz] (no decent commit message means r-) from comment #16)
> Created attachment 8990429 [details] [diff] [review]
> With the talos bits hooked up correctly now

Hi Boris can you please provide a link to the updated perf-reftest-singletons suite running on try on all the platforms? Thanks :)
Flags: needinfo?(bzbarsky)
https://treeherder.mozilla.org/#/jobs?repo=try&revision=b7ba70655cef38768f944485fdcb0f93d25136a6 -- I'll put up another version of the patch with the eslint bits fixed.  ;)
Flags: needinfo?(bzbarsky)
Attachment #8990460 - Flags: review?(rwood)
Attachment #8990429 - Attachment is obsolete: true
Attachment #8990429 - Flags: review?(rwood)
Flags: needinfo?(rwood)
Comment on attachment 8990460 [details] [diff] [review]
With eslint bits fixed

Review of attachment 8990460 [details] [diff] [review]:
-----------------------------------------------------------------

Applied latest patch and ran talos perf-reftest-singletons locally (OSX) and works great. Code LGTM, and the try run in comment 18 is green (minus the lint errors). The new perf-reftest-singletons job duration is fine (7-13min). Thanks! r+
Attachment #8990460 - Flags: review?(rwood) → review+
Flags: needinfo?(rwood)
:igoldan, fyi, this is going to change the perf-reftest-singletons baseline and probably cause a few automated regression alerts.
Flags: needinfo?(igoldan)
By the way, do we have alerts for this test on a per-subtest basis, or just for the overall number?  I'm hoping we have per-subtest alerts....
Pushed by bzbarsky@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/ed26ab52032e
Add an external string variant that keeps a DynamicAtom alive.  r=njn,rwood
https://hg.mozilla.org/mozilla-central/rev/ed26ab52032e
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla63
Depends on: 1474815
(In reply to Robert Wood [:rwood] from comment #21)
> :igoldan, fyi, this is going to change the perf-reftest-singletons baseline
> and probably cause a few automated regression alerts.

Thanks for the heads up!
Flags: needinfo?(igoldan)
(In reply to Boris Zbarsky [:bz] (no decent commit message means r-) from comment #22)
> By the way, do we have alerts for this test on a per-subtest basis, or just
> for the overall number?  I'm hoping we have per-subtest alerts....

I'm curious about this too. FWIW, the overall number isn't going to be all that meaningful, and will change whenever we add tests. I'm trying to change the culture in Platform engineering to add perf-reftests when we fix performance cliffs like this, so in my ideal world we'd be adding these quite frequently.
Flags: needinfo?(rwood)
(In reply to Bobby Holley (:bholley) from comment #26)
> (In reply to Boris Zbarsky [:bz] (no decent commit message means r-) from
> comment #22)
> > By the way, do we have alerts for this test on a per-subtest basis, or just
> > for the overall number?  I'm hoping we have per-subtest alerts....
> 
> I'm curious about this too. FWIW, the overall number isn't going to be all
> that meaningful, and will change whenever we add tests. I'm trying to change
> the culture in Platform engineering to add perf-reftests when we fix
> performance cliffs like this, so in my ideal world we'd be adding these
> quite frequently.

No, the perf-reftest-singletons test is a single test (no subtests), so the alerting is done on the top-level / overall test score. However, when there is a regression, you can manually look at the replicates from every test page: in Treeherder, select the 'ps' job symbol, click 'job details', and click the 'perfherder-data.json' link; all the values are there.

When new tests are added, we don't want to extend the current perf-reftest-singletons job duration any further, so another entire test job would need to be added, i.e. 'ps2'.

These tests run on a large number of platforms on a large number of branches, which results in a lot of data to be tracked (and for the perf sheriffs to manage when investigating regressions), so we don't do automatic alerting on every test page.
Flags: needinfo?(rwood)
> so the alerting is done on the top level / overall test score.

That significantly reduces its utility.  Further, it makes it dangerous to add more tests to it, because the more tests we add the more likely it is we will miss a regression in one of the subtests.

What infrastructure do we have in place for tests where we want to be alerted when any subtest has a significant regression?

How does reporting work for perf-reftest (not perf-reftest-singletons)?
Flags: needinfo?(rwood)
Yeah, I think we really want per-subtest scores here.
Basically, the goal here is to have something like mochitests, wherein developers add regression tests when they fix performance bugs. So we really need something that scales.
Flags: needinfo?(rwood) → needinfo?(jmaher)
There are 15 separate test pages currently in perf-reftest-singletons. Maybe we could split those up into 3 groups of 5 subtests each (and those groups would be alerted on)? That would give us finer-grained regression tracking without adding 14 more alerted tests.
Keep in mind we run these on 10 different configs right now and 2 branches that will alert on these tests, so we have 1 data point right now generating up to 20 alerts. If we multiply that out by 15, that is up to 300 alerts that could fire on any given push. We do not have the bandwidth to sheriff that many new tests. I like :rwood's proposal of splitting this into a few groups. For many tests we will have the test owners monitor the tests; could that be the case for these? If people are adding tests that will run on a limited pool of hardware, I would expect that they would care about the tests and want to watch the results.
Flags: needinfo?(jmaher)
We set up a call to chat about this next Thursday.
Component: DOM → DOM: Core & HTML