Bug 1355540 (Closed) — Opened 4 years ago, Closed 4 years ago
Consider using an array or segmented array for mLinksToUpdate, not a hashtable
Hashtable lookups are a bit slow. The array would need to have strong ref to the elements and then FlushPendingLinkUpdates() would need to check IsInComposedDoc(). There wouldn't be need for nsIDocument::UnregisterPendingLinkUpdate at all.
need to also ensure FlushPendingLinkUpdates() actually gets called occasionally.
~30% of the time is spent in mLinksToUpdate changes.
25-30% speedup in the test. mHasLinksToUpdateRunnable is needed so that we don't dispatch a new runnable each time code in a loop binds links to the document and also calls methods which call FlushPendingLinkUpdates synchronously. https://treeherder.mozilla.org/#/jobs?repo=try&revision=dba9ad2fe353ba846754814d986688c90c493fc2
Assignee: nobody → bugs
Attachment #8857222 - Flags: review?(bzbarsky)
Btw, the patch seems to bring us on par with Chrome with the worst case scenario testcase (linux).
Why does this come up in bug 1355472 at all? Is that because of the innerHTML bits? Because I don't see how the bits quoted in bug 1355472 comment 0 would trigger this code at all...
Comment on attachment 8857222 [details] [diff] [review]
patch

So if we have a script that does a bunch of moving the same subtree around the DOM (or heck, just calls a.appendChild(b) a bunch of times with the same values of a and b), we can have a single link end up in this vector multiple times. In fact, we can exhaust memory that way... This doesn't make me very happy. :(

Can we keep track on the relevant elements of whether they have a link update scheduled and avoid calling RegisterPendingLinkUpdate again if they do? We'd need to reset that state on adoption and when the actual update happens...

Even that won't help a case that creates links, adds them, then removes them. We'd keep growing memory until the runnable fires. :(
Attachment #8857222 - Flags: review?(bzbarsky) → review-
We don't really have spare bits atm (I think someone was going to take the bit from bug 1338723). I'm not so worried about memory, given that each inserted element takes 8 bytes on 64-bit (4 bytes on 32-bit). It takes 131072 or 262144 insertions to get up to 1MB, and the event loop needs to be full of other stuff.
> and event loop needs to be full of other stuff.

Not really; just need to be in a loop in JS and not coming back to the event loop... Honestly, I'm mostly worried about microbenchmarks here.

Link has free bits (either in mLinkState, which we can make a uint8_t, or we could convert some of its bool members to bits), for what it's worth. That would at least handle the "same element inserted multiple times before going to the event loop" case.
This way then? https://treeherder.mozilla.org/#/jobs?repo=try&revision=e3db0bea0044cfcf019b75840096ffec84f726d5

Makes the silly testcase 15% faster than Chrome.
Comment on attachment 8858446 [details] [diff] [review]
with flag

>+ bool mPendingLinkUpdate : 1;

I'd prefer mHasPendingLinkUpdate.

>+ // To ensure correct mPendingLinkUpdate handling, we have this method similar
>+ // to the one in Element.

Please document that overriders of this method must call ClearHasPendingLinkUpdate().

r=me
Attachment #8858446 - Flags: review?(bzbarsky) → review+
Pushed by email@example.com: https://hg.mozilla.org/integration/mozilla-inbound/rev/3f319382720c use SegmentedVector for pending links to avoid slow hashtable lookups in hot codepaths, r=bz