Closed Bug 1355540 Opened 5 years ago Closed 5 years ago

Consider using an array or segmented array for mLinksToUpdate, not a hashtable


(Core :: DOM: Core & HTML, enhancement, P1)

50 Branch



Tracking Status
firefox55 --- fixed


(Reporter: smaug, Assigned: smaug)


(Blocks 1 open bug)



(3 files, 1 obsolete file)

Hashtable lookups are a bit slow.
The array would need to hold strong references to the elements, and FlushPendingLinkUpdates() would then need to check IsInComposedDoc().

There would be no need for nsIDocument::UnregisterPendingLinkUpdate at all.
We would also need to ensure FlushPendingLinkUpdates() actually gets called occasionally.
Attached file a worst case scenario
~30% of the profile is mLinksToUpdate changes.
Priority: -- → P1
Attached patch patch (obsolete) — Splinter Review
25-30% speedup in the test.

mHasLinksToUpdateRunnable is needed for the case where code in a loop does something
which binds links to the document and also calls methods which call FlushPendingLinkUpdates synchronously.
Assignee: nobody → bugs
Attachment #8857222 - Flags: review?(bzbarsky)
Btw, the patch seems to bring us on par with Chrome on the worst-case-scenario testcase (Linux).
Why does this come up in bug 1355472 at all?  Is that because of the innerHTML bits?  Because I don't see how the bits quoted in bug 1355472 comment 0 would trigger this code at all...
innerHTML yes.
Comment on attachment 8857222 [details] [diff] [review]

So if we have a script that does a bunch of moving the same subtree around the DOM (or heck, just calls a.appendChild(b) a bunch of times with the same values of a and b), we can have a single link end up in this vector multiple times.  In fact, we can exhaust memory that way...

This doesn't make me very happy.  :(

Can we keep track on the relevant elements of whether they have a link update scheduled and avoid calling RegisterPendingLinkUpdate again if they do?  We'd need to reset that state on adoption and when the actual update happens...

Even that won't help a case that creates links, adds them, then removes them.  We'd keep growing memory until the runnable fires.  :(
Attachment #8857222 - Flags: review?(bzbarsky) → review-
We don't really have spare bits atm (I think someone was going to take the bit from Bug 1338723).

I'm not so worried about memory, given that each insertion adds 8 bytes on 64-bit
(4 bytes on 32-bit).
It takes 131072 insertions (64-bit) or 262144 (32-bit) to get up to 1MB, and event loop needs to be full of other stuff.
> and event loop needs to be full of other stuff.

Not really; just need to be in a loop in JS and not coming back to the event loop...  Honestly, I'm mostly worried about microbenchmarks here.

Link has free bits (either in mLinkState, which we can make a uint8_t, or we could convert some of its bool members to bits), for what it's worth.  That would at least handle the "same element inserted multiple times before going to the event loop" case.
Attached patch with flag — Splinter Review
This way then?

Makes the silly testcase 15% faster than Chrome.
Attachment #8857222 - Attachment is obsolete: true
Attachment #8858446 - Flags: review?(bzbarsky)
Comment on attachment 8858446 [details] [diff] [review]
with flag

>+  bool mPendingLinkUpdate : 1;

I'd prefer mHasPendingLinkUpdate.

>+  // To ensure correct mPendingLinkUpdate handling, we have this method similar
>+  // to the one in Element.

Please document that overriders of this method must call ClearHasPendingLinkUpdate().

Attachment #8858446 - Flags: review?(bzbarsky) → review+
Pushed by
use SegmentedVector for pending links to avoid slow hashtable lookups in hot codepaths, r=bz
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla55
Component: DOM → DOM: Core & HTML