Closed Bug 735342 Opened 12 years ago Closed 12 years ago

CanSkip for unoptimizable nodes held live by a binding manager

Categories

(Core :: DOM: Core & HTML, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla14

People

(Reporter: mccr8, Assigned: mccr8)

References

(Blocks 1 open bug)

Details

(Whiteboard: [snappy])

Attachments

(1 file, 2 obsolete files)

In a simple CC log with just a few tabs open (like Gmail), plus the error console to watch the CC messages, I have a bunch of little independent 3-node loops like this:

0x12eaff0d0 [rc=5] nsGenericElement (XUL) box class='console-row' chrome://global/content/console.xul
> 0x12042e7f0 Preserved wrapper

0x12042e7f0 [gc] JS Object (XULElement)
> 0x130ef5b70 xpc_GetJSPrivate(obj)

0x130ef5b70 [rc=1] XPCWrappedNative (XULElement)
> 0x12eaff0d0 mIdentity

There are about 4300 nodes in the graph total, and around 1000 nodes just in these little triples, so removing them would reduce the graph quite a bit for me. (I think these nodes are mostly from the error console, so the benefit to real world situations would be less.)

These are not being removed from the CC graph because the nsGenericElements are unoptimizable. But they are not being fully traversed either, because only the preserved wrapper edge is being visited!

This means that one of the checks in the following code is being triggered, so that nsINode::Traverse returns false:
  nsIDocument *currentDoc = tmp->GetCurrentDoc();
  if (currentDoc &&
      nsCCUncollectableMarker::InGeneration(cb, currentDoc->GetMarkedCCGeneration())) {
    return false;
  }

  if (nsCCUncollectableMarker::sGeneration) {
    // If we're black no need to traverse.
    if (tmp->IsBlack() || tmp->InCCBlackTree()) {
      return false;
    }

I would guess it is the first, because these XUL elements are all in various XUL documents that would seem to be obviously alive (console.xul, browser.xul, hiddenwindow.xul).

Talking to smaug, it sounds like the logic in CanSkip is just a little conservative, so I can make it a little more aggressive to match nsINode::Traverse.
Attached patch WIP (obsolete) — Splinter Review
Comment on attachment 605601 [details] [diff] [review]
WIP

Smaug, does this look vaguely sane?  I think the CanSkipInCC changes are pretty straightforward, as they will only return true when Traverse wouldn't traverse any children anyway.
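For concreteness, what I have in mind is roughly this (a sketch of the idea only, not the actual patch; the helper name and the single-argument InGeneration overload here are my approximations):

  // Sketch: an otherwise-unoptimizable node can still be skipped in the CC
  // graph whenever nsINode::Traverse would bail out before adding any child
  // edges, i.e. its current document was marked in this CC generation.
  static bool
  CanSkipUnoptimizableNodeInCC(nsINode* aNode)
  {
    nsIDocument* currentDoc = aNode->GetCurrentDoc();
    return currentDoc &&
           nsCCUncollectableMarker::InGeneration(currentDoc->GetMarkedCCGeneration());
  }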

My main concern is the change to CanSkip.  I think I probably misunderstood you on IRC.  Should I only mark the node's children when I know the node is keeping itself alive?  That is the scenario I described to you, and you said that was okay, but I'm not sure what you meant by "that".  This patch seems like it is probably not okay, because we don't know the node is alive, even though it is in a document that is alive.  Is that right?
Attachment #605601 - Flags: feedback?(bugs)
Comment on attachment 605601 [details] [diff] [review]
WIP

Actually, CanSkip should work, since that is what Traverse has been doing for a long time. I've just been very conservative with these changes.

But please push to try.
Attachment #605601 - Flags: feedback?(bugs) → feedback+
This is a little different from what Traverse does, because Traverse does not assume that the objects are alive.  For instance, if the DOM node is actually being held alive by a path through the JS object, and nothing else, then this patch would leak because we're marking it black in CanSkip, whereas before Traverse would trace through the JS, leaving it gray, and find the cycle.
This passed try, but I'm still skeptical of both this patch and the original optimization in Traverse.

If a node is anonymous content, then there's not necessarily a path between the document and the node, right?  That node can have a child, and the two will keep each other alive through the strong parent pointer, forming a cycle the CC needs to break.  But if the document is live, the CC won't traverse the edges between the two nodes, so they will stay alive.  It seems to me that we need something like (!unoptimizable || NodeHasActiveFrame(currentDoc, aNode)) in Traverse.  Am I missing something?  My knowledge of anonymous content is almost non-existent.  I'll try looking at the examples I have of unoptimizable nodes to see what is happening there.
(In reply to Andrew McCreight [:mccr8] from comment #6)
> If a node is anonymous content, then there's not necessarily a path between
> the document and the node, right? 
Hmm, now that I think about this more... there kind of is, via the BindingManager.  The document owns the binding manager, which owns the anonymous content, and the anonymous content owns the document via its nodeinfo/nodeinfomanager.  The path just isn't very obvious, but it is there.
Great! I just went through the code and I can see how that path works out.
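For my own notes, the chain looks roughly like this (the getters below are my approximation of the relevant APIs, not a quote from the tree, and the function name is made up):

  // Sketch of the ownership path described above (API names approximated).
  bool DocumentAndBoundContentKeepEachOtherAlive(nsIDocument* aDoc, nsIContent* aContent)
  {
    // Document -> binding manager -> bound (anonymous) content:
    nsXBLBinding* binding = aDoc->BindingManager()->GetBinding(aContent);
    // ...and back: content -> node info -> node info manager -> document.
    nsIDocument* backToDoc = aContent->NodeInfo()->NodeInfoManager()->GetDocument();
    // If both hops exist, the bound content and the live document hold each other alive.
    return binding && backToDoc == aDoc;
  }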
Attached patch WIP, less scary (obsolete) — Splinter Review
This patch also gets rid of those tiny loops, but in a less scary way.  Instead of eliminating the checks entirely, we directly check if an unoptimizable node is held via a binding.  I don't know if this is too expensive to be reasonable, but if we need some faster approximation I was going to leave this in as an assertion.
Assignee: nobody → continuation
Attachment #605601 - Attachment is obsolete: true
Attachment #606674 - Attachment is obsolete: true
Summary: CanSkip for unoptimizable nodes in a doc in the current generation → CanSkip for unoptimizable nodes held live by a binding manager
Comment on attachment 606709 [details] [diff] [review]
WIP, less scary, more compiling

This patch does two things.

First, it moves the live doc check in CanSkipInCC to before the unoptimizable node check, to more closely match nsINode::Traverse.

Second, it adds another way we can optimize unoptimizable nodes: if the current doc is live, and the node is contained in the current doc's binding manager, the node must be live.  It looks like it is just a hash table lookup, so it doesn't seem like it would be too expensive.  I could narrow this check a bit more, if you think that makes sense.  In the basic graphs I've looked at, this only really matters for XUL nodes, so I could just check for those.

The goal of this patch is to optimize out the tiny 3-node loops.  The second part of the patch means that the JS wrapper object gets marked black.  The first part of the patch then means that the now-childless XUL node won't get added to the graph.
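In rough pseudo-code, the combined effect on CanSkip is something like this (a sketch of the idea only; MarkNodeChildren is a hypothetical helper name here, and the exact condition is my approximation of the patch rather than a quote from it):

  // Sketch: if the node's current document was marked alive in this CC
  // generation, and the node is either optimizable or provably held by that
  // document's binding manager, mark its children (including the preserved
  // JS wrapper) black and skip the node entirely.
  if (currentDoc &&
      nsCCUncollectableMarker::InGeneration(currentDoc->GetMarkedCCGeneration()) &&
      (!unoptimizable || OwnedByBindingManager(currentDoc, tmp))) {
    MarkNodeChildren(tmp);  // hypothetical helper
    return true;
  }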

It passed my try run with another couple of patches: https://tbpl.mozilla.org/?tree=Try&rev=869ed959b506
Attachment #606709 - Flags: review?(bugs)
Comment on attachment 606709 [details] [diff] [review]
WIP, less scary, more compiling

I still see around 200 of these XUL triples with just about:blank open when I hit the 2-minute timeout, so I should investigate more.
Attachment #606709 - Flags: review?(bugs)
Comment on attachment 606709 [details] [diff] [review]
WIP, less scary, more compiling

Okay, never mind, I seem to have not actually rebuilt after I pushed this patch back on.
Attachment #606709 - Flags: review?(bugs)
Comment on attachment 606709 [details] [diff] [review]
WIP, less scary, more compiling

>+bool
>+OwnedByBindingManager(nsIDocument* aCurrentDoc, nsINode* aNode)
>+{
>+  return aNode->IsElement() &&
>+    !!aCurrentDoc->BindingManager()->GetBinding(aNode->AsElement());
No need for !!
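Carrying that nit through, the helper would presumably end up something like this (a sketch of the reviewed change, not the landed revision):

  bool
  OwnedByBindingManager(nsIDocument* aCurrentDoc, nsINode* aNode)
  {
    // GetBinding() returns a pointer, which already converts to bool in the
    // && expression, so the !! is unnecessary.
    return aNode->IsElement() &&
      aCurrentDoc->BindingManager()->GetBinding(aNode->AsElement());
  }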
Attachment #606709 - Flags: review?(bugs) → review+
https://hg.mozilla.org/mozilla-central/rev/3e1c64b26387
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Whiteboard: [snappy]