Right now, content nodes that are not part of the document tree have their document pointer set to null and have their script objects unrooted. This causes at least two problems:

* The ownerDocument property of the DOM nodes does not work correctly.
* User-decorated properties on the element's JS objects could be lost to GC (I think) while the element is not pointed to by anything in JS (but is a DOM descendant of something that is).

I think we could fix these problems by having the document keep a list of the roots of subtrees of content that it "owns" but that are not part of its tree. The content's document pointer could then be set correctly, and be unset when the document is going away. I'm not sure what else that would cause. It would be a bit of work, since we'd need to modify the node insertion and removal functions and |SetDocument| appropriately.

If we had such a list, should it hold owning references or not? The advantages of owning references would be that the DOM parent of something referenced by a JS object would not get lost, and that the content would not be required to unregister itself when it goes away. The disadvantage would be that the content would stay around until the document goes away.

Any thoughts?
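A toy model of that scheme (plain JavaScript, not Gecko code; all class and member names here are illustrative) might look like:

```javascript
// Model of the proposal: instead of nulling a removed node's document
// pointer, the document records the root of each detached subtree it owns,
// so ownerDocument stays correct for orphaned content.
class ModelNode {
  constructor(doc) {
    this.ownerDocument = doc;
    this.parentNode = null;
    this.childNodes = [];
  }
  appendChild(child) {
    if (child.parentNode) child.parentNode.removeChild(child);
    child.parentNode = this;
    this.childNodes.push(child);
    // Rejoining a tree: the child is no longer an orphan subtree root.
    this.ownerDocument.orphanRoots.delete(child);
    return child;
  }
  removeChild(child) {
    const i = this.childNodes.indexOf(child);
    if (i < 0) throw new Error("not a child");
    this.childNodes.splice(i, 1);
    child.parentNode = null;
    // The document keeps an owning reference to the detached root.
    this.ownerDocument.orphanRoots.add(child);
    return child;
  }
}

class ModelDocument {
  constructor() { this.orphanRoots = new Set(); }
  createNode() { return new ModelNode(this); }
}
```

In this model the `orphanRoots` set holds owning references, so a detached subtree survives until the document itself is torn down, which is exactly the trade-off described above.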
This is a good idea, dbaron. Brendan (or hyatt?) had suggested a "shadow document" that you could move orphaned content into, which may or may not be simpler (less change to the SetDocument code, but it still gets ownerDocument wrong).

I've been experimenting with getting rid of rooting and unrooting in SetDocument() altogether, and instead:

- rooting the script object on creation
- unrooting the script object when an element has a script object and its refcount == 1 (because the only reference will now be from JS; if the script object is unreachable in JS, then we'll destroy the element when the script object is finalized)
- re-rooting the script object when an element has a script object and its refcount == 2 (because now a new reference, in addition to the JS reference, is keeping the element alive, so the element must keep the script object alive)

I think this is similar to the approach XPConnect uses. It seems to me that this approach could work, and it is in many ways simpler; the downside, however, is that it may now require several GC passes to clean up a DOM tree. I'm not going to spend much more time pursuing this; if it doesn't pan out, I think we'll need to do something along the lines you've suggested.
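A minimal sketch of that refcount-driven state machine (illustrative JavaScript, not the actual Gecko/XPConnect code; `rooted` here just models whether the element is keeping its script object alive):

```javascript
// Sketch of the rooting scheme described above: root on creation, unroot
// when only the JS reference remains (refcount == 1), re-root when a
// second reference appears (refcount == 2).
class Element {
  constructor() {
    this.refcount = 0;
    this.scriptObject = null;
    this.rooted = false;
  }
  setScriptObject(obj) {
    this.scriptObject = obj;
    this.rooted = true; // rooted on creation
  }
  addRef() {
    this.refcount++;
    // A reference beyond the JS one now keeps the element alive, so the
    // element must keep the script object alive in turn.
    if (this.scriptObject && this.refcount === 2) this.rooted = true;
  }
  release() {
    this.refcount--;
    // Only the JS reference remains: unroot, and let the GC decide. If
    // the script object is unreachable, its finalizer destroys the element.
    if (this.scriptObject && this.refcount === 1) this.rooted = false;
  }
}
```

The "several GC passes" cost shows up because each unrooted element in a chain is only finalized after its script object is collected, which may in turn unroot the next element down.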
--> Future for now.
Status: NEW → ASSIGNED
Target Milestone: --- → Future
Pulling this back from the future, trying for mozilla0.9
Target Milestone: Future → mozilla0.9
This is something that will be worked on now that I'm converting the DOM code over to using XPConnect, but I'll push this off to mozilla1.0 to keep this bug off my immediate radar.
QA contact Update
QA Contact: janc → desale
Updating QA contact to Shivakiran Tummala.
QA Contact: desale → stummala
Bugs targeted at mozilla1.0 without the mozilla1.0 keyword moved to mozilla1.0.1 (you can query for this string to delete spam or retrieve the list of bugs I've moved)
Target Milestone: mozilla1.0 → mozilla1.0.1
Assignee: jst → dom_bugs
Status: ASSIGNED → NEW
This seems to be the same issue I'm seeing: JS decoration on an element is destroyed when the element is reparented. Is it possible to do an atomic reparent operation within the same document? Something along the lines of:

var x = document.getElementById("original");
var newparent = document.createElement("box");
var grandparent = x.parentNode;
grandparent.appendChild(newparent);
// Wait for it...
x.atomicReparentTo(newparent);

I know this function doesn't exist; with the existing API, x is removed from the document and then re-added to it, and that is what screws things up. What is the rationale behind http://www.w3.org/TR/DOM-Level-3-Core/core.html#ID-184E7107 ?
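For what it's worth, the requested move can be modeled on a plain object tree (hypothetical helper on a toy structure, not a real DOM API; the real appendChild is specified to remove the node from its old parent first):

```javascript
// Toy tree with a hypothetical atomicReparent: the node's parent pointer
// is never null during the move, unlike the DOM's remove-then-insert
// sequence that the spec link above mandates.
function makeNode(name) {
  return { name, parent: null, children: [] };
}

function append(parent, child) {
  parent.children.push(child);
  child.parent = parent;
}

function atomicReparent(node, newParent) {
  const old = node.parent;
  old.children.splice(old.children.indexOf(node), 1);
  newParent.children.push(node);
  node.parent = newParent; // no intermediate "out of the tree" state
}
```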
Component: DOM: HTML → DOM: Core & HTML
QA Contact: stummala → general