Closed Bug 877072 Opened 12 years ago Closed 10 years ago

Implement HTML Imports

Categories

(Core :: DOM: Core & HTML, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla35

People

(Reporter: marc.nieper, Assigned: gkrizsanits)

References

(Blocks 1 open bug)

Details

(Keywords: dev-doc-complete, feature)

Attachments

(8 files, 3 obsolete files)

59.92 KB, patch, mrbkap: review+
2.40 KB, patch, bzbarsky: review+
29.24 KB, patch, mrbkap: feedback+
24.93 KB, patch, mrbkap: review+
31.56 KB, patch, mrbkap: review+
51.06 KB, patch
3.27 KB, patch, mrbkap: review+
5.31 KB, patch, mrbkap: review+
User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:21.0) Gecko/20100101 Firefox/21.0 (Beta/Release) Build ID: 20130512193848

Steps to reproduce:

This is a feature request to implement the HTML Imports specification (https://dvcs.w3.org/hg/webcomponents/raw-file/tip/spec/imports/index.html), which allows including external HTML resources in a master document. This helps web application authors modularize their applications. Now that the new <template> element is supported by recent versions of Firefox, it makes sense to put these templates into external resources in order to keep the size of the master document manageable, for the same reason one does not want to include all of one's JS and CSS in the master document.

Actual results:

Firefox does not currently support HTML Imports; that is, <link rel="import" href="..."> does not work.

Expected results:

Firefox should support importing external HTML. The basic functionality shouldn't be too hard to implement, and it would be very helpful to web application authors. There is a corresponding issue for the Blink rendering engine, where work seems to have already started: http://code.google.com/p/chromium/issues/detail?id=240592.
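For reference, a minimal sketch of the usage pattern the spec describes (the file names and element IDs here are hypothetical, not taken from this bug):

```html
<!-- master.html: pulls in the import; assumes widgets.html is same-origin -->
<link rel="import" href="widgets.html">

<!-- widgets.html: the imported document. Its scripts run against the
     master document's window, so document.currentScript.ownerDocument
     is the way to reach the import document itself. -->
<template id="my-widget">
  <p>Widget markup lives here.</p>
</template>
<script>
  // Grab the template out of the imported document; the master page
  // can then clone and instantiate it whenever it needs to.
  var importDoc = document.currentScript.ownerDocument;
  var tmpl = importDoc.querySelector('#my-widget');
</script>
```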
Keywords: feature
OS: Linux → All
Hardware: x86_64 → All
Assignee: nobody → mrbkap
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: dev-doc-needed
Assignee: mrbkap → gkrizsanits
Attached patch imports (obsolete) — Splinter Review
This is just sort of a status patch. I think it's better to start talking about it early, and I also wanted to give you some time to start getting familiar with all the stuff I'm doing. What this patch can do: It loads an imported HTML document, with some validation, and the master document can access it via the LinkImport interface. It also executes script in the imported document using the context of the master document (the current script thing is not done yet). It does not leak, so there is some CC work done as well. But the whole thing is full of questions (see comments in patch) and the whole structure is very unpolished (see ImportManager's map). For scripts, the script/parser blocking bits are not there yet. Anyway, I guess we should talk it over some time whenever both of us have some free time for it.
Attachment #8339219 - Flags: feedback?(mrbkap)
Comment on attachment 8339219 [details] [diff] [review] imports Review of attachment 8339219 [details] [diff] [review]: ----------------------------------------------------------------- This looks like a great start! We'll have to figure out what to do with document.domain and I'm not sure the memory management model is quite right, but the rest of this looks really good. ::: content/base/public/nsIDocument.h @@ +2167,5 @@ > > virtual JSObject* WrapObject(JSContext *aCx, > JS::Handle<JSObject*> aScope) MOZ_OVERRIDE; > > + virtual already_AddRefed<nsIDocument> GetMasterDocument() = 0; You're going to have to bump the IID here. ::: content/base/src/ImportManager.cpp @@ +26,5 @@ > +NS_IMPL_ISUPPORTS0(ImportManager) > + > +ImportLoader::ImportLoader(ImportManager* aManager, nsIURI* aURI, nsIDocument* originDocument) > + : mURI(aURI) > + , mOriginDocument(do_GetWeakReference(originDocument)) Would it make more sense to pull out the necessary state here rather than holding the origin ("master"?) document around as a weakref? @@ +81,5 @@ > + > + nsCOMPtr<nsILoadGroup> loadGroup = doc->GetDocumentLoadGroup(); > + nsCOMPtr<nsIChannelPolicy> channelPolicy; > + nsCOMPtr<nsIContentSecurityPolicy> csp; > + // should we use the principal of the master document here? This is a *great* question. In fact, it seems like the subdocuments' principals should all be exactly equal to the master document's. However, the spec currently doesn't say anything about document.domain, so I think it'd be possible to do (in an import): (function() { let doc = document.currentScript.ownerDocument; doc.domain = '...'; /* What now? */ })(); So I think this is fine the way this is (we should just disallow setting document.domain on import documents). @@ +122,5 @@ > + NS_WARNING("ImportLoader cannot read from stream: no closure or writeCount"); > + return NS_ERROR_FAILURE; > + } > + > + nsresult rv = NS_OK; Nit: Declare at first use. 
@@ +154,5 @@ > + uint32_t totalRead; > + nsresult rv = aStream->ReadSegments(ImportLoader::StreamReaderFunc, > + (void*)this, aCount, &totalRead); > + NS_ENSURE_SUCCESS(rv, rv); > + return NS_OK; I'd just: return aStream->ReadSegments(...); @@ +234,5 @@ > + > + mDocument = do_QueryInterface(importDoc); > + nsCOMPtr<nsIDocument> master = doc->GetMasterDocument(); > + mDocument->SetMasterDocument(master); > + // do we need to set principal here on mDocument? Nope, the document gets its principal via the NodeInfoManager (which is set up in NS_NewDOMDocument). @@ +238,5 @@ > + // do we need to set principal here on mDocument? > + > + if (nsContentUtils::IsSystemPrincipal(principal)) { > + // do we need this? > + mDocument->ForceEnableXULXBL(); Yuck! I don't think XBL works nearly as well with HTML imports as web components do. I'd say to take this out and only add it back if people complain and find use-cases for it (also: a lot of the problems that HTML imports are meant to solve are solved in other ways in XBL). @@ +247,5 @@ > + mChannel->GetLoadGroup(getter_AddRefs(loadGroup)); > + > + rv = mDocument->StartDocumentLoad("import", mChannel, loadGroup, > + nullptr, getter_AddRefs(listener), > + false); // to reset or not to reset? Reading through the code, it looks like resetting is the way to go. @@ +271,5 @@ > +{ > + // Check if we have a loader for that URI, if not create one, > + // and start it up. > + // TODO: if it stops for some reason we should remove it, > + // so it can be restarted the next time someone needs it. I'm not entirely sure what you mean by this comment. Do we get this for free if the manager holds a strong reference to the loader? ::: content/base/src/ImportManager.h @@ +60,5 @@ > +private: > + // Once an ImportLoader is destructed, it should remove itself > + // from the map of the manager. Strong ref would make the > + // life time management harder, let's try weak ref for now. 
> + nsWeakPtr mManager; So, if I understand the memory model here: The master document holds a strong reference to the ImportManager. The HTML link element holds a strong reference to its ImportLoader. Import documents hold strong refs to their master documents. And everything else is a weak ref? I don't think this quite works because even if a link is removed from the (master) document, the spec still says to only load that import once, so I think the ImportManager will have to hold a strong ref to its Loaders so that they can outlive their link elements. That should be fine (we'll have a cycle through the document, but the CC should be able to clean it up; also, we already have the cycle through the link elements, so we're not actually adding any complexity except in your CC code). The strong ref from the Manager to the Loader might also mean that we can use a raw pointer here, but I'm not sure about that. @@ +63,5 @@ > + // life time management harder, let's try weak ref for now. > + nsWeakPtr mManager; > +}; > + > +class ImportManager : public nsISupports { Doesn't this have to implement nsWeakReference? (Also nit: { on its own line for top-level class definitions.) ::: content/html/content/src/HTMLLinkElement.cpp @@ +229,5 @@ > +void > +HTMLLinkElement::UpdateImport() > +{ > + // 1., link node should be attached to the document > + nsCOMPtr<nsIDocument> doc = GetOwnerDocument(); Shouldn't this be GetCurrentDocument()? @@ +233,5 @@ > + nsCOMPtr<nsIDocument> doc = GetOwnerDocument(); > + if (!doc) { > + // TODO: if the import link element was removed from the doc the loader > + // should remove itself from the manager, and when the link element > + // is append back to the same document, we might want to avoid re-loading it. I'm not sure that I follow this comment. @@ +468,5 @@ > > +already_AddRefed<nsIDocument> > +HTMLLinkElement::GetImport() > +{ > + if (!mImport || !mImport->mReady || mImport->mRemoved) Nit: {} around the body in DOM code.
Attachment #8339219 - Flags: feedback?(mrbkap) → feedback+
Thanks a lot for the feedback. I know it was very sloppy code, but I really needed someone to take a look at it to see if I'm going in the right direction, since most of the stuff I'm doing here is quite new to me. I did not think through the memory management/ownership model yet; I just hacked something together quickly because I wanted to see how hard the loading itself is. Now I think I can clean things up a bit. In the meantime it turned out that blocking scripts is easy; we'll see how much trouble I will have with the parser... (In reply to Blake Kaplan (:mrbkap) from comment #2) > > + mDocument->ForceEnableXULXBL(); > > Yuck! I don't think XBL works nearly as well with HTML imports as web > components do. I'd say to take this out and only add it back if people > complain and find use-cases for it (also: a lot of the problems that HTML > imports are meant to solve are solved in other ways in XBL). Yaay! I was so hoping that this would be the answer :) > > + // Check if we have a loader for that URI, if not create one, > > + // and start it up. > > + // TODO: if it stops for some reason we should remove it, > > + // so it can be restarted the next time someone needs it. > > I'm not entirely sure what you mean by this comment. Do we get this for free > if the manager holds a strong reference to the loader? Ok. So this part bothers me a bit in the spec. If my import fails to load, is the only thing we can do to reload the whole page? I've seen that in the new version of the spec we get some event if we gave up on the loading, but then how can we retry it? I thought removing the script and adding it back should retry the loading if it failed once, but maybe it's stupid; I guess there should be some retry function exposed instead. > I don't think this quite works because even if a link is removed from the > (master) document, the spec still says to only load that import once, so I Does it?
It only says that if it's in the import tree we don't reload it, but I could not find anything about what happens if you remove the link element from the DOM. I would interpret it as it no longer being part of the import tree. So we have to remove it from the ImportManager. But the link element would keep it alive, and if it is re-appended to the same DOM tree we wouldn't load it again. I might have missed something in the spec or missed some talks about this part, so if you find anything that contradicts my view on this let me know. > think the ImportManager will have to hold a strong ref to its Loaders so > that they can outlive their link elements). That should be fine (we'll have > a cycle through the document, but the CC should be able to clean it up, also > we already have the cycle through the link elements, so we're not actually > adding any complexity except in your CC code). The strong ref from the > Manger to the Loader might also mean that we can use a raw pointer here, but > I'm not sure about that. Actually I wanted to avoid strong pointers in the map to the loaders only to avoid having to add all the documents to the CC, but I realized exactly what you are pointing out, so that argument is gone. The other reason for making that ref weak is that at some point we want to clean it up from the import tree. If your interpretation of the spec holds, and throwing away the link element does not remove the entry from the map and free the ImportLoader, then it should be a strong ref for sure. Otherwise we would have to check in the Release function of the ImportLoader whether the refcount is 1 and if so remove it from the map... Anyway, I would really prefer a strong ref here, and for sure I don't want to stick to the current raw pointer version. > > + nsWeakPtr mManager; > > +}; > > + > > +class ImportManager : public nsISupports { > > Doesn't this have to implement nsWeakReference? whooops... 
yeah, let's just hope I can get rid of this whole thing by the cleanup, but thanks for pointing it out! > > + // 1., link node should be attached to the document > > + nsCOMPtr<nsIDocument> doc = GetOwnerDocument(); > > Shouldn't this be GetCurrentDocument()? > Yupp... I think it should. Oh, and one more thing by the way, the spec says: partial interface LinkImport { readonly attribute Document import; }; but I think that should be "Document?", no? Since it can return null in many cases...
(In reply to Blake Kaplan (:mrbkap) from comment #2) > > Would it make more sense to pull out the necessary state here rather than > holding the origin ("master"?) document around as a weakref? > This is actually the import-parent document; I'll rename it. I tried it because I liked the idea, but it didn't really work out; I don't think we can do this easily. We need the document for NS_CheckContentLoadPolicy, which can fail, so I cannot do it in the constructor. We also need to fetch the master document from it... So I think we should stick to this weakptr.
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #3) > > I don't think this quite works because even if a link is removed from the > > (master) document, the spec still says to only load that import once, so I So I read the spec again a few times, and put some thought into this. I think your understanding is the correct one, so I will just rewrite the memory model and removal part accordingly.
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #3) > Ok. So this part bothers me a bit in the spec. If my import fails to be > loaded the only thing we can do is reload the whole page? I've seen that in > the new version of the spec we get some event if we gave up on the loading, I'll send email to the list about this. > but that I think should be "Document?" no? Since it can return null in many > cases... Yep, that's a spec bug. (In reply to Gabor Krizsanits [:krizsa :gabor] from comment #4) > from it as well... So I think we should stick to this weakptr. Any reason to prefer a weak pointer over a strong pointer that we clear in OnStartRequest (and OnStopRequest, since I think it's possible to only receive an OnStopRequest)?
(In reply to Blake Kaplan (:mrbkap) from comment #6) > I'll send email to the list about this. Thanks! > Any reason to prefer a weakpointer over a strong pointer that we clear in > OnStartRequest (and OnStopRequest, since I think it's possible to only > receive an OnStopRequest)? I actually wondered too whether that would be better. I don't have a strong opinion here; we can do this too. The only thing that comes to my mind is that if there is ever a Reload function, then in the strong pointer version it should take an nsIDocument* argument. Which is not necessarily a bad thing, so if you prefer the strong pointer I'll give it a try.
Also, the element removed flag in the spec makes very little sense to me. The spec says we should set it on the import (it could be on the ImportLoader or the imported document as well), but it says nothing about what happens if an import belongs to multiple link elements. So the only place it would be reasonable to put this flag is on the link element, which by the way knows about it without this flag anyway... The intention was maybe to stop the loading if the removed flag is set, but that's not mentioned anywhere in the doc, and it would take a bit more work than just this flag to achieve that goal.
Attached patch imports (obsolete) — Splinter Review
A more serious version... The memory model should be better; it has proper CC, events, script blocking, more comments, even one test file, and lots of cleanups.
- need more tests (this is where I will put my focus)
- CORS check should probably be done independently from the load (see comment)
- removal bits are still a bit unclear to me
- parser should be put in speculative mode
- no circular reference check (the spec does not say anything about this part)
A few questions: this patch is getting big, do you prefer it to be broken up into smaller ones? This time I have not attached any interdiff since it's almost a rewrite, so that wouldn't help a lot. From this point on I will, and I want to keep the tests in a separate patch. I denoted comments with XXX where something is not clear to me or something still needs to be fixed.
Attachment #8339219 - Attachment is obsolete: true
Attachment #8350033 - Flags: feedback?(mrbkap)
Attached patch imports (obsolete) — Splinter Review
ImportManager.h was left out from the previous patch...
Attachment #8350033 - Attachment is obsolete: true
Attachment #8350033 - Flags: feedback?(mrbkap)
Attachment #8350035 - Flags: feedback?(mrbkap)
Freddy and I sent a mail about our concerns with the current CSP handling of imports. Bug filed as a result: https://www.w3.org/Bugs/Public/show_bug.cgi?id=24268
For the record, I started looking at the patch today. I'll have comments by tomorrow.
FYI the patch is fairly bitrotten; I have not updated that tree for a while... I've just started updating the patch. It should not matter a lot for the feedback, I just wanted to give you a heads-up in case you were trying to match it against the current tree...
Comment on attachment 8350035 [details] [diff] [review] imports Review of attachment 8350035 [details] [diff] [review]: ----------------------------------------------------------------- I'm liking this, I only have minor comments here. The approach and memory models make sense to me and the code is very clear. ::: content/base/src/ImportManager.cpp @@ +35,5 @@ > + > +NS_IMPL_CYCLE_COLLECTION_3(ImportLoader, > + mDocument, > + mLinks, > + mImportParent) // XXX: Since this field is set only temorrarily, we could leave it... I wouldn't bother adding mImportParent here. All it'll do is add another few edges to the CC graph (also: "temporarily"). @@ +197,5 @@ > + // XXX: spec does not say a word about the case when there are two imports in the > + // import tree, both referring to a 3rd import, the first one allows it to be loaded > + // by its CSP the other one does not. If the first one is loaded first the 3rd import > + // will be available for both, if the we start with the 2nd then for neither... > + int16_t shouldLoad = nsIContentPolicy::ACCEPT; // XXX: All the consumers of this API init shouldLoad, but do we really have to? We probably don't have to, but it doesn't hurt. @@ +209,5 @@ > + nsContentUtils::GetContentPolicy(), > + nsContentUtils::GetSecurityManager()); > + if (NS_FAILED(rv) || NS_CP_REJECTED(shouldLoad)) { > + if (NS_CP_REJECTED(shouldLoad)) { > + NS_WARNING("ImportLoader rejected by CSP"); NS_WARN_IF_FALSE? 
@@ +220,5 @@ > + nsCOMPtr<nsIChannelPolicy> channelPolicy; > + nsCOMPtr<nsIContentSecurityPolicy> csp; > + rv = principal->GetCsp(getter_AddRefs(csp)); > + if (NS_FAILED(rv)) { > + Error(); Rather than calling Error everywhere, I wonder if it wouldn't be cleaner (here and below) to have a class like: class AutoError { public: AutoError(ImportLoader* loader) : mLoader(loader), mPassed(false) {} ~AutoError() { if (!mPassed) { mLoader->Error(); } } void Pass() { mPassed = true; } private: ImportLoader* mLoader; bool mPassed; }; and simply call Pass() at the end of the function. I think this is mainly a matter of taste, though, so if you find it ugly or cumbersome, feel free to ignore me. ::: content/base/src/nsScriptLoader.cpp @@ +1004,4 @@ > > + // XXX: hack. We need somehow to update the mCurrentScript on the > + // master document, to prvide a way from within a script in the > + // imported document to get a reference to the imported document... Why do you consider this to be a hack? It seems pretty reasonable to me! Also, nits: this comment is mis-indented and "provide". ::: content/base/src/nsScriptLoader.h @@ +39,5 @@ > + } > + > + ~AutoCurrentScriptUpdater() > + { > + mScriptLoader->mCurrentScript = mOldScript; You can use swap() here to avoid a refcount. @@ +53,1 @@ > public: Nit: Blank line before public:
Attachment #8350035 - Flags: feedback?(mrbkap) → feedback+
(In reply to Blake Kaplan (:mrbkap) from comment #15) > > I'm liking this, I only have minor comments here. The approach and memory > models make sense to me and the code is very clear. > Thanks, I'm happy to hear that. I've done an update in the meantime and addressed the nits. > > + Error(); > > Rather than calling Error everywhere, I wonder if it wouldn't be cleaner > (here and below) to have a class like: > > class AutoError { > I didn't know what to do with all those Error() calls, and this class just worked out great, so yay! :) > ::: content/base/src/nsScriptLoader.cpp > @@ +1004,4 @@ > > > > + // XXX: hack. We need somehow to update the mCurrentScript on the > > + // master document, to prvide a way from within a script in the > > + // imported document to get a reference to the imported document... > > Why do you consider this to be a hack? It seems pretty reasonable to me! Eh... I removed this comment since there is another one a few lines below that I think is better. This comment belongs to an earlier version that was a bit hacky. I removed it, but then it crawled back during some merge I guess...
Thanks for doing all the work with web components in general and HTML imports in particular! Is there an ETA yet?
(In reply to Marc Nieper-Wißkirchen from comment #17) > Thanks for doing all the work with web components in general and HTML > imports in particular! Is there an ETA yet? Not really. Currently there are some ongoing discussions about switching from documents to document fragments for imports in the spec... That would be a drastic change, and until that decision is made, I'm not sure it makes sense to finish the current version.
Since it has been decided to keep using Document instead of DocumentFragment for imports, I can keep going with the implementation again. As it turned out there is a need for imports internally in GAIA, so I plan to land a proto version soon; it will not be complete but should be usable, and in the scenarios GAIA needs it for, it should act as the spec describes. The main missing part will be correct script execution order for nested imports. TODO:
- Blocking the parser. This part is very likely to change in the spec. The only reason it is specced like this currently is because of technical difficulties. I think for Gecko the best approach would be to put the parser into speculative mode and do some magic in the script runners for the correct execution order. I have some plan for this; this is the current biggest challenge.
- defaultView. For imports, defaultView should be the master's defaultView. I'm not sure how this is implemented right now, but the spec might change here as well...
- Blocking cookies. Haven't looked into it yet.
- Async flag. The flag would mean not blocking anything for that specific import. I haven't thought a lot about this part, and I'm not sure the spec is complete here. I'm a bit worried about script execution order again: if all the imports are async that's OK, but if we mix async and regular imports in the same import tree, that might bring even more complexity to an already complex case.
- Circular reference check. I have something for this. It should be part of the parser-blocking section and might need some changes. I don't want to land it in the first proto version.
- Style elements and external resources of imports should be input sources for the style processing model of the master.
Attached patch importsSplinter Review
I think the current version as it is would be fairly useful for GAIA. It would also be helpful for me if it were in the tree, and it would probably make the review easier from here on as well. What do you think? I want to disable nested imports for now, and I'm not sure whether LogSimpleConsoleError is the right way to warn consumers about it. MOZ_RELEASE_ASSERT seemed like a bit harsh an alternative... I was considering using a custom pref flag, but I think using the already existing webcomponents flag is a better option.
Attachment #8350035 - Attachment is obsolete: true
Attachment #8406178 - Flags: review?(mrbkap)
I will get to this review next week. Gabor, how much of a difference is there between this patch and the last one I looked at?
Not a lot, really. I accidentally left in CheckForCircularReference and IsBlocking even though I removed those from this version, so those should go. I've fixed the nits and added AutoError everywhere. SetImportScript/IsImportScript for blocking document.write is the main difference. I also disabled nested imports for now and added the webcomponents flag check.
Comment on attachment 8406178 [details] [diff] [review] imports Review of attachment 8406178 [details] [diff] [review]: ----------------------------------------------------------------- This looks good to me. Let's get it in (with my nits picked). Looking at the spec, it really looks like someone should go through and fix the grammar errors; I found it really hard to follow. ::: content/base/public/nsIDocument.h @@ +2254,1 @@ > private: Uber-nit: add an extra blank line above "private:". ::: content/base/public/nsIScriptElement.h @@ +17,5 @@ > #include "mozilla/CORSMode.h" > > #define NS_ISCRIPTELEMENT_IID \ > +{ 0xe60fca9b, 0x1b96, 0x4e4e, \ > + { 0xa9, 0xb4, 0xdc, 0x98, 0x4f, 0x88, 0x3f, 0x9c } } I don't think we technically have to change the IID here because you're not changing the layout of the interface. You can change it anyway though since it doesn't really matter. ::: content/base/src/ImportManager.cpp @@ +24,5 @@ > + > +namespace mozilla { > +namespace dom { > + > +class AutoError { I might stick this in an anonymous namespace given how generic this name is. @@ +29,5 @@ > +public: > + AutoError(ImportLoader* loader) > + : mLoader(loader) > + , mPassed(false) > + {} Nit: I believe the style guide has strict guidelines about where to put braces in classes. At the very least, the if statement below needs to be on multiple lines. @@ +248,5 @@ > + > +NS_IMETHODIMP > +ImportLoader::HandleEvent(nsIDOMEvent *aEvent) > +{ > + // XXX: this check maybe should go in a debug branch Yeah, I'd stick this whole block in an #ifdef DEBUG. ::: content/base/src/ImportManager.h @@ +23,5 @@ > + * loaded, the already existing ImportLoader is being used (without > + * loading the referred import document twice) and if necessary the > + * load/error is emitted on it immediately. > + * > + * Ownership model: Nit: trailing whitespace. 
@@ +60,5 @@ > +class ImportLoader : public nsIStreamListener > + , public nsIDOMEventListener > +{ > +friend class AutoError; > +friend class ImportManager; Nit: indentation, also blank line before public: ::: content/base/src/nsDocument.h @@ +1260,5 @@ > + > + virtual already_AddRefed<mozilla::dom::ImportManager> ImportManager() > + { > + if (mImportManager) { > + MOZ_ASSERT(!mMasterDocument, "Only the master document have ImportManager set"); have -> has. @@ +1265,5 @@ > + return nsRefPtr<mozilla::dom::ImportManager>(mImportManager).forget(); > + } > + > + if (mMasterDocument) > + return mMasterDocument->ImportManager(); Nit: we require {} in content/. ::: content/html/content/src/HTMLLinkElement.cpp @@ +249,5 @@ > + // For now imports are hidden behind a pref... > + return; > + } > + > + // 1., link node should be attached to the document. Super-Uber-Nit: No comma after the period, please. @@ +266,5 @@ > + // Until the script execution order is not sorted out for nested cases > + // let's not allow them. > + if (!doc->IsMasterDocument()) { > + nsContentUtils::LogSimpleConsoleError( > + NS_ConvertASCIItoUTF16("Nested imports are not supported yet"), NS_LITERAL_STRING @@ +365,5 @@ > + if (aAttribute == nsGkAtoms::href || > + aAttribute == nsGkAtoms::rel) { > + UpdateImport(); > + } > + } Nit: trailing whitespace. ::: content/html/document/src/nsHTMLDocument.cpp @@ +1322,5 @@ > + return false; > + } > + > + // Never allow document.write from a script of an import document. > + nsIScriptElement* currentScript = mScriptLoader->GetCurrentScript(); Is this in the spec? We should add it, if not.
Attachment #8406178 - Flags: review?(mrbkap) → review+
Uhh... I don't know how I missed this review in my bugmail :( I will try to get it in tomorrow; in the meantime I think I solved the script execution order part, so hopefully we will have a complete version as well (soonish). (In reply to Blake Kaplan (:mrbkap) from comment #23) > Looking at the spec, it really looks like someone should go through and fix > grammar errors, I found it to be really hard to follow. Indeed... I was thinking about fixing some of them but probably I'm not the right person for that either. > Nit: I believe the style guide has strict guideline about where to put > braces in classes. At the very least, the if statement below needs to be on > multiple lines. Alright, I will check and follow the guide. > > + // Never allow document.write from a script of an import document. > > + nsIScriptElement* currentScript = mScriptLoader->GetCurrentScript(); > > Is this in the spec? We should add it, if not. Hmm... this part never made it into the spec. I will file a bug about this.
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #24) > > > + // Never allow document.write from a script of an import document. > > > + nsIScriptElement* currentScript = mScriptLoader->GetCurrentScript(); > > > > Is this in the spec? We should add it, if not. > It seems I missed a bug on w3c about this: https://www.w3.org/Bugs/Public/show_bug.cgi?id=24623 This seems to be unnecessary; I only have to throw when the document is an import. Imports can freely call write on the master document.
Keywords: leave-open
Is there a reason there's a LinkImport interface instead of just a partial interface HTMLLinkElement?
Flags: needinfo?(gkrizsanits)
(In reply to Boris Zbarsky [:bz] from comment #29) > Is there a reason there's a LinkImport interface instead of just a partial > interface HTMLLinkElement? Good catch, I will fix that. I guess I just haven't noticed the change in the spec...
Flags: needinfo?(gkrizsanits)
There are not enough tests... But since this patch is hard and sensitive, with a lot of things to chew on, I thought it's better to file it sooner rather than later. In theory I think this approach should work, but a lot of testing is needed. No async flag yet.
Attachment #8426919 - Flags: feedback?(mrbkap)
> I guess I just haven't noticed the change in the spec... I hadn't even looked at the spec... But what the spec has right now is complete nonsense unless there's some other spec that defines LinkImport (which I doubt). Your updated patch looks good to me; please raise a spec issue.
Comment on attachment 8426866 [details] [diff] [review] part2: LinkImport should be partial interface. v1 r=me
Attachment #8426866 - Flags: review?(bzbarsky) → review+
(In reply to Boris Zbarsky [:bz] from comment #33) > > I guess I just haven't noticed the change in the spec... > > I hadn't even looked a the spec... > > But what the spec has right now is complete nonsense unless there's some > other spec that defines LinkImport (which I doubt). Your updated patch > looks good to me; please raise a spec issue. Only the latest editor's draft is usable; the default one is seriously outdated... http://w3c.github.io/webcomponents/spec/imports/ Luckily it defines LinkImport as a partial interface already, so no spec bug is needed for this.
The editor's draft is the one that's complete nonsense and the one we need a spec issue for. Please either raise it, or let me know and I will.
Flags: needinfo?(gkrizsanits)
(In reply to Boris Zbarsky [:bz] from comment #37) > The editor's draft is the one that's complete nonsense and the one we need a > spec issue for. Please either raise it, or let me know and I will. Sorry, my bad... I filed a bug: https://www.w3.org/Bugs/Public/show_bug.cgi?id=25895 Since I'm not very good at this, it would be nice if you could take a quick look at it and fix it if I'm talking nonsense there or missed anything. Thanks!
Flags: needinfo?(gkrizsanits)
That bug report looks perfect, thanks.
Depends on: 1016875
Depends on: 1016888
Depends on: 1019319
Are nested imports coming back at some point? From the discussion above it was hard for me to grasp why the feature was disabled or taken out again. If we had it back, Firefox would be almost on par with Chrome and thus usable for development. :-)
(In reply to Alessandro Meyer from comment #41) > Are nested imports coming back at some point? From the discussion above it > was hard for me to grasp why the feature was disabled or taken out again. If we had > it back, Firefox would be almost on par with Chrome and thus usable for > development. :-) The reason it was not enabled is that the script execution order was wrong. In fact the spec was broken too, so once we got the spec right I could finally implement it. (I don't think the current spec is final, but at least I know what the author wants, and it's sane and can be implemented.) The first working version is part3. I expect a few rounds of review there before I can land it, but hopefully it will happen in the next few weeks. Note that even after that patch we will still be behind Chrome, but it will be a big step forward for sure.
Comment on attachment 8426919 [details] [diff] [review] part3: script execution order for nested imports. v1 Review of attachment 8426919 [details] [diff] [review]: ----------------------------------------------------------------- Here's a bunch of comments. I haven't quite convinced myself of the correctness of ImportLoader::UpdateSpanningTree. I'm leaving the f? on myself so I remember to get back to it. In particular, I'm having trouble convincing myself that we know for sure that the new link is "branch" (can we please update that name to something that makes sense?) and that it's now our main referrer in all cases (to be clear, I'm not saying that the code is wrong, I'm just having a ton of trouble understanding how it works). Any hints would be appreciated. ::: content/base/public/nsIDocument.h @@ +2262,5 @@ > virtual void SetMasterDocument(nsIDocument* master) = 0; > virtual bool IsMasterDocument() = 0; > virtual already_AddRefed<mozilla::dom::ImportManager> ImportManager() = 0; > + // We keep track of the order of sub imports were added to the document. > + virtual nsCOMArray<nsINode>& GetSubImportLinks() = 0; Need to bump the IID here. I think we prefer nsTArray<nsCOMPtr<nsINode>> these days. Also, since this returns a reference I'd do s/Get//. I also wonder if it wouldn't be cleaner to return a const reference here and add an "AddSubImportLink" function for the one user who actually wants to modify the array. ::: content/base/src/ImportManager.cpp @@ +98,5 @@ > } > } > > void > +ImportLoader::AddScriptLoaderBeingBlocked(nsScriptLoader* aScriptLoader) How about "AddBlockedScriptLoader" and "RemoveBlockedScriptLoader". @@ +104,5 @@ > + MOZ_ASSERT(!mBlockedScriptLoaders.Contains(aScriptLoader), > + "Same scripts loader should be added only once"); > + aScriptLoader->AddExecuteBlocker(); > + if (mDocument) { > + // If the document ready we can just add the pending script loader If the document *is* ready. 
@@ +130,5 @@ > + MOZ_ASSERT(mLinks.Contains(aNode)); > + nsCOMArray<nsINode> referrers; > + referrers.AppendObject(aNode); > + > + nsCOMPtr<nsINode> node = aNode; Yeah, we should really attempt to use raw pointers here. Nothing you're doing here can cause nodes to move around or get destroyed. @@ +139,5 @@ > + // Then walking up the main referrer chain and append each link > + // to the front of the array. > + node = referrersLoader->GetMainReferrer(); > + MOZ_ASSERT(node); > + referrers.InsertObjectAt(node, 0); How long do we expect these arrays to be? Adding to the beginning like this is O(n^2) so generally something to avoid. I'd add to the end and reverse the array before returning. @@ +154,5 @@ > + } > + > + if (mLinks.Length() == 1) { > + // If this is the first referrer, let's mark it. > + mMainReferrer = 0; Right now, mMainReferrer seems like it'll always be mLinks.Length() - 1. Can that change? @@ +164,5 @@ > + // junction point between the two referrer chain. Their order in the subimport list > + // of that document will determine if we have to update the spanning tree or this > + // new edge changes nothing in the script execution order. > + nsCOMArray<nsINode> mainReferrerChain = GetReferrerChain(mLinks[mMainReferrer]); > + nsCOMArray<nsINode> newReferrerChain = GetReferrerChain(aNode); Same questions here about the need for strong refs. @@ +167,5 @@ > + nsCOMArray<nsINode> mainReferrerChain = GetReferrerChain(mLinks[mMainReferrer]); > + nsCOMArray<nsINode> newReferrerChain = GetReferrerChain(aNode); > + uint32_t max = std::min(mainReferrerChain.Length(), newReferrerChain.Length()); > + uint32_t lastCommonImportAncestor = 0; > + for (uint32_t i = 0; i < max && mainReferrerChain[i]->OwnerDoc() == newReferrerChain[i]->OwnerDoc(); i++) { It should be impossible for |i| to reach |max| here, right? Want to assert that? 
@@ +227,5 @@ > > void > ImportLoader::RemoveLinkElement(nsINode* aNode) > { > mLinks.RemoveObject(aNode); Don't we have to UpdateSpanningTree here as well? @@ +437,5 @@ > nsCOMPtr<nsIDocument> master = mImportParent->MasterDocument(); > mDocument->SetMasterDocument(master); > > + for (uint32_t i = 0; i < mBlockedScriptLoaders.Length(); i++) { > + printf("ImportLoader::OnStartRequest\n"); This should probably go. ::: content/base/src/ImportManager.h @@ +83,5 @@ > bool IsReady() { return mReady; } > bool IsStopped() { return mStopped; } > bool IsBlocking() { return mBlockingScripts; } > + > + already_AddRefed<ImportManager> Manager() { Is there any reason for these (here and below) getters to return strong references? My quick glance through the callers seems to suggest that we can return weak refs (depending on the import manager's strong ref for the lifetimes in question). @@ +102,5 @@ > { > return mReady ? nsCOMPtr<nsIDocument>(mDocument).forget() : nullptr; > } > > + // There is only one referring link that is marked as branch per We really need to come up with a better word for "branch". It makes no sense at all. I'd propose something like "primary link", but I'm also open to suggestions. @@ +104,5 @@ > } > > + // There is only one referring link that is marked as branch per > + // imports. This is the one that has to be taken into account when > + // scrip execution order is determined. Links marked as branch form "script" @@ +107,5 @@ > + // imports. This is the one that has to be taken into account when > + // scrip execution order is determined. Links marked as branch form > + // a spanning tree in the import graph. (Eliminating the cycles and > + // multiple parents.) This spanning tree is recalculated every time > + // a new import link is added to the manager. We'll have to see how expensive these recalculations are. If dynamically adding lots of links is rare, then we should be able to get away with this, right? 
@@ +146,5 @@ > // While the document is being loaded we must block scripts > // on the import parent document. > void BlockScripts(); > void UnblockScripts(); > + // Returns an array of links that forms a referring chain from Uber-nit: blank line before this comment, please. @@ +149,5 @@ > void UnblockScripts(); > + // Returns an array of links that forms a referring chain from > + // the master document to this import. Each links in the array > + // is marked as main referrer in the list. > + nsCOMArray<nsINode> GetReferrerChain(nsINode* aNode); My understanding is that we prefer nsTArray<nsCOMPtr<nsINode>> these days. I'd also probably take the array to be filled as an out parameter to avoid unnecessary copies. @@ +169,5 @@ > // List of the LinkElements that are referring to this import > // we need to keep track of them so we can fire event on them. > + // List of pending ScriptLoaders that are waiting for this import > + // to finish. > + nsCOMArray<nsScriptLoader> mBlockedScriptLoaders; Nit: whitespace and nsCOMArray -> nsTArray. @@ +198,5 @@ > already_AddRefed<ImportLoader> Get(nsIURI* aURI, nsINode* aNode, > nsIDocument* aOriginDocument); > + // It finds the predecessor for an import link node that runs its > + // scripts the latest among its predecessors. > + nsRefPtr<ImportLoader> GetNearestPredecessor(nsINode* aNode); This will do too much refcounting. You should be able to just return a weak reference here right? All of the ImportLoaders should be strongly held throughout all of the operations that might occur here. If you have to return a strong ref, please use already_AddRefed. ::: content/base/src/nsDocument.h @@ +1705,5 @@ > CSPErrorQueue mCSPWebConsoleErrorQueue; > > nsCOMPtr<nsIDocument> mMasterDocument; > nsRefPtr<mozilla::dom::ImportManager> mImportManager; > + nsCOMArray<nsINode> mSubImportLinks; Same nsCOMArray -> nsTArray comments here.
(I do wonder if we shouldn't have opened a new bug for this work.)
Comment on attachment 8426919 [details] [diff] [review] part3: script execution order for nested imports. v1 Review of attachment 8426919 [details] [diff] [review]: ----------------------------------------------------------------- I think I convinced myself of the algorithm's general correctness.
Attachment #8426919 - Flags: feedback?(mrbkap)
Comment on attachment 8426919 [details] [diff] [review] part3: script execution order for nested imports. v1 Review of attachment 8426919 [details] [diff] [review]: ----------------------------------------------------------------- (sorry for the bugspam, misclicked there)
Attachment #8426919 - Flags: feedback+
(In reply to Blake Kaplan (:mrbkap) from comment #43) > myself so I remember to get back to it. In particular, I'm having trouble > convincing myself that we know for sure that the new link is "branch" (can > we please update that name to something that makes sense?) and that it's now > our main referrer in all cases (to be clear, I'm not saying that the code is > wrong, I'm just having a ton of trouble understanding how it works). Any > hints would be appreciated. It took me a while too. It's a good thing you have doubts: while trying to explain why this approach works, I found a bug in it... Let's start with the name. Primary link sounds fine to me. And let's call a chain of primary links from the master to an actual import document a primary path. We also have a primary parent, which means: among all the import parents of an import, the one that should be loaded first. Given the whole import graph it's easy to see which links are primary. It's less trivial to see whether our algorithm 1. will in the end converge to this tree of primary links, and 2. does not accidentally execute some script sooner than it should while doing that. Because we learn about new edges/nodes of the import graph quite nondeterministically, the algorithm marks 'random' links as primary, so this part is not easy to see... One more important thing that the spec does not mention but should: dynamic changes during this calculation (like moving around or removing links) are ignored. I think we can treat those cases as undefined behavior. This is the reason why I use ownerDoc instead of currentDocument in most places. What I'm trying to do is this: if a new import link is added and it is the first referrer to that import, I just mark it as the primary link; otherwise I check whether this new link should be marked or not. It has to be marked if the new parent that comes with the new link should be loaded sooner than the current primary parent. 
For that I go up to the common ancestor and compare the order of the two links that lead to the two different paths. I made a mistake here. I assumed I would not have to change any other primary link, but actually I have to recalculate primary links for all the subimports of the import this new link refers to. For the rest, it's easy to see that if an import is not reachable through this new link, its primary link should not be changed by it, so 1. should be fine with this change. About 2.: let's say an import is finally unblocked and can run its scripts. That means all its subimports are loaded and their scripts are executed. Plus, at that stage all its known predecessors are finished with all their subimports. Can we after this find a link to a new import that should have been executed sooner? Let's say a new link changed the setup, and now there is a new predecessor we didn't take into account. We have to go up the parent chain until we find a common ancestor with the import we executed too early. The link that belongs to the new import's parent chain cannot come before the link that leads to the one we executed early, because then the parser of this common ancestor would have detected it earlier, and all of its subimports, including this new import, would have been done already. And if it comes later, then it cannot be a predecessor. > > + node = referrersLoader->GetMainReferrer(); > > + MOZ_ASSERT(node); > > + referrers.InsertObjectAt(node, 0); > > How long do we expect these arrays to be? Adding to the beginning like this > is O(n^2) so generally something to avoid. I'd add to the end and reverse > the array before returning. They should not be that long, and I hope we move array elements with a single memcpy... but you are absolutely right, I will do the reverse approach. I was sloppy here... > Right now, mMainReferrer seems like it'll always be mLinks.Length() - 1. Can > that change? 
If I add a link that is not a new primary link (so it changes nothing), then mLinks' length will increase but mMainReferrer will not. Also, with the bugfix I mentioned earlier, it can in theory point to any other existing link after an update. > > void > > ImportLoader::RemoveLinkElement(nsINode* aNode) > > { > > mLinks.RemoveObject(aNode); > > Don't we have to UpdateSpanningTree here as well? I think it's not mentioned in the spec, but we only care about the static setup; removing a link should not change the script execution order. Which might be a stupid design... At the least I should add a comment about it. > Is there any reason for these (here and below) getters to return strong > references? My quick glance through the callers seems to suggest that we can > return weak refs (depending on the import manager's strong ref for the > lifetimes in question). In COM, by default every getter should do an addref. In Gecko I have no idea what rule we should follow. Probably most of these getters could just return a regular pointer. I will change them to do so, if you support that. > @@ +107,5 @@ > > + // imports. This is the one that has to be taken into account when > > + // scrip execution order is determined. Links marked as branch form > > + // a spanning tree in the import graph. (Eliminating the cycles and > > + // multiple parents.) This spanning tree is recalculated every time > > + // a new import link is added to the manager. > > We'll have to see how expensive these recalculations are. If dynamically > adding lots of links is rare, then we should be able to get away with this, > right? Well... the parsers add links in a pretty nondeterministic order. If we need to change the primary link, then we have to update all sub-imports as well, which makes it theoretically quite expensive (a trivial overestimation would be O(n^3)). In practice it should typically be a lot lower. 
I'm fairly positive that compared to network traffic and script execution, plus parsing, tree building, etc., it won't be a bottleneck. But it will probably require some performance analysis to see if we need to tune it any further. For that we first need some really big examples. Thanks for all the comments, I will fix them and try to fix the bug I found in my approach.
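The primary-link decision described in the comments above can be sketched in a simplified, self-contained model. `Link`, `NewChainShouldBecomePrimary`, and the integer document ids are hypothetical stand-ins for the real HTMLLinkElement/nsIDocument machinery; this is an illustration of the idea, not the actual ImportLoader::Updater code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical, simplified model of an import link: the id of the document
// that contains it and its position in that document's sub-import link list.
// The real code walks HTMLLinkElements and nsIDocuments instead.
struct Link {
  int ownerDoc;   // document containing this <link rel="import">
  int posInDoc;   // index in that document's sub-import link list
};

// Decide whether the referrer chain through a newly added link should become
// the primary ("main referrer") path: walk both chains down from the master
// document to the junction document they share, then compare the order of
// the two diverging links in that document. The link the parser saw first
// wins, because its import's scripts must run first.
bool NewChainShouldBecomePrimary(const std::vector<Link>& aMainChain,
                                 const std::vector<Link>& aNewChain) {
  size_t max = std::min(aMainChain.size(), aNewChain.size());
  size_t junction = 0;
  for (size_t i = 0;
       i < max && aMainChain[i].ownerDoc == aNewChain[i].ownerDoc; ++i) {
    junction = i;
  }
  // At the junction both chains are still in the same owner document but the
  // links themselves differ; earlier position means earlier execution.
  return aNewChain[junction].posInDoc < aMainChain[junction].posInDoc;
}
```

If the new chain wins, the real algorithm then has to propagate the update to all subimports reachable through the new link, which is the bugfix discussed above.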
(In reply to Blake Kaplan (:mrbkap) from comment #44) > (I do wonder if we shouldn't have opened a new bug for this work.) In my mind, support for nested imports should be a basic requirement; that's why I kept it here... but given the complexity, we probably should have. At this point I think I will just stick with this bug.
I broke the changes into two parts: this one addresses all the issues you mentioned, and the next one contains the changes I had to make to fix the bug I found in the algorithm (with a test). The intention is to merge these three patches (the first one is the original patch you f+ -ed), but I thought it would be easier to review the changes this way than as one huge new patch.
Attachment #8472997 - Flags: review?(mrbkap)
I ended up decoupling the whole updating process from the rest of the ImportLoader into an inner class. I tried to add as much explanation in the header file as I could. I hope this version is a bit easier to follow than the previous one. The only real change is that if we have to do an update in the spanning tree, we update all the subimports too. Details are in the Updater class, and an example of why it is needed is in the test file.
Attachment #8473002 - Flags: review?(mrbkap)
Filing the merged version as well, maybe it helps...
Comment on attachment 8472997 [details] [diff] [review] part4: nit fixes for the script order patch Review of attachment 8472997 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/base/src/ImportManager.cpp @@ +147,4 @@ > } > + // The reversed order is more usefull for consumers. > + // XXX: This should probably go to nsTArray or some generic utlity > + // lib for our containers that we don't have... Eww! Indeed. Please file a followup on adding one. Also: "useful" and "utility". @@ +151,5 @@ > + uint32_t l = aResult.Length(); > + for (uint32_t i = 0; i < l / 2; i++) { > + nsCOMPtr<nsINode> tmp = aResult[i]; > + aResult[i] = aResult[l - i - 1]; > + aResult[l - i - 1] = tmp; This would be better written as: aResult[i].swap(aResult[l - i - 1]); to avoid lots of refcounting. It'd be tempting to use mozilla::Swap here, but nsCOMPtr<> doesn't have a move constructor. We could fix that by specializing Swap for COMPtrs, but that doesn't seem worth it. @@ +175,5 @@ > // junction point between the two referrer chain. Their order in the subimport list > // of that document will determine if we have to update the spanning tree or this > // new edge changes nothing in the script execution order. > + nsTArray<nsCOMPtr<nsINode> > mainReferrerChain; > + nsTArray<nsCOMPtr<nsINode> > newReferrerChain; We can now use nsTArray<nsCOMPtr<nsINode>> (note the lack of a space at the end).
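The swap-based reversal suggested in the review can be sketched like this. `std::shared_ptr` is used here as a stand-in for nsCOMPtr: swapping the smart pointers exchanges only the raw pointers, so no per-element AddRef/Release (or control-block traffic) happens during the reversal.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Minimal sketch of reversing an array of owning smart pointers in place.
// Each swap exchanges raw pointers without touching the refcount, unlike
// the temporary-copy version, which refcounts every element twice.
template <typename T>
void ReverseInPlace(std::vector<std::shared_ptr<T>>& aResult) {
  size_t l = aResult.size();
  for (size_t i = 0; i < l / 2; ++i) {
    aResult[i].swap(aResult[l - i - 1]);
  }
}
```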
Attachment #8472997 - Flags: review?(mrbkap) → review+
Comment on attachment 8473002 [details] [diff] [review] part5: bugfix in the script order algorithm Review of attachment 8473002 [details] [diff] [review]: ----------------------------------------------------------------- I have a lot of nits and a bunch of comments about the refcounting, but overall I think I understand the algorithm and think it makes sense. r=me with my comments addressed. ::: content/base/src/ImportManager.cpp @@ +58,5 @@ > +//----------------------------------------------------------------------------- > + > +void > +ImportLoader::Updater::GetReferrerChain(nsINode* aNode, > + nsTArray<nsCOMPtr<nsINode> >& aResult) Looking at the uses of this function, I don't think you need the strong refs in the array, so this could just be an nsTArray<nsINode *> (and then you could use mfbt::Swap, below). In general, I think you can pass around raw nsINode pointers while doing these computations because all of the nodes in question are in the tree and no DOM mutations can happen while the algorithms run. @@ +78,5 @@ > + aResult.AppendElement(node); > + } > + > + // The reversed order is more usefull for consumers. > + // XXX: This should probably go to nsTArray or some generic utlity "useful" I was surprised that this doesn't exist either. I guess the thing to do would be to add it to nsTArray and file a bug. That being said, we'll have to do some additional work to make swap()s of nsCOMPtr<>s not suck -- probably by making mfbt::Swap aware of them. See also bug 1015114, which would have the same effect, but seems to be blocked. @@ +85,5 @@ > + uint32_t l = aResult.Length(); > + for (uint32_t i = 0; i < l / 2; i++) { > + nsCOMPtr<nsINode> tmp = aResult[i]; > + aResult[i] = aResult[l - i - 1]; > + aResult[l - i - 1] = tmp; This should do: aResult[i].swap(aResult[l - i - 1]); to avoid refcounting everything an additional two times. 
@@ +199,5 @@ > + MOZ_ASSERT(doc->HasSubImportLink(aCurrentLink)); > + uint32_t idx = doc->IndexOfSubImportLink(aCurrentLink); > + nsINode* next = doc->GetSubImportLink(idx + 1); > + if (next) > + { Nit: if (next) { @@ +200,5 @@ > + uint32_t idx = doc->IndexOfSubImportLink(aCurrentLink); > + nsINode* next = doc->GetSubImportLink(idx + 1); > + if (next) > + { > + // Note: If we found an already vidited link that means the parent links has "visited" @@ +231,5 @@ > + } > + Updater& updater = loader->mUpdater; > + neededUpdate = updater.ShouldUpdate(aPath); > + if (neededUpdate) { > + updater.UpdateMainReferrer(loader->mLinks.IndexOf(current)); Maybe pull loader->mLinks.IndexOf(current) out of the loop? @@ +706,5 @@ > +{ > + HTMLLinkElement* linkElement = static_cast<HTMLLinkElement*>(aLink); > + nsCOMPtr<nsIURI> uri = linkElement->GetHrefURI(); > + nsRefPtr<ImportLoader> loader; > + mImports.Get(uri, getter_AddRefs(loader)); return mImports.GetWeak(uri)? ::: content/base/src/ImportManager.h @@ +70,5 @@ > + class Updater { > + > + public: > + Updater() {} > + void Init(ImportLoader* aLoader) { mLoader = aLoader; } Why not just take the loader as a parameter to the constructor? @@ +71,5 @@ > + > + public: > + Updater() {} > + void Init(ImportLoader* aLoader) { mLoader = aLoader; } > + // After a new link is added that referrs to this import, we "refers" Uber-nit: Please add a blank line before these block comments. @@ +83,5 @@ > + void UpdateSpanningTree(nsINode* aNode); > + > + private: > + // Returns an array of links that forms a referring chain from > + // the master document to this import. Each links in the array "each link" in the array. @@ +86,5 @@ > + // Returns an array of links that forms a referring chain from > + // the master document to this import. Each links in the array > + // is marked as main referrer in the list. 
> + void GetReferrerChain(nsINode* aNode, nsTArray<nsCOMPtr<nsINode> >& aResult); > + // Once we found a new referrer path to our import, we have to see if "Once we find" @@ +96,5 @@ > + // edge from link1 to link2 if link2 is a subimport in the import document > + // of link1. > + // If the ImportLoader that aCurrentLink points to didn't need to be updated > + // the algorithm skips its "children" (subimports). Note, that this graph can > + // also contain circles, aVisistedLinks is used to track the already visited circles -> cycles. @@ +242,5 @@ > NS_DECL_CYCLE_COLLECTION_CLASS(ImportManager) > > // Finds the ImportLoader that belongs to aImport in the map. > ImportLoader* Find(nsIDocument* aImport); > + Uber-nit: nuke the trailing whitespace.
Attachment #8473002 - Flags: review?(mrbkap) → review+
Thanks for the quick review! And sorry for my spelling mistakes... After a try push it turned out I have random crashes that I had not noticed locally yet :( I'm working on it... (In reply to Blake Kaplan (:mrbkap) from comment #53) > Maybe pull loader->mLinks.IndexOf(current) out of the loop? I cannot, because |current| is different in each iteration. > > + void Init(ImportLoader* aLoader) { mLoader = aLoader; } > > Why not just take the loader as a parameter to the constructor? If I did that, mUpdater would have to be a pointer and I would have to new/delete it in the ctor/dtor respectively. If you prefer that version I can switch to it; personally I find this pattern simpler.
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #54) > If I did that, mUpdater would have to be a pointer and I would > have to new/delete it in the ctor/dtor respectively. If you prefer that > version I can switch to it; personally I find this pattern > simpler. Well, I was thinking you could (in the initialization list in the constructor) do: mUpdater(MOZ_THIS_IN_INITIALIZER_LIST()) and have the best of both worlds (this is safe in this case because we know that we're only storing the pointer, not using it).
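A minimal sketch of that pattern, with hypothetical Loader/Updater classes standing in for ImportLoader and its inner Updater; in Gecko, MOZ_THIS_IN_INITIALIZER_LIST() essentially wraps the bare `this` shown here to silence compiler warnings about using it before construction finishes.

```cpp
#include <cassert>

// The inner helper object is a plain member (no separate heap allocation)
// but still knows its owner, because the owner passes `this` in the
// constructor's initializer list. This is safe only because Updater merely
// stores the pointer and does not call into the owner during construction.
class Loader {
 public:
  class Updater {
   public:
    explicit Updater(Loader* aLoader) : mLoader(aLoader) {}
    Loader* Owner() const { return mLoader; }

   private:
    Loader* mLoader;  // weak back-pointer; the Loader owns the Updater
  };

  Loader() : mUpdater(this) {}
  Updater& GetUpdater() { return mUpdater; }

 private:
  Updater mUpdater;
};
```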
(In reply to Blake Kaplan (:mrbkap) from comment #55) > mUpdater(MOZ_THIS_IN_INITIALIZER_LIST()) Right, thanks. I think I've overcomplicated something in my head again... I fixed it. Also, I found two problems with the older tests; I don't know how I missed them before flagging the previous patch for review. I think I accidentally ran the tests on a different branch... Anyway, I fixed both problems. I think they are worth a look before I commit them.
Find(nsIDocument* aImport) relies on the fact that we can look up an ImportLoader based on the URI of the document. But if there was a redirection, the original key and the document's URI after redirection are different. So if there was a redirection, I simply add the loader to the map again with the new URI as the key.
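A simplified model of that fix; the `ImportMap`, `FakeLoader`, and `OnRedirect` names here are illustrative stand-ins, not the real ImportManager API.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// The import map is keyed by URI, but after an HTTP redirect the document's
// final URI differs from the key the load was started under. Registering the
// same loader a second time under the post-redirect URI keeps lookups by
// document URI working.
struct FakeLoader {
  std::string originalURI;
};

class ImportMap {
 public:
  void Add(const std::string& aURI, std::shared_ptr<FakeLoader> aLoader) {
    mImports[aURI] = std::move(aLoader);
  }

  // Called when the channel reports a redirect: alias the existing loader
  // under the new URI so a later Find() by the document's URI succeeds.
  void OnRedirect(const std::string& aOldURI, const std::string& aNewURI) {
    auto it = mImports.find(aOldURI);
    if (it != mImports.end()) {
      mImports[aNewURI] = it->second;
    }
  }

  FakeLoader* Find(const std::string& aURI) const {
    auto it = mImports.find(aURI);
    return it != mImports.end() ? it->second.get() : nullptr;
  }

 private:
  std::unordered_map<std::string, std::shared_ptr<FakeLoader>> mImports;
};
```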
Attachment #8480579 - Flags: review?(mrbkap)
When an import is blocked by a predecessor and the updater is notified about a new main referrer, we should no longer be blocked by the old predecessor but by the new one instead. The problem is that it would be hard to find the old predecessor that is blocking us at this point, so I cache it at the moment it starts blocking.
Attachment #8480585 - Flags: review?(mrbkap)
Attachment #8480579 - Flags: review?(mrbkap) → review+
Attachment #8480585 - Flags: review?(mrbkap) → review+
The patches only make sense merged, so I pushed them as one patch. https://hg.mozilla.org/integration/mozilla-inbound/rev/67b286d214e3
Sorry, had to back out these changes for test failures like https://tbpl.mozilla.org/php/getParsedLog.php?id=47161196&tree=Mozilla-Inbound
Seems like the moz.build file moved one level up and the tests could not find my files any more... All trees are closed; I will try again later.
Seems like _sometimes_ fetching an import fails, and then of course any import-related test can fail. Normally this happens super rarely (I have seen only one case so far, on OSX on inbound, out of all the pushes, and never locally), _except_ on B2G ICS Emulator Opt builds, where it seems to be a permanent failure for some reason. Investigating this problem might take a lot of time, I'm afraid, since I have no idea about B2G ICS Emulator Opt builds or how I could guard my tests against intermittent failures that are most likely infrastructure-related issues :( What do we do for XHR tests when the transaction fails for some reason in a test?
Maybe someone who is familiar with CSP on B2G can help. The return value of 'channel->AsyncOpen(corsListener, nullptr)' seems odd. BTW, I believe reusing the name 'failed' in those two tests is problematic. Actually, the following code: var failed = 0; function failed() { failed ++; } will leave 'failed' bound to 0, because the function declaration is processed first and the 'var failed = 0' assignment then overwrites it at runtime. So the 'onerror' handlers in those tests may not actually work.
(In reply to SUN Haitao from comment #65) > Maybe someone who is familiar with CSP on B2G can help. The return of > 'channel->AsyncOpen(corsListener, nullptr)' seems odd. What makes it even more odd is that it only happens on OPT builds... > > BTW, I believe the reusing of name 'failed' in those two tests is > problematic. Actually, the following code: > Yeah, I've noticed my stupid mistake there :(
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #66) > What makes it even more odd is that it only happens on OPT builds... Ah, I see, it's failing on debug builds as well. Maybe it's some custom security setting for file URIs in the emulator.
After some logging I placed an assertion in MissingRequiredTabChild, and that seems to be the problem. It's always the 4th import channel that fails like this from AsyncOpen, and only on the emulator. Who might know more about this stuff? Is this HttpChannelChild and TabChild e10s stuff? I'm trying to figure out where this GetCallback is implemented (http://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/HttpChannelChild.cpp#1233), why it fails, and what that means. Seems like a platform limitation handled incorrectly; I'm just not sure yet what that limitation is... Maybe we try to open yet another subprocess and it fails? 05:29:48 INFO - 0 libxul.so!mozilla::net::MissingRequiredTabChild(mozilla::dom::TabChild*, char const*) [NeckoCommon.h : 127 + 0x2] 05:29:48 INFO - r4 = 0x00000000 r5 = 0x42d79188 r6 = 0x00000000 r7 = 0x80070057 05:29:48 INFO - r8 = 0xbea11478 r9 = 0x000022b8 r10 = 0xbea113f4 fp = 0x00000000 05:29:48 INFO - sp = 0xbea10d38 lr = 0x409af0b3 pc = 0x409af0b4 05:29:48 INFO - Found by: given as instruction pointer in context 05:29:48 INFO - 1 libxul.so!mozilla::net::HttpChannelChild::AsyncOpen(nsIStreamListener*, nsISupports*) [HttpChannelChild.cpp:5a5360e1f47b : 1238 + 0x5] 05:29:48 INFO - r4 = 0x40258000 r5 = 0x00000000 r6 = 0x00000000 r7 = 0x80070057 05:29:48 INFO - r8 = 0xbea11478 r9 = 0x000022b8 r10 = 0xbea113f4 fp = 0x00000000 05:29:48 INFO - sp = 0xbea10d50 pc = 0x409b1f8b 05:29:48 INFO - Found by: call frame info 05:29:48 INFO - 2 libxul.so!mozilla::dom::ImportLoader::Open() [ImportManager.cpp:5a5360e1f47b : 511 + 0xf] 05:29:48 INFO - r4 = 0x451790c0 r5 = 0x44bbecc0 r6 = 0x446968d0 r7 = 0x00000000 05:29:48 INFO - r8 = 0xbea114f8 r9 = 0x00000000 r10 = 0x00000000 fp = 0x40202550 05:29:48 INFO - sp = 0xbea114c0 pc = 0x4157dbd9 05:29:48 INFO - Found by: call frame info
Flags: needinfo?(mrbkap)
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #68) > trying to figure out where is this GetCallback implemented I've found it. I wonder if the tabChild belongs to the channel or to the load group, because if it's the latter, then this might have something to do with the fact that I create new load groups for the subimports.
Blocks: 1061469
I think I will just turn off the tests on the emulator for now, and will file a follow-up.
Let's continue this in bug 1071698 and close this one.
Flags: needinfo?(mrbkap)
Keywords: leave-open
(In reply to Ryan VanderMeulen [:RyanVM UTC-4] from comment #73) > Backed out for various test failures. "TEST-UNEXPECTED-PASS | /html-imports/fetching/already-in-import-map.html | If LOCATION is already in the import map, let IMPORT be the imported document for LOCATION and stop. (2) - expected FAIL Return code: 1" It seems to me that this previously failing test was fixed by the patch. Where should I change which flag to stop the test from expecting failure here?
http://mxr.mozilla.org/mozilla-central/source/testing/web-platform/meta/html-imports/fetching/already-in-import-map.html.ini I'm not sure whether you can just remove that .ini file or just the one annotation.
Flags: needinfo?(james)
You can just remove the whole file; if there aren't any failing tests in the file it's unneeded.
Flags: needinfo?(james)
Thanks! The other test failure seems like a very rare intermittent failure that I've seen only twice so far (both on inbound...), and I want to understand it before I try to land this again.
My best theory for that rare intermittent failure is that I remove a script blocker during the update, and that triggers a script execution before I can add the new blockers. So I changed UpdateMainReferrer to remove the old script blockers only after putting the new ones in place. Let's see how that works out: https://tbpl.mozilla.org/?tree=Try&rev=66a56d05ac0e
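The theory above can be illustrated with a toy counter-based blocker; `ScriptBlocker` and the function names here are hypothetical stand-ins for nsScriptLoader's execute blocker, not the real API. If the old blocker were removed before the new one is added, the count could hit zero mid-update and scripts would run too early; adding the new blocker first keeps the count positive throughout.

```cpp
#include <cassert>
#include <functional>

// Toy model: scripts run the moment the blocker count drops to zero.
class ScriptBlocker {
 public:
  explicit ScriptBlocker(std::function<void()> aRunScripts)
      : mBlockers(0), mRan(false), mRunScripts(std::move(aRunScripts)) {}

  void AddBlocker() { ++mBlockers; }

  void RemoveBlocker() {
    if (--mBlockers == 0 && !mRan) {
      mRan = true;
      mRunScripts();  // the last blocker went away: scripts execute now
    }
  }

  bool Ran() const { return mRan; }

 private:
  int mBlockers;
  bool mRan;
  std::function<void()> mRunScripts;
};

// Safe update order when the main referrer changes: block on behalf of the
// new predecessor first, then release the old one, so the count never
// reaches zero in the middle of the update.
void SwapPredecessorBlockers(ScriptBlocker& aBlocker) {
  aBlocker.AddBlocker();     // new predecessor starts blocking us...
  aBlocker.RemoveBlocker();  // ...before the old predecessor lets go
}
```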
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #79) > https://hg.mozilla.org/integration/mozilla-inbound/rev/199bffb1f6fb sorry had to back this out for test failures like https://tbpl.mozilla.org/php/getParsedLog.php?id=49094641&tree=Mozilla-Inbound
I could quite reliably reproduce the failure on try by running the same test suite a dozen times, but after this fix it seems to stay green (apart from some known and unrelated intermittent failures): https://tbpl.mozilla.org/?tree=Try&rev=c2750ed506f3 So I'm trying it again on inbound; hopefully this will be the last attempt. The fix was that I cleaned up the relationship between nsScriptLoader and the ImportLoader: only the ImportLoader stores the blocked loaders, and it unblocks them all at once in UnblockScripts. It's cleaner this way, and the sneaky bug caused by the previous double bookkeeping is gone now.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla35
Mark Giffin is writing the documentation on Web Components and has already written a first draft page for HTML Imports: https://developer.mozilla.org/en-US/docs/Web/Web_Components/HTML_Imports
Regression: Mozilla-Inbound-Non-PGO - Dromaeo (DOM) - Ubuntu HW 12.04 x64 - 5.86% decrease
------------------------------------------------------------------------------------------
Previous: avg 971.140 stddev 13.521 of 12 runs up to revision df8c49b33b22
New     : avg 914.244 stddev 24.500 of 12 runs since revision 89716e3f169d
Change  : -56.895 (5.86% / z=4.208)
Graph   : http://mzl.la/1vNRxhg

Changeset range: http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=df8c49b33b22&tochange=89716e3f169d

Changesets:
* http://hg.mozilla.org/integration/mozilla-inbound/rev/89716e3f169d : Gabor Krizsanits <gkrizsanits@mozilla.com> - Bug 877072 - Script execution order for imports. r=mrbkap : http://bugzilla.mozilla.org/show_bug.cgi?id=877072

Bugs:
* http://bugzilla.mozilla.org/show_bug.cgi?id=877072 - Implement HTML Imports
(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #85)
> Regression: Mozilla-Inbound-Non-PGO - Dromaeo (DOM) - Ubuntu HW 12.04 x64 -
> 5.86% decrease

Olli and I talked about this and agreed that it's most likely just random noise (based on the graph and the general noise level).
Depends on: 1081677
Depends on: 1079141
Is this enabled by default?
No, it's not.
teoli wrote yesterday at https://developer.mozilla.org/en-US/docs/Web/Web_Components/HTML_Imports
> Firefox will not implement HTML Imports:
> the use of the upcoming JavaScript Modules
> will make this feature unnecessary.

Is there more information available somewhere, especially in the context of Web Components?
Can this be enabled by default, or can at least a final decision be made about removing or keeping it? Right now there is no clear indication of how or when this will be available. If you're going to drop support for this (https://bugzilla.mozilla.org/show_bug.cgi?id=1197401), then please do it so that it's clear that Mozilla is going against the accepted standard.
(In reply to ivegotasthma from comment #91) > Can this be enabled by default or atleast a final decision be made for > removing/keeping it because right now there is no clear reason as to > how/when/ this will be available. > > If you're going to drop support for this > https://bugzilla.mozilla.org/show_bug.cgi?id=1197401 then please do it so > that it's clear that Mozilla is going against the accepted standard. It is clearly stated on MDN that we are not going to ship HTML imports: https://developer.mozilla.org/en-US/docs/Web/Web_Components/HTML_Imports. And the post is even linked in this bug in Comment 90. I don't know how this could be made any more clear.
> It is clearly stated on MDN that we are not going to ship HTML imports: https://developer.mozilla.org/en-US/docs/Web/Web_Components/HTML_Imports. And the post is even linked in this bug in Comment 90. I don't know how this could be made any more clear.

Okay, but the reasoning behind that decision is deeply flawed. ES6 Modules replace some use cases, but the web has needed declarative HTML imports since HTML 1.0. The lack of them is why so many server-side templating languages have had to exist, when their main reason for being is that web pages can reuse all assets (JS and CSS) across pages except other HTML partials. Firefox is looking at a very narrow use case and extrapolating that the feature has no value. The reasons to have HTML imports extend far beyond web components to the fundamentals of web page construction.
The problem is that the "HTML Imports" spec in question is not "declarative HTML Imports", in the sense that you then have to use script to do anything with the import. At which point you might as well just do the import itself in script... Actual declarative imports would be a quite different specification, and possibly worth doing. But that's not what this bug, or the code that was removed in bug 1197401, was about.
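To illustrate that point, here is a minimal sketch of the usage pattern the spec defines (the file name widgets.html and the element ids are hypothetical). The `<link>` fetches and parses the import, but nothing renders until script reads `link.import` and clones content out of the imported document:

```html
<!-- master document: the import alone renders nothing -->
<link rel="import" href="widgets.html" id="widgets">
<script>
  // Script is required to pull content out of the imported document.
  var link = document.getElementById("widgets");
  var tmpl = link.import.querySelector("template");
  document.body.appendChild(document.importNode(tmpl.content, true));
</script>
```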
> The problem is that the "HTML Imports" spec in question is not "declarative HTML Imports", in the sense that you then have to use script to do anything with the import. At which point you might as well just do the import itself in script... Fair enough. I think requiring Web Components and custom elements to use JavaScript in order to be defined is part of the issue.