It seems that an image element always calls nsImageLoadingContent::ImageURIChanged whenever it is moved to another container, just in case the base URI has changed (see nsHTMLImageElement::BindToTree). This is fine most of the time because nsImageLoadingContent::ImageURIChanged ignores the call unless the URI really changed; however, that is not the case when the image has been blocked by a content policy. Comment from the source code:

  // Skip the URI equality check if our current image was blocked. If
  // that happened, we really do want to try loading again.

Now what happens: let's say we have HTML code like this, where the image is blocked:

  <body><div><div><div><div><img src="..." /></div></div></div></div></body>

If we now call document.body.cloneNode(true), we end up calling nsImageLoadingContent::ImageURIChanged for this image five times (once every time a cloned subtree is added to its parent), and each of those calls will trigger the content policies. While I fully understand why it was implemented this way, this is "slightly" sub-optimal. Can/should this be fixed?

It would be easy to say that images shouldn't trigger content policies when not in a document, but unfortunately they don't need a document to load; that is a commonly used feature for preloading images.

Note: this came up when investigating wrong statistics in Adblock Plus 0.7 for pages saved with the ScrapBook extension.
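The repeated re-checking can be sketched as a small simulation (plain JavaScript, no Gecko internals; makeNode, bindToTree, and the policy counter are illustrative stand-ins, not real APIs). Cloning bottom-up means each finished subtree is attached to its parent, and every attachment re-binds the blocked image:

```javascript
// Minimal tree node: tag name plus children.
function makeNode(name, children = []) {
  return { name, children };
}

let policyChecks = 0;

// Simulated BindToTree: runs on every node of a newly inserted
// subtree. For an <img> whose load was blocked, the URI-equality
// shortcut is skipped, so content policy runs again on every bind.
function bindToTree(node) {
  if (node.name === "img") policyChecks++;
  node.children.forEach(bindToTree);
}

// Deep clone as described in the bug: clone each kid fully, then
// append the finished subtree to its parent, binding it each time.
function cloneDeep(node) {
  const copy = makeNode(node.name);
  for (const kid of node.children) {
    const kidCopy = cloneDeep(kid);
    copy.children.push(kidCopy);
    bindToTree(kidCopy); // appendChild triggers BindToTree on the subtree
  }
  return copy;
}

// <body><div><div><div><div><img/></div></div></div></div></body>
const body = makeNode("body",
  [makeNode("div", [makeNode("div", [makeNode("div",
    [makeNode("div", [makeNode("img")])])])])]);

cloneDeep(body);
console.log(policyChecks); // 5: once per ancestor insertion
```

The image sits behind five insertions (img into the innermost div, then four div subtrees into their parents), so the blocked image is re-evaluated five times, matching the behavior described above.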
Feel free to suggest a fix. I don't see one offhand, since the second time through content policy could decide the image is no longer blocked (e.g. if the user changed prefs).
I don't think anything should rely on getting called only once.
Yes, generally this behavior is correct, but it doesn't make sense for a batch of changes. Ideally the fix would make nsImageLoadingContent::ImageURIChanged not skip the "same URI" check if the current call belongs to the same batch as the previous one; it is enough to trigger content policies (and everything else this function does) once per batch. The problem is recognizing the batches. I don't think that setting a flag in nsIDOMNode.cloneNode() would be approved - that's a big ugly hack. Unfortunately I have no idea what else could be done here. Of course I could do the checking in the extension (or simply leave it the way it is), but if there is some simple fix it would also improve performance.
The thing is, content policy is allowed to examine the DOM surrounding the image. I guess in this case we really do want to suppress all loads until the image is inserted, though...

Perhaps we should make cloneNode iterative instead of recursive? So clone each kid (shallowly), set its attributes, then start cloning its kids? That would call BindToTree on each node once, for O(N) calls to BindToTree(), whereas now we have O(N*M) where M is the tree depth and N is the total number of nodes.
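The two complexity claims can be checked with a small counting simulation (plain JavaScript; makeChain and the two counting functions are illustrative, not real Gecko code). On a straight chain of N nodes, bottom-up cloning re-binds each subtree at every ancestor attachment, while top-down shallow cloning binds each node exactly once:

```javascript
// Build a straight chain of `depth` nodes (worst case: M = N).
function makeChain(depth) {
  let node = { children: [] };
  for (let i = 1; i < depth; i++) node = { children: [node] };
  return node;
}

function countNodes(node) {
  return 1 + node.children.reduce((n, k) => n + countNodes(k), 0);
}

// Current behavior: clone each kid fully, then attach the finished
// subtree, which re-binds every node inside it.
function recursiveCloneBinds(node) {
  let binds = 0;
  for (const kid of node.children) {
    binds += recursiveCloneBinds(kid);
    binds += countNodes(kid); // appendChild binds the whole subtree
  }
  return binds;
}

// Proposed behavior: clone shallowly top-down, attaching each node
// before descending into its kids, so each node is bound once.
function iterativeCloneBinds(root) {
  let binds = 0;
  const stack = [root];
  while (stack.length) {
    const node = stack.pop();
    for (const kid of node.children) {
      binds += 1; // shallow clone attached exactly once
      stack.push(kid);
    }
  }
  return binds;
}

const chain = makeChain(10); // 10 nodes in a straight line
console.log(recursiveCloneBinds(chain)); // 45, i.e. O(N*M) growth
console.log(iterativeCloneBinds(chain)); // 9, i.e. O(N)
```

For the chain, the bottom-up count is 1 + 2 + ... + (N-1), which is quadratic, while the top-down count is just the N-1 non-root nodes.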
(In reply to Boris Zbarsky (:bz) from comment #4)
> The thing is, content policy is allowed to examine the DOM surrounding the
> image. I guess in this case we really do want to suppress all loads until
> the image is inserted, though...

Wait, content policy can examine the DOM surrounding the image? That would make Adblock Plus a *lot* more powerful, if it can prevent the download of images based on the surrounding DOM. That could turn many of the existing CSS-based element hiding rules into rules that block images/frames/etc entirely.
> Wait, content policy can examine the DOM surrounding the image?

Well, nothing's stopping it, right? It might or might not be a good idea, but it's certainly not being prohibited by anything right now.
I retested this in Firefox 49 and the issue is no longer reproducible. Yes, a lot has changed in ten years...
Status: NEW → RESOLVED
Last Resolved: 2 years ago
Resolution: --- → WORKSFORME