User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36

Steps to reproduce:

1. Open the developer tools.
2. Load any page that contains multiple images that differ only by the fragment ID portion of the URL. Here's a demo page: http://kylemitofsky.com/StackSVG/
3. You should see multiple requests made for the same SVG image at the same URL.

Actual results:

Multiple requests are made for the same image.

Expected results:

When loading a page with multiple SVG images that differ only by the fragment ID at the end, the browser should make a single GET request covering all of them.

When fetching any image type (png / svg / gif) that doesn't contain a fragment ID, the requests are all coalesced into a single request so the same image doesn't need to be requested more than once. Since the fragment ID is not part of the URL that is sent to the server, it should not impact the way resources are fetched. The fragment identifier is strictly for the client to use in determining which portion, or fragment, of the document to display once it is returned. The expected behavior is that fragmented URLs would be fetched as efficiently as their non-fragmented counterparts, since this is achievable from a server perspective.

This is a popular technique called SVG Stacks, which was first widely publicized on [simurai.com](http://simurai.com/blog/2012/04/02/svg-stacks/).

Here's a demo page: http://kylemitofsky.com/StackSVG/

This emerged out of a discussion on Stack Overflow here: http://stackoverflow.com/a/27861428/1366033

Fragment IDs are part of the SVG spec here: http://www.w3.org/TR/SVG/linking.html#SVGFragmentIdentifiers
Robert, do you know what's causing this? (this also reproduces on Nightly)
We seem to strip off the ref when loading in http://mxr.mozilla.org/mozilla-central/source/dom/base/nsDocument.cpp#909 so off the top of my head I don't know what's wrong. bzbarsky is more familiar with this than me given that he wrote this.
Resolved-Incomplete due to time since last communication/update by reporter. Please feel free to reopen if the error occurs in a current Firefox version.
(In reply to Robert Longson from comment #2)
> We seem to strip off the ref when loading in
> http://mxr.mozilla.org/mozilla-central/source/dom/base/nsDocument.cpp#909
> so off the top of my head I don't know what's wrong. bzbarsky is more
> familiar with this than me given that he wrote this.

Boris, do you have ideas about where to look for this? FWIW, this also affects things like bug 1191230 inside chrome code...
The link given in comment 2 is where we load SVG external resources (paint servers and the like). That's NOT the codepath where we load SVG images. The codepath for SVG images and coalescing of their loads would be in imagelib code. dholbert knows more about that stuff, but I expect we need to strip the ref when looking up things in the image cache and storing things in it, and we then need to make sure that multiple hits on the same imgRequest with different refs actually work properly.
It looks like we issue separate network requests for whatever.jpg#foo and whatever.jpg#bar, too -- i.e. this isn't SVG-specific. (Though it's less useful for authors to distinguish between #foo and #bar for raster images vs. SVG images, so that's a less common use case) Reclassifying this bug as ImageLib.
Some strawman proposals, with coalescing done at progressively higher levels:

Strawman Proposal #1: Coalesce stack.svg#foo and stack.svg#bar into a single object in the image cache -- a single VectorImage object.

That can't easily work: that VectorImage would receive "Draw()" calls, and it wouldn't have any idea that it needs to navigate to #foo or #bar and update the document before it draws. Plus, we store cached rasterizations of VectorImages, and those are only indexed by SVG-viewport-size -- we'd need to disable that functionality, or make the indexing more complicated, for this to work.

Strawman Proposal #2: Make stack.svg#foo and stack.svg#bar into separate VectorImage objects, but let them share an SVG document under the hood.

So when ImageLib receives the second request, it'd somehow find the first VectorImage and ask it for a reference to its document, and set up the second VectorImage to use that same document. This might work, but it would be complicated to manage -- and we'd need to navigate that document back and forth between #foo and #bar for our draw operations (at least until we've got cached rasterizations), which is also a little complex/heavyweight.

Strawman Proposal #3: Use separate VectorImages & separate SVG documents, but simply strip off the #ref when we issue the network request.

From imagelib's perspective, we won't be sharing anything, but hopefully this should allow the networking code to figure out that it's received 2 requests for the same resource, and it can serve the second one from the network cache (I think?) without actually hitting the network a second time. Assuming this works [I don't know our networking code well, but I think it supports this], this seems like the easiest option & the least likely to break stuff.
(In reply to :Gijs Kruitbosch from comment #4)
> Boris, do you have ideas about where to look for this? FWIW, this also
> affects things like bug 1191230 inside chrome code...

Can you clarify what you're getting at RE how this impacts chrome code here? As I understand it, this bug is about trying to avoid hitting the network twice, and for icons in chrome code, we've already got all of the resources locally.
(I think the changes here would mostly be in imgLoader.cpp or imgRequest.cpp -- maybe in imgLoader::LoadImage() or something that it calls. I don't know that code well enough to speculate further on how this should work & make use of the various caches, beyond the strawman suggestions in comment 9. I think Seth or Timothy know that code better than I do; tagging seth to get his thoughts here, when he's got a chance.)
(In reply to Daniel Holbert [:dholbert] from comment #10)
> Can you clarify what you're getting at RE how this impacts chrome code here?
> As I understand it, this bug is about trying to avoid hitting the network
> twice, and for icons in chrome code, we've already got all of the resources
> locally.

So in bug 1191230 we started using SVG icons for something. The SVG icons were meant to be different for Win10 vs. other versions of Windows. The typical way to do this with a raster image is:

- package foo.png
- package foo-preWin10.png
- add a chrome manifest directive:

override foo.png foo-preWin10.png osversion<10

to override foo.png with foo-preWin10.png iff the OS version < 10.

Now even if foo.png is a spritesheet with 6 or 10 or N different things in it, no matter which bit of it I need, I will get the right image on pre-Win10 versions of Windows. So far so good.

If you do this with SVG, and instead of a sprite and -moz-image-region or whatever you use #refs to use different parts of the image, this doesn't work, because we actually request them with the #ref, and so we look for chrome overrides with the #ref as well. So now I have to write 6/10/N override lines like:

override foo.svg#first foo.pre-Win10.svg#first osversion <10
override foo.svg#second foo.pre-Win10.svg#second osversion <10
override foo.svg#nth foo.pre-Win10.svg#nth osversion <10

etc., which is annoying and repetitive.

While we could try to update the 'override' directive to ignore #refs itself (i.e. strip them out when doing lookups), and that might be a sensible thing to do anyway, fixing the SVG issue here would at least resolve the problem for the SVG case (which is by far the most common one) as well. Updating just the network portion here seems sensible to me - let's see what Seth thinks.
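For illustration, if 'override' lookups (or the image loader) stripped #refs, the N repetitive lines above could hypothetically collapse to a single directive; this is a sketch of the desired end state, not something the directive supports today:

```
override foo.svg foo.pre-Win10.svg osversion<10
```

Every foo.svg#whatever lookup would then match the single ref-less override, mirroring how the raster spritesheet case already works.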
The problem here is bug 1118926. We cannot merge requests for different fragment IDs *in general* without removing #-moz-resolution. That said: it used to be that hacking around that was extremely difficult, but now that ImageCacheKey exists we *can* just check if the URI contains the string "-moz-resolution" and treat the ImageCacheKeys as equivalent whenever it doesn't.

(In reply to :Gijs Kruitbosch from comment #12)
> While we could try to update the 'override' directive to ignore #refs itself
> (ie strip them out when doing lookups), and that might be a sensible thing to
> do anyway, fixing the SVG issue here would at least for the SVG case (which
> is by far the most common one) resolve the problem as well.

For this particular issue I think the 'override' directive is the problem. ImageLib definitely strips the fragment ID out before doing the actual network request.
Whoops, I meant "-moz-sample-size" above. "-moz-resolution" was also an issue, but it has been removed now.
(In reply to Seth Fowler [:seth] [:s2h] from comment #14)
> Whoops, I meant "-moz-sample-size" above. "-moz-resolution" was also an
> issue, but it has been removed now.

It looks like the use case blocking the removal of -moz-sample-size (i.e. blocking the fix for bug 1120056) is b2g-only, which is now tier-3. Is it possible to make the splitting of requests + the implementation of -moz-sample-size specific to b2g or something, so we can improve this for desktop, Android, etc.?
(In reply to :Gijs Kruitbosch from comment #15)
> It looks like the blocking usecase for the removal of -moz-sample-size (ie
> for bug 1120056) fixing is b2g-only, which is now tier-3. Is it possible to
> make the splitting of request + implementation of -moz-sample-size specific
> to b2g or something, so we can improve this for desktop, android etc. ?

I'd be happy to straight up forbid -moz-sample-size except on B2G, yeah, which would make solving this problem everywhere else trivial. (At the cost of some uglification of ImageCacheKey, but at least we can contain it there.) What I need to do is make sure that -moz-sample-size hasn't crept into any non-B2G code, and then we can decide how to move forward.

I think what you propose is actually a better solution than what I proposed in comment 13, as parsing the URI to check for -moz-sample-size has a cost that we'd be better off not paying, so let's do it if we can.

I'm going to leave the needinfo request on and come back to this, because this is something better done at the start of a release cycle rather than this close to uplift.
Hello, sorry that I filed a duplicate issue. Now I am wondering why this has not been fixed. It is still here after 2 years, so I guess it is affecting many more sites. Is there a way I can help get this bug fixed?
There's a guide to getting started as a developer if you want to work on a patch to fix this bug: https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide