Not caching byte range request responses
Categories
(Core :: Networking: Cache, enhancement, P3)
Tracking
People
(Reporter: steele, Unassigned)
References
(Blocks 2 open bugs)
Details
(Whiteboard: [necko-triaged])
What platform was this reproduced on?
On macOS Sonoma 14.4.1 (M1) using Firefox 125.0.3.
Steps to reproduce
- Install and launch the node server in the package available at https://github.com/steelejoe/chunked-image-tester.
- Open the browser (user agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:125.0) Gecko/20100101 Firefox/125.0) and navigate to http://localhost:8085, served by the node app.
- Clear the site cache.
- Open the Web Developer Tools and switch to the Network tab.
- Confirm that "disable cache" is NOT selected.
- Select one of the images and "Serial" as the download type (a rough sketch of what a serial load amounts to follows these steps).
- Click "Load Image". This should append an image to the document.
- In the Network panel, observe that 4 chunks are requested and none of them are cached (as expected).
- Click "Load Image" again. This should append another copy of this image to the document.
- In the Network panel, observe that 4 chunks are requested and again none of them are cached (not expected).
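For reference, here is a rough sketch of what a "serial" chunked load amounts to. It is an illustration only: the actual test page lives in the chunked-image-tester repository linked above, and the function name, the even four-way split, and the stitching into a Blob are assumptions rather than its real code.

// Hypothetical sketch of a serial chunked image load (not the repo's code).
async function loadImageSerially(url: string, totalSize: number): Promise<Blob> {
  const chunkSize = Math.ceil(totalSize / 4); // four chunks, as seen in the Network panel
  const parts: ArrayBuffer[] = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    const end = Math.min(start + chunkSize - 1, totalSize - 1);
    // Each request carries a Range header and is answered with 206 Partial Content.
    // The issue reported here is that on the second "Load Image" click none of
    // these responses are served from the HTTP cache.
    const response = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
    parts.push(await response.arrayBuffer());
  }
  return new Blob(parts);
}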
What happened? (actual results)
The byte ranges requested for the image were not cached by the browser.
What should have happened? (expected results)
The byte ranges requested for the image should have been cached, avoiding another set of calls to the server.
Where does this happen?
This bug happens consistently on Firefox. I have tested a few revisions now.
This bug is reproducible on Chrome and Edge, but caching does work there under some conditions.
This bug is not reproducible on Safari.
Comment 1•4 months ago
The Bugbug bot thinks this bug should belong to the 'DevTools::Netmonitor' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.
Reporter
Comment 2•4 months ago
I am not sure this component is correct. I don't believe this is a bug in the dev tools; this seems like a bug in the networking code implementation.
Comment 3•4 months ago
(In reply to Joe Steele from comment #2)
> I am not sure this component is correct. I don't believe this is a bug in the dev tools; this seems like a bug in the networking code implementation.
Alright, let's move the bug to the network component so the necko team can have a look.
Comment 4•4 months ago
Thanks for reporting the bug.
I see that we set the flag to ignore cache entries for byte range requests.
I will check further with our team whether we intend to change it.
Comment 5•4 months ago
I think the behaviour is intentional.
// Don't cache byte range requests which are subranges, only cache 0-
// byte range requests.
if (IsSubRangeRequest(mRequestHead)) {
return NS_OK;
}
So the cache implementation will only cache ranges that are at the beginning of the file. That way, if a request for the full content comes in, we can use that cached chunk and get the rest from the network.
We can't cache multiple subranges of the same file.
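To make the effect concrete, here is a rough model of that rule in TypeScript. It only illustrates the behaviour described by the code comment above ("only cache 0- byte range requests"); it is not the Gecko implementation, and the helper name is made up.

// Hypothetical model: only a request with no Range header, or the open-ended
// range "bytes=0-", gets a cache entry; any other range is treated as a
// subrange and skipped.
function wouldBeCached(rangeHeader: string | null): boolean {
  if (rangeHeader === null) return true;                 // ordinary full-content request
  return rangeHeader.replace(/\s+/g, "") === "bytes=0-"; // a "0-" range covers the whole file
}

// Under this reading, bounded chunk requests such as "bytes=0-1048575" or
// "bytes=1048576-2097151" all count as subranges, which would explain why none
// of the four chunks in the reproduction were cached.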
Reporter
Comment 6•4 months ago
This is inconsistent with the behavior of other browsers (e.g. Safari and Chrome support this). Arguably the HTTP standard also anticipates browsers supporting this behavior (see https://datatracker.ietf.org/doc/html/rfc2616#section-13.5.4 for an example).
Is there any way to revisit this decision? This has fairly significant performance implications for any large content.
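For illustration, the combining behaviour described in the RFC section cited above could look roughly like the sketch below: a cache keeps track of which byte ranges it has already stored for a resource, merges contiguous or overlapping 206 responses, and can then answer later subrange requests locally. This is a generic sketch under those assumptions, not a description of any browser's actual cache.

// Inclusive byte offsets of a partial response already held in the cache.
interface StoredRange { start: number; end: number }

// Merge the ranges stored for one resource so that contiguous or overlapping
// partial responses collapse into larger spans.
function mergeStoredRanges(ranges: StoredRange[]): StoredRange[] {
  const sorted = [...ranges].sort((a, b) => a.start - b.start);
  const merged: StoredRange[] = [];
  for (const r of sorted) {
    const last = merged[merged.length - 1];
    if (last !== undefined && r.start <= last.end + 1) {
      last.end = Math.max(last.end, r.end); // extend the previous span
    } else {
      merged.push({ ...r });
    }
  }
  return merged;
}

// e.g. mergeStoredRanges([{ start: 0, end: 9 }, { start: 10, end: 19 }])
//        -> [{ start: 0, end: 19 }]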
Updated•4 months ago
Comment 7•4 months ago
The RFC doesn't say we must cache range requests.
This is something we may decide to implement in the future.
Updated•4 months ago
Comment 9•4 months ago
Maybe we should add some telemetry to see how often byte-range requests are used, and how often they wouldn't be cached.
Updated•4 months ago
Comment 10•4 months ago
You stated this has "fairly significant performance implications for any large content." What's your opinion on how frequently this occurs, and how large an impact it has on your application?
Reporter
Comment 11•3 months ago
Currently we are downloading all images over 10MB in size as chunks, meaning they will not be cached. I would have to run some analytics to figure out how often that resource size is exceeded in one of our documents, but my rough guess would be <5%.
As to how large the impact is: now that we know this is happening, we could work around it by special-casing Firefox to not use chunking at all. Because this potentially has server-side impacts for services outside my team, it's hard to know the cost of fixing it.
The impact on users is a slower doc load time, since the same images may be downloaded multiple times as the user moves across the document and images are unloaded and reloaded due to memory constraints.
Comment 12•3 months ago
Setting a reminder to check telemetry and make a decision about this.
Comment 13•6 days ago
2 months ago, valentin placed a reminder on the bug using the whiteboard tag [reminder-test 2024-09-01].
valentin, please refer to the original comment to better understand the reason for the reminder.
Comment 14•6 days ago
Release 2024-08-19:
not_cacheable: 26,869,139 clients, 13,661,795,969 samples
cacheable: 41,389,583 clients, 22,305,413,041 samples
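Taking the two buckets together, that is about 36.0 billion samples, of which roughly 38% (13.7 billion) were in the not_cacheable bucket.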