Open Bug 1894109 Opened 4 months ago Updated 6 days ago

Not caching byte range request responses

Categories

(Core :: Networking: Cache, enhancement, P3)

enhancement

Tracking

()

UNCONFIRMED

People

(Reporter: steele, Unassigned)

References

(Blocks 2 open bugs)

Details

(Whiteboard: [necko-triaged])

What platform was this reproduced on?

On macOS Sonoma 14.4.1 (M1) using Firefox 125.0.3.

Steps to reproduce

  1. Install and launch the node server in the package available at https://github.com/steelejoe/chunked-image-tester.
  2. Open browser (user agent - Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:125.0) Gecko/20100101 Firefox/125.0) and navigate to http://localhost:8085 served by the node app.
  3. Clear the site cache.
  4. Open the Web Developer Tools and switch to the Network tab.
  5. Confirm that "disable cache" is NOT selected.
  6. Select one of the images and "Serial" as the download type.
  7. Click "Load Image". This should append an image to the document.
  8. In the Network panel, observe that 4 chunks are requested and none of them are cached (as expected).
  9. Click "Load Image" again. This should append another copy of this image to the document.
  10. In the Network panel, observe that 4 chunks are requested and again none of them are cached (not expected).
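
For reference, here is a minimal sketch of what the "Serial" download mode roughly does (a hypothetical illustration only; the real logic lives in the chunked-image-tester repo linked above, and the chunk size, helper name, and image handling here are assumptions):

async function loadImageInChunks(url, chunkSize = 2 * 1024 * 1024) {
  // Ask for the total size first (assumes the server exposes Content-Length).
  const head = await fetch(url, { method: "HEAD" });
  const total = Number(head.headers.get("Content-Length"));
  const parts = [];
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize - 1, total - 1);
    // Each chunk is a byte range request; the server answers 206 Partial Content.
    const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
    parts.push(await res.blob());
  }
  // Reassemble the chunks and append the image to the document.
  const img = document.createElement("img");
  img.src = URL.createObjectURL(new Blob(parts));
  document.body.appendChild(img);
}

On the second "Load Image" click the same Range requests are issued again; the question in this bug is whether those 206 responses can be served from the HTTP cache.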

What happened? (actual results)

The byte ranges requested for the image were not cached by the browser.

What should have happened? (expected results)

The byte ranges requested for the image should have been cached, avoiding another set of calls to the server.

Where does this happen?

This bug happens consistently on Firefox. I have tested a few revisions now.
This bug is also reproducible on Chrome and Edge, but caching does work there under some conditions.
This bug is not reproducible on Safari.

The Bugbug bot thinks this bug should belong to the 'DevTools::Netmonitor' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.

Component: Untriaged → Netmonitor
Product: Firefox → DevTools

I am not sure this component is correct. I don't believe this is a bug in the dev tools; this seems like a bug in the networking code implementation.

(In reply to Joe Steele from comment #2)

I am not sure this component is correct. I don't believe this is a bug in the dev tools; this seems like a bug in the networking code implementation.

Alright, let's move the bug to the network component so the necko team can have a look.

Component: Netmonitor → Networking
Product: DevTools → Core

Thanks for reporting the bug.
I see that we set the flag to ignore cache entries for byte range requests.
I will check further with our team whether we intend to change this.

Blocks: necko-cache
Severity: -- → S3
Priority: -- → P2
Whiteboard: [necko-triaged][necko-priority-review]

I think the behaviour is intentional.

https://searchfox.org/mozilla-central/rev/e69f323af80c357d287fb6314745e75c62eab92a/netwerk/protocol/http/nsHttpChannel.cpp#3750-3755


// Don't cache byte range requests which are subranges, only cache 0-
// byte range requests.
if (IsSubRangeRequest(mRequestHead)) {
  return NS_OK;
}

So the cache implementation will only cache ranges that are at the beginning of the file. That way, if a request for the full content comes in, we can use that chunk and get the rest from the network.
We can't cache multiple subranges in the same file.
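
To illustrate which requests are affected, here is a loose JavaScript paraphrase of that rule (this is not the actual Firefox code, just a restatement of the comment above: only a range anchored at byte 0 is considered for a cache entry):

// Loose paraphrase: a request whose Range header does not start at byte 0
// is treated as a subrange and bypasses the cache entry entirely.
function looksLikeSubRangeRequest(rangeHeader) {
  return !/^bytes=0-/.test(rangeHeader ?? "");
}

looksLikeSubRangeRequest("bytes=0-");               // false - may get a cache entry
looksLikeSubRangeRequest("bytes=1048576-2097151");  // true  - never cached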

This is inconsistent with behavior for other browsers (e.g. Safari and Chrome support this). Arguably the HTTP standard also anticipates browsers supporting this behavior (see https://datatracker.ietf.org/doc/html/rfc2616#section-13.5.4 for an example).
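
For context, RFC 2616 section 13.5.4 describes how a cache may combine partial responses. A hedged sketch of the kind of merge that section allows (nothing here reflects Firefox internals; the range objects and field names are assumptions):

// Merge a newly received partial response into an existing cached range,
// provided the two ranges overlap or are adjacent (inclusive byte offsets).
function combineRanges(existing, incoming) {
  if (incoming.start > existing.end + 1 || existing.start > incoming.end + 1) {
    return null; // disjoint - cannot be stored as a single entry
  }
  const start = Math.min(existing.start, incoming.start);
  const end = Math.max(existing.end, incoming.end);
  const bytes = new Uint8Array(end - start + 1);
  bytes.set(existing.bytes, existing.start - start);
  bytes.set(incoming.bytes, incoming.start - start);
  return { start, end, bytes };
}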

Is there any way to revisit this decision? This has fairly significant performance implications for any large content.

Whiteboard: [necko-triaged][necko-priority-review] → [necko-triaged][necko-priority-new]

The RFC doesn't say we must cache range requests.
This is something we may decide to implement in the future.

Type: defect → enhancement
Component: Networking → Networking: Cache
Priority: P2 → P3

Ok - thanks for reviewing.

Whiteboard: [necko-triaged][necko-priority-new] → [necko-triaged]
Maybe we should add some telemetry to see how often byte-range requests are used, and how often they wouldn't be cached.

You stated "fairly significant performance implications for any large content." What's your opinion on how frequently this occurs, and how large an impact is it on your application?

Flags: needinfo?(steele)
Whiteboard: [necko-triaged] → [necko-triaged][necko-priority-review]

Currently we are downloading all images over 10MB in size as chunks, meaning they will not be cached. I would have to run some analytics to figure out how often that resource size is exceeded in one of our documents, but my rough guess would be <5%.

As to how large the impact is -- now that we know this is happening, we could work around it by special-casing Firefox to not use chunking at all. Because that potentially has server-side impacts for services outside my team, it's hard to know the cost of that fix.
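
To make that workaround concrete, a hypothetical sketch of the Firefox special case (the user-agent sniffing and helper names are assumptions, not our shipping code):

const isFirefox = /\bFirefox\//.test(navigator.userAgent);

async function loadImage(url) {
  if (isFirefox) {
    // Single full-content request - the HTTP cache can store and reuse it.
    const res = await fetch(url);
    const img = document.createElement("img");
    img.src = URL.createObjectURL(await res.blob());
    document.body.appendChild(img);
    return;
  }
  // Other engines keep the chunked Range-request path (loadImageInChunks is
  // the hypothetical helper sketched earlier in this bug), where repeated
  // loads can be served from the cache.
  await loadImageInChunks(url);
}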

The impact on users is a slower doc load time, since the same images may be downloaded multiple times as the user moves across the document and images are unloaded and reloaded due to memory constraints.

Flags: needinfo?(steele)

Setting a reminder to check telemetry and make a decision about this.

Whiteboard: [necko-triaged][necko-priority-review] → [necko-triaged][reminder-test 2024-09-01]

2 months ago, valentin placed a reminder on the bug using the whiteboard tag [reminder-test 2024-09-01].

valentin, please refer to the original comment to better understand the reason for the reminder.

Flags: needinfo?(valentin.gosu)
Whiteboard: [necko-triaged][reminder-test 2024-09-01] → [necko-triaged]

Release 2024-08-19:
not_cacheable: 26,869,139 clients, 13,661,795,969 samples
cacheable: 41,389,583 clients, 22,305,413,041 samples

https://glam.telemetry.mozilla.org/fog/probe/network_byte_range_request/explore?aggKey=cacheable&aggType=avg&app_id=release&ping_type=*&ref=2024081915

Flags: needinfo?(valentin.gosu)