Closed Bug 1778863 Opened 2 years ago Closed 10 months ago

[macOS] Large* files fail to download (symptomsd RESOURCE_NOTIFY trigger (too many "dirty bytes" written))

Categories

(Firefox :: File Handling, defect, P3)

Firefox 102
Unspecified
macOS
defect

Tracking


RESOLVED INCOMPLETE

People

(Reporter: me, Unassigned)

References

(Depends on 1 open bug)

Details

(Keywords: steps-wanted)

Attachments

(1 file)

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Firefox/102.0

Steps to reproduce:

Download a large file, e.g. the 10GB test file from https://testfiledownload.com/

Actual results:

The download fails at a random point. I can 'Retry' the download; it will continue, then fail again. This repeats.

Expected results:

The file should download to completion.

I've tested other browsers, Safari and Chrome, and the download completes in both. Looking at the Console log, I notice: "RESOURCE_NOTIFY trigger for firefox [22384] (2147512320 dirty bytes written over 19763.00s seconds, violating limit of 2147483648 dirty bytes written over 86400.00s seconds)". Might be related, maybe not.

I've also attempted the download in Troubleshoot Mode, where the issue persists, so it is not related to any installed add-ons/extensions.
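
To unpack the RESOURCE_NOTIFY line quoted above (a minimal sketch; reading the limit as a rolling "dirty byte budget" is an assumption based purely on the message wording, since Apple doesn't appear to document symptomsd's semantics):

    // Figures copied from the log line above; the "budget" reading of the
    // limit is inferred from the message text, not from Apple documentation.
    #include <cstdio>

    int main() {
      const double limitBytes   = 2147483648.0;  // 2 GiB dirty-byte budget...
      const double limitWindow  = 86400.0;       // ...per 24-hour window
      const double writtenBytes = 2147512320.0;  // what Firefox had written...
      const double elapsedSecs  = 19763.0;       // ...in roughly 5.5 hours

      // Prints ~24.3 KiB/s allowed on average vs ~106 KiB/s observed.
      std::printf("budget-average rate: %.1f KiB/s\n",
                  limitBytes / limitWindow / 1024.0);
      std::printf("observed write rate: %.1f KiB/s\n",
                  writtenBytes / elapsedSecs / 1024.0);
      return 0;
    }

If that reading is right, any download larger than 2 GiB within a day trips the notification no matter how slowly it is written, so a 10GB download would always trigger it. Whether the notification actually aborts anything, or merely coincides with the failure, is a separate question (see the comment below suggesting it is only a warning).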

The Bugbug bot thinks this bug should belong to the 'Firefox::File Handling' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.

Component: Untriaged → File Handling

Are you saving files to the main disk, or an external disk/flash?

Flags: needinfo?(me)

(In reply to Marco Bonardo [:mak] from comment #3)

Are you saving files to the main disk, or an external disk/flash?

Saving to my main disk, ~/Downloads.

Flags: needinfo?(me)

This appears to be a symptomsd notification. I have no idea what it means by 'dirty bytes' and what bytes would be dirtier than any others (!?), or how we'd get anywhere debugging this unless we can reproduce... I tried some web searching, didn't really get anywhere. Markus/spohl: Do you know if Apple documents this stuff anywhere, and/or do you have any idea what this is referring to?

Reporter: what version of macOS are you running?

Flags: needinfo?(spohl.mozilla.bugs)
Flags: needinfo?(mstange.moz)
Flags: needinfo?(me)
OS: Unspecified → macOS
Summary: Large* files fail to download → [macOS] Large* files fail to download (symptomsd RESOURCE_NOTIFY trigger (too many "dirty bytes" written))

(In reply to :Gijs (he/him) from comment #5)

This appears to be a symptomsd notification. I have no idea what it means by 'dirty bytes' and what bytes would be dirtier than any others (!?), or how we'd get anywhere debugging this unless we can reproduce... I tried some web searching, didn't really get anywhere. Markus/spohl: Do you know if Apple documents this stuff anywhere, and/or do you have any idea what this is referring to?

Reporter: what version of macOS are you running?

macOS Monterey 12.5 on the M1 chip

Flags: needinfo?(me)

FWIW I tried to repro on my Mac (11.6 Intel MBP) and the download proceeded smoothly to 5.3GB without errors in the console before I cancelled it (assuming I would have hit the 2GB limit by then). Still, it wouldn't be surprising if there are new rules added to symptomsd under Monterey, or if there's configuration involved... somewhere.

I've also wondered in the past whether buffer sizes might want adjusting for the 21st century... it looks like file saves use a 32kB buffer? ( https://searchfox.org/mozilla-central/rev/15b656909e77d3048d4652b894f79a8c719b4b86/netwerk/base/BackgroundFileSaver.cpp#44-48 )

Feels like that'd probably be better if increased, certainly for writing (I'm less sure about reading from the network). Florian, does anyone on the perf team have experience/data/background on what would be better buffer sizes? (We'd probably want to spin that off into a separate bug; if we're lucky it might help here, but it could very well not - it would just be a good perf improvement, I suspect.)
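
For reference, the linked lines boil down to a single constant. This is a paraphrase from memory (the searchfox link above is authoritative), with the kind of increase being floated here shown as a hypothetical, not landed code:

    // netwerk/base/BackgroundFileSaver.cpp (paraphrased from memory; see the
    // searchfox link above for the authoritative source):
    #define BUFFERED_IO_SIZE (1024 * 32)     // 32kB per buffered read/write

    // Hypothetical bump along the lines discussed here (not landed code):
    // #define BUFFERED_IO_SIZE (1024 * 128) // 128kB per buffered read/write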

Flags: needinfo?(florian)

(In reply to :Gijs (he/him) from comment #5)

This appears to be a symptomsd notification. I have no idea what it means by 'dirty bytes' and what bytes would be dirtier than any others (!?), or how we'd get anywhere debugging this unless we can reproduce... I tried some web searching, didn't really get anywhere.

I agree with all of this. I also tried searching and didn't get anywhere. I also searched for "dirty bytes" under the apple-opensource GitHub account, and found only one mention, in a test.

However, the symptomsd message sounds like a warning, not like something that would actually interfere with operation. So something else must be causing the download to fail.

Flags: needinfo?(mstange.moz)

I would like to add that this is not specific to a single device: I have been able to reproduce it on both my Mac Mini M1 and MacBook Air M1. So I don't believe it is a device-configuration issue, but rather something related to the M1 specifically, since @Gijs mentioned it working on their Intel MBP. Just wanted to add that, since I don't think I had mentioned that it also fails on my other devices.

Attached patch: Debug markers (Splinter Review)

(In reply to :Gijs (he/him) from comment #7)

I've also wondered in the past whether buffer sizes might want adjusting for the 21st century... it looks like file saves use a 32kB buffer? ( https://searchfox.org/mozilla-central/rev/15b656909e77d3048d4652b894f79a8c719b4b86/netwerk/base/BackgroundFileSaver.cpp#44-48 )

Feels like that'd probably be better if increased, certainly for writing (I'm less sure about reading from the network). Florian, does anyone on the perf team have experience/data/background on what would be better buffer sizes?

I don't know who would know. I tried to answer your questions using profiler markers (patch attached to show what I did), and here is the profile I got: https://share.firefox.dev/3Sj17u0
Here is what I saw:

  • We do disk writes in 32kB chunks (as you already found).
  • nsAStreamCopier::Process is called with data of varying size. In the example I took, within 1s it was called 197 times, with data sizes varying between 16kB and 256kB (the median size was 48kB, and more than 70% of the calls carried less than 64kB).
  • The data is copied in memory from one stream to another in 4kB chunks, but that's memory-only, so probably irrelevant.

I don't know where we should go from there, if anywhere. Increasing the buffer size to 64kB (or even 128kB) would certainly reduce the number of write() calls (we did 390 within 1s in my example; is that too many?), but would it change anything for the user? Writing to disk doesn't seem to be the limiting factor for my download speed here. So maybe the impact would only show in specific cases: a slow drive, or a filesystem busy with lots of other I/O?
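
The attached patch isn't inlined in this thread, but a Gecko profiler text marker of roughly this shape would yield the per-write sizes listed above (a sketch only; the helper name and call site are assumptions, not the actual patch contents):

    // Sketch: annotate each chunked disk write with a profiler text marker.
    #include <cstdint>
    #include "mozilla/ProfilerMarkers.h"
    #include "nsPrintfCString.h"

    // Hypothetical helper, called from BackgroundFileSaver's write path with
    // the number of bytes about to be written.
    static void RecordWriteMarker(uint32_t aByteCount) {
      PROFILER_MARKER_TEXT("BackgroundFileSaver write", OTHER, {},
                           nsPrintfCString("%u bytes", aByteCount));
    }

Markers like this show up in the profile's marker chart, which is presumably how the call counts and sizes above were collected.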

Flags: needinfo?(florian)
Flags: needinfo?(spohl.mozilla.bugs)
Depends on: 1788915

I filed bug 1788915 to investigate Florian's discovery. We don't know if that's going to help here, but it's worth a try.

Keywords: steps-wanted

Since this bug got filed, I switched to an M1 Mac with Monterey (for other reasons, not just this bug!). I just tried to reproduce this again on that machine, but the download completed successfully. :-(

Reporter: I don't suppose you have any other ideas about what might be causing the download failure? If you try the download with browser.download.loglevel set to all (it can be changed in about:config), do any logs appear in the Browser Console (opened with Cmd-Shift-J) that seem to indicate what triggered the failure? Is there any pattern to the point at which the downloads fail? And does this happen for you on all sites?

Flags: needinfo?(me)
Severity: -- → S3
Priority: -- → P3

A needinfo was requested from the reporter; however, the reporter is inactive on Bugzilla. Given that the bug is still UNCONFIRMED, it is being closed as incomplete.

For more information, please visit BugBot documentation.

Status: UNCONFIRMED → RESOLVED
Closed: 10 months ago
Flags: needinfo?(me)
Resolution: --- → INCOMPLETE