Bug 1431832 (Open) · Opened 7 years ago · Updated 3 years ago

Navigating to /dev/zero uses 100% CPU, freezes the machine

Categories

(Core :: DOM: Navigation, defect, P3)

57 Branch
x86_64
Linux
defect


People

(Reporter: rrt, Unassigned)

Details

(Keywords: hang)

User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0
Build ID: 20180104112904

Steps to reproduce:

I wanted to Google "/dev/zero". I typed "/dev/zero" in the address bar and pressed enter. I got a blank page. I then added a space at the start (for some reason, I had the idea this would force it to be treated as a search query). I was offered a download dialog for "zero". I clicked Cancel.

Actual results:

The computer rapidly ground to a halt, the mouse cursor stopped moving, I couldn't log in remotely, and I had to switch it off.

Expected results:

Firefox should not have sucked on /dev/zero so hard. I don't expect it to understand /dev/zero as a special case, but it should not melt down just because it's fed an unlimited amount of data very fast.
Severity: normal → critical
Keywords: hang
OS: Unspecified → Linux
Hardware: Unspecified → x86_64
I am able to reproduce this with Ubuntu 17.10. Nightly tried to open /dev/zero the first time I pressed enter, and 16 GB of memory was rapidly filled up. (I was lucky enough to have enough control to shut down Firefox and continue using my system without powering off, but that was probably because I was prepared for a problem.) [bugday-20180122]
Status: UNCONFIRMED → NEW
Ever confirmed: true
Status: NEW → UNCONFIRMED
Ever confirmed: false
On Ubuntu 17, I see a brief hang, system slowness, and a memory spike from 160 MiB to 6.2 GiB. I was also able to reproduce this issue on Ubuntu 16: the system froze as soon as the save/cancel dialog box for the 'zero' file opened.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Component: Untriaged → JavaScript: GC
Product: Firefox → Core
This bug is the best. However, there's no chance this belongs in Core::JavaScript. Turfing to (spins wheel) ...HTML!
Component: JavaScript: GC → HTML: Parser
(In reply to Reuben Thomas from comment #0)
> I don't expect it to understand /dev/zero as a special case,

Is there a reason why /dev isn't already blocked on the Necko level?

> but it should not melt down just because it's fed an unlimited amount of data very fast.

Maybe that would be nice, too, but there doesn't seem to be a legitimate use case for browsing to special /dev files. In particular, I want to make a change on the parser side that assumes we can treat streams obtained from file: URLs as finite. (I'm sure there will still be edge cases where that won't be true, but I mean assuming it's *practical* to treat file: URLs as dereferencing to finite streams.)
Component: HTML: Parser → Networking: File
(In reply to Henri Sivonen (:hsivonen) from comment #4)
> (In reply to Reuben Thomas from comment #0)
> > I don't expect it to understand /dev/zero as a special case,
>
> Is there a reason why /dev isn't already blocked on the Necko level?

I don't see a reason why we should block special files in nsFileChannel or at any lower level. /dev/zero is changed to file:///dev/zero by URI fixup code, and that is IMO the right place to fix it.
Component: Networking: File → Document Navigation
I can see why there might be reasons for not blocking special files at a very low level, but Document Navigation seems like too high a level. It still leaves the possibility of self-DoS by XHR, etc., where a file: URL can be used. Can we deny these somewhere where a channel is obtained from a file: URL?
Flags: needinfo?(michal.novotny)
(In reply to Henri Sivonen (:hsivonen) from comment #6)
> It still leaves the possibility of self-DoS by XHR, etc., where a file: URL can be used.

How does this differ from opening a large regular file like a DVD ISO image?

> Can we deny these somewhere where a channel is obtained from a file: URL?

nsFileChannel is created in nsFileProtocolHandler::NewChannel2. If we can find out from aLoadInfo when to block certain files, we can return a failure instead of creating the channel. But that is IMO the wrong solution. If we don't want to interpret /dev/zero as a file, we shouldn't replace it with file:///dev/zero in the first place, and that is done by URI fixup.
Flags: needinfo?(michal.novotny)
(In reply to Michal Novotny (:michal) from comment #7)
> (In reply to Henri Sivonen (:hsivonen) from comment #6)
> > It still leaves the possibility of self-DoS by XHR, etc., where a file: URL can be used.
>
> How does this differ from opening a large regular file like a DVD ISO image?

A regular file is finite and (typically) limited to a handful of gigabytes, which we can handle (in 64-bit builds). /dev/zero is infinite and can supply many more gigabytes very fast.

> > Can we deny these somewhere where a channel is obtained from a file: URL?
>
> nsFileChannel is created in nsFileProtocolHandler::NewChannel2. If we can find out from aLoadInfo when to block certain files, we can return a failure instead of creating the channel. But that is IMO the wrong solution. If we don't want to interpret /dev/zero as a file, we shouldn't replace it with file:///dev/zero, and that is done by URI fixup.

From my point of view, the URL fixup isn't the problem here.
Priority: -- → P3

In the process of migrating remaining bugs to the new severity system, the severity for this bug cannot be automatically determined. Please retriage this bug using the new severity system.

Severity: critical → --

(In reply to Henri Sivonen (:hsivonen) from comment #8)
> (In reply to Michal Novotny (:michal) from comment #7)
> > (In reply to Henri Sivonen (:hsivonen) from comment #6)
> > > It still leaves the possibility of self-DoS by XHR, etc., where a file: URL can be used.
> >
> > How does this differ from opening a large regular file like a DVD ISO image?
>
> A regular file is finite and (typically) limited to a handful of gigabytes, which we can handle (in 64-bit builds). /dev/zero is infinite and can supply many more gigabytes very fast.

And /dev/zero is a well-known path, so it can be used for DoS without knowing the path to a large file.

hsivonen: Could you set the severity, or redirect this request to a better person if you know one?

Flags: needinfo?(hsivonen)

I still think we should block these at the point of trying to open a channel for a file: URL, to block DoS also from sources other than typing in the URL bar.

Valentin, comment 5 isn't particularly convincing. Is there a clearer articulation of why we shouldn't block the /dev hierarchy upon trying to open a channel for a file: URL?

Flags: needinfo?(hsivonen) → needinfo?(valentin.gosu)

nsFileChannel::Init seems like the best place to reject /dev files. Let's do it, and put it behind a pref (I'm sure it will break someone's workflow 😆)

Flags: needinfo?(valentin.gosu)

(In reply to Henri Sivonen (:hsivonen) (away from Bugzilla on the week of 2022-12-05) from comment #8)
> (In reply to Michal Novotny (:michal) from comment #7)
> > (In reply to Henri Sivonen (:hsivonen) from comment #6)
> > > It still leaves the possibility of self-DoS by XHR, etc., where a file: URL can be used.
> >
> > How does this differ from opening a large regular file like a DVD ISO image?
>
> A regular file is finite and (typically) limited to a handful of gigabytes, which we can handle (in 64-bit builds). /dev/zero is infinite and can supply many more gigabytes very fast.

It isn't. We still freeze pretty hard when trying to load finite but huge files, particularly plain text files, and we always have. See bug 319143, for instance. I have lost track of the number of times I've frozen the browser after accidentally trying to open a binary file that has the wrong MIME type.

I think the better way to fix this would be to refuse to fully render documents that are unreasonable relative to system resources, rather than to special-case /dev/zero. If a file wouldn't even fit fully into RAM before being transformed into DOM and layout data, then there's really no point in trying to fully render it. It can't be done.

That said, in the case of /dev/zero, we could also give it an application/octet-stream MIME type, in which case the user would be given the option to download it rather than freezing the browser.

Severity: -- → S3

Special-casing /dev is simple. Figuring out what input size is too large given system resources is very hard. I don't think it makes sense to skip the simple special-casing just because we might eventually be able to solve the harder problem.
