Navigating to /dev/zero uses 100% CPU, freezes the machine
Categories: Core :: DOM: Navigation, defect, P3
People: Reporter: rrt, Unassigned
Keywords: hang
Comment 1•7 years ago
Comment 2•7 years ago
Comment 3•7 years ago
Comment 4•7 years ago
Comment 5•7 years ago
Comment 6•7 years ago
Comment 7•7 years ago
Comment 8•7 years ago
Comment 9•3 years ago
In the process of migrating remaining bugs to the new severity system, the severity for this bug cannot be automatically determined. Please retriage this bug using the new severity system.
Comment 10•3 years ago
(In reply to Henri Sivonen (:hsivonen) from comment #8)
> (In reply to Michal Novotny (:michal) from comment #7)
> > (In reply to Henri Sivonen (:hsivonen) from comment #6)
> > > It still leaves the possibility of self-DoS by XHR, etc., where a file:
> > > URL can be used.
> >
> > How does this differ from opening a large regular file like a DVD ISO image?
>
> A regular file is finite and (typically) limited to a handful of gigabytes,
> which we can handle (in 64-bit builds). /dev/zero is infinite and can supply
> many more gigabytes very fast.

And /dev/zero is a well-known path, so it can be used for DoS without knowing the path to any large file.

hsivonen: Could you set the severity, or redirect this request to a better-suited person if you know of one?
Comment 11•3 years ago
I still think we should block these at the point of opening a channel for a file: URL, so that DoS from sources other than typing in the URL bar is blocked as well.
Valentin, comment 5 isn't particularly convincing. Is there a clearer articulation of why we shouldn't block the /dev hierarchy when opening a channel for a file: URL?
Comment 12•3 years ago
nsFileChannel::Init seems like the best place to reject /dev files. Let's do it, and put it behind a pref (I'm sure it will break someone's workflow 😆)
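A minimal sketch of the check proposed above (the function name and the pref flag are assumptions for illustration, not actual Gecko code or an actual about:config pref): reject /dev and anything under it when the hypothetical pref is enabled, and otherwise preserve existing behavior.

```cpp
#include <string>

// Hypothetical helper that nsFileChannel::Init-style code could call.
// `prefBlockDevFiles` stands in for an assumed pref; the name is invented.
bool ShouldBlockFileChannel(const std::string& path, bool prefBlockDevFiles) {
  if (!prefBlockDevFiles) {
    return false;  // pref off: preserve existing behavior
  }
  // Block /dev itself and anything under it (e.g. /dev/zero, /dev/urandom),
  // but not unrelated paths that merely start with "/dev" (e.g. /devices).
  return path == "/dev" || path.rfind("/dev/", 0) == 0;
}
```

For example, `ShouldBlockFileChannel("/dev/zero", true)` returns true, while `ShouldBlockFileChannel("/home/user/movie.iso", true)` returns false, matching the intent of special-casing the /dev hierarchy only.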
Comment 13•3 years ago
(In reply to Henri Sivonen (:hsivonen) (away from Bugzilla on the week of 2022-12-05) from comment #8)
> (In reply to Michal Novotny (:michal) from comment #7)
> > (In reply to Henri Sivonen (:hsivonen) from comment #6)
> > > It still leaves the possibility of self-DoS by XHR, etc., where a file:
> > > URL can be used.
> >
> > How does this differ from opening a large regular file like a DVD ISO image?
>
> A regular file is finite and (typically) limited to a handful of gigabytes,
> which we can handle (in 64-bit builds). /dev/zero is infinite and can supply
> many more gigabytes very fast.
It isn't. We still freeze pretty hard when trying to load finite but huge files, particularly plain text files, and we always have. See bug 319143 for instance. I have lost track of the number of times I've frozen the browser after accidentally trying to open a binary file that has the wrong MIME type.
I think the better way to fix this would be to refuse to fully render documents that are unreasonably large relative to system resources, rather than to special-case /dev/zero. If a file wouldn't even fit into RAM before being transformed into DOM and layout data, there's really no point in trying to fully render it. It can't be done.
That said, in the case of /dev/zero, we could also give it an application/octet-stream MIME type, in which case the user would be given the option to download it rather than freezing the browser.
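The MIME-type idea could look roughly like this (a sketch with an invented helper name, not an actual Gecko API): map paths under /dev to application/octet-stream so navigation produces a download prompt instead of an in-page render.

```cpp
#include <string>

// Hypothetical MIME override; an empty result means "use normal detection".
std::string ContentTypeForLocalPath(const std::string& path) {
  // Character devices such as /dev/zero have no meaningful sniffable type;
  // forcing application/octet-stream turns navigation into a download prompt.
  if (path.rfind("/dev/", 0) == 0) {
    return "application/octet-stream";
  }
  return "";
}
```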
Comment 14•3 years ago
Special-casing /dev is simple. Figuring out what input size is too large given system resources is very hard. It doesn't make sense to skip the simple special-casing just because we might eventually be able to solve the harder problem.