Bug 1482540 Opened 2 years ago Closed 2 years ago

Assert fail in NS_OpenAnonymousTemporaryNsIFile()

Categories

(Core :: Widget, defect)

Version: 60 Branch
Type: defect
Priority: Not set
Severity: normal

Tracking


VERIFIED FIXED
mozilla65
Tracking Status
firefox-esr52 --- unaffected
firefox-esr60 --- wontfix
firefox61 - wontfix
firefox62 - wontfix
firefox63 - wontfix
firefox64 --- wontfix
firefox65 --- verified

People

(Reporter: mozillabugs, Assigned: robwu)

References

(Regression)

Details

(Keywords: regression, sec-other, Whiteboard: [post-critsmash-triage][adv-main65-])

Attachments

(1 file)

Group: core-security → dom-core-security
I'm marking this sec-other per the last sentence of comment 0.

Ehsan, can you take a look?
Flags: needinfo?(ehsan)
Keywords: sec-other
Bug 1346583 did not miss anything.  It is bug 335545 that introduced this regression quite recently (in Firefox 60) by adding a consumer of NS_OpenAnonymousTemporaryNsIFile(), which is illegal to call in the content process. :-(

Over to Rob and Markus on why this was done and how it needs to be fixed.  Based on a cursory look, DataStruct needs to make an IPC call to the parent to ask it to create a temporary file for it...
Flags: needinfo?(rob)
Flags: needinfo?(mstange)
Flags: needinfo?(ehsan)
(We should probably make that assert a diagnostic/release assert...)
This was certainly not intentional. The idea was to switch to NS_OpenAnonymousTemporaryNsIFile only in the places where we were already creating files, and not add new places where we create files, if I remember correctly.
Flags: needinfo?(mstange)
This is the relevant change:
https://hg.mozilla.org/mozilla-central/rev/07e72c13b476

I use NS_OpenAnonymousTemporaryFile to ensure that clipboard data is cached to a temporary FD.
In terms of file IO, this was not new behavior: The previous logic saved data to a (persistent) file instead.

Last time I checked, the nsITransferable stuff is somewhat inefficient already, since two files are being created for large clipboard data outside of private browsing mode:
1) Parent process creates file (via DataStruct, via anon tmp file) to back the nsITransferable.
2) Before sending the IPC message, the file is read and the data written to an IPC message.
3) Child process reads IPC message and stores data in nsITransferable, and via DataStruct in a file.

Ideally there would be only one file, created in the parent, with the FD sent along with the serialization of the nsITransferable.


A quick fix against the assertion error is to add && !XRE_IsParentProcess() to this condition: https://searchfox.org/mozilla-central/rev/2466b82b729765fb0a3ab62f812c1a96a7362478/widget/nsTransferable.cpp#77
(the disadvantage of doing this is that content processes will use lots of memory if people put lots of data on the clipboard, which is probably quite rare.)

Another possible resolution is to completely disable the clipboard file cache (at the cost of memory usage). If you do that, then don't forget to close bug 1433030.


Here is a manual STR to trigger the clipboard cache logic: https://bugzilla.mozilla.org/show_bug.cgi?id=1396224#c0
(instead of "clipboardcache" files, anonymous files are used)
Flags: needinfo?(rob)
The specific problem that bug 335545 introduced was calling a parent-process-only API in the content process.  I understand that it did not intend to change anything about when files are accessed, but that's not the main problem here...

(In reply to Rob Wu [:robwu] from comment #5)
> A quick fix against the assertion error is to add && !XRE_IsParentProcess()
> to this condition:
> https://searchfox.org/mozilla-central/rev/
> 2466b82b729765fb0a3ab62f812c1a96a7362478/widget/nsTransferable.cpp#77
> (the disadvantage of doing this is that content processes will use lots of
> memory if people put lots of data on the clipboard, which is probably quite
> rare.)

The data that the user copies to the clipboard is under the control of the website, and they can inflate the size arbitrarily...  This is a risky option from an OOM perspective IMO
(In reply to :Ehsan Akhgari from comment #7)
> The data that the user copies to the clipboard is under the control of the
> website, and they can inflate the size arbitrarily...  This is a risky
> option from an OOM perspective IMO

Why? If a website wants the user to OOM, it has a gazillion other ways of doing that. The clipboard isn't the weakest link there.
(In reply to :Gijs (he/him) from comment #8)
> (In reply to :Ehsan Akhgari from comment #7)
> > The data that the user copies to the clipboard is under the control of the
> > website, and they can inflate the size arbitrarily...  This is a risky
> > option from an OOM perspective IMO
> 
> Why? If a website wants the user to OOM, it has a gazillion other ways of
> doing that. The clipboard isn't the weakest link there.

We generally try to protect against OOM crashes that can be triggered by input that isn't linearly (or something like that) correspondent with the amount of memory allocated.  IOW, a website triggering an OOM by creating a large DOM by streaming content using an XHR is a very different scenario than a website triggering an OOM by calling an API through script.
(In reply to :Ehsan Akhgari from comment #9)
> We generally try to protect against OOM crashes that can be triggered by
> input that isn't linearly (or something like that) correspondent with the
> amount of memory allocated.  IOW, a website triggering an OOM by creating a
> large DOM by streaming content using an XHR is a very different scenario
> than a website triggering an OOM by calling an API through script.

A trivially-sized piece of script can just allocate giant (typed) arrays if it wants, or use nested dtd entity expansion, or use exponential string addition, or open loads of windows with arbitrary script-generated data, or repeatedly add identical DOM content from a loop, or...

I'm not aware of it being possible for us to do anything about this. They're all public sec-low/non-sec-rated issues, some wontfixed / marked invalid. I don't see why we should let the potential for OOM change our approach to this bug.
(In reply to :Gijs (he/him) from comment #10)
> (In reply to :Ehsan Akhgari from comment #9)
> > We generally try to protect against OOM crashes that can be triggered by
> > input that isn't linearly (or something like that) correspondent with the
> > amount of memory allocated.  IOW, a website triggering an OOM by creating a
> > large DOM by streaming content using an XHR is a very different scenario
> > than a website triggering an OOM by calling an API through script.
> 
> A trivially-sized piece of script can just allocate giant (typed) arrays if
> it wants, or use nested dtd entity expansion, or use exponential string
> addition, or open loads of windows with arbitrary script-generated data, or
> repeatedly add identical DOM content from a loop, or...
> 
> I'm not aware of it being possible for us to do anything about this. They're
> all public sec-low/non-sec-rated issues, some wontfixed / marked invalid. I
> don't see why we should let the potential for OOM change our approach to
> this bug.

I think we are talking about different things.  You seem to be talking about scenarios where a malicious page is trying to OOM the browser, that is certainly possible and not something we can defend against.  I'm worried about the case where a non-malicious page does something (such as adding an <input type=file>) which, when the user picks a large enough file, causes the browser to OOM, where it currently doesn't.  The latter, I'm saying, is a regression we should not introduce.
(In reply to :Ehsan Akhgari from comment #11)
> (In reply to :Gijs (he/him) from comment #10)
> > (In reply to :Ehsan Akhgari from comment #9)
> > > We generally try to protect against OOM crashes that can be triggered by
> > > input that isn't linearly (or something like that) correspondent with the
> > > amount of memory allocated.  IOW, a website triggering an OOM by creating a
> > > large DOM by streaming content using an XHR is a very different scenario
> > > than a website triggering an OOM by calling an API through script.
> > 
> > A trivially-sized piece of script can just allocate giant (typed) arrays if
> > it wants, or use nested dtd entity expansion, or use exponential string
> > addition, or open loads of windows with arbitrary script-generated data, or
> > repeatedly add identical DOM content from a loop, or...
> > 
> > I'm not aware of it being possible for us to do anything about this. They're
> > all public sec-low/non-sec-rated issues, some wontfixed / marked invalid. I
> > don't see why we should let the potential for OOM change our approach to
> > this bug.
> 
> I think we are talking about different things.  You seem to be talking about
> scenarios where a malicious page is trying to OOM the browser, that is
> certainly possible and not something we can defend against.  I'm worried
> about the case where a non-malicious page does something (such as adding an
> <input type=file>) which, when the user picks a large enough file, causes
> the browser to OOM, where it currently doesn't.  The latter, I'm saying, is
> a regression we should not introduce.

As mentioned in comment 5, the implementation has always tried to send the full deserialized clipboard content over IPC (i.e. create an nsITransferable that holds all strings in memory without a file cache) - see nsContentUtils::IPCTransferableToTransferable and nsContentUtils::TransferableToIPCTransferable.
Also, in private browsing mode this cache is not used.

Even the async clipboard API (bug 1461465) seems to use the nsITransferable primitive under the hood, so although the API is performant and scalable by design, it seems to still be affected by the limits of nsITransferable.
Oh well... :-(  Sorry I didn't read your earlier comment more carefully.
Is there anything actionable or should we consider this stalled?
I'm not sure how Rob is planning to proceed here, I've been waiting for him.  I don't have too much more to add here personally, besides what's already posted.
Flags: needinfo?(ehsan) → needinfo?(rob)
Sorry, I wasn't aware that you were awaiting my reply.

I'm going to propose a patch as described in comment 5 (avoiding file IO in the content process).
Assignee: nobody → rob
Status: NEW → ASSIGNED
Flags: needinfo?(rob)
Large clipboard data (in nsTransferable) in content processes is no
longer stored in a temporary file, but kept in memory.
Attachment #9021456 - Attachment description: Bug 1482540 - Avoid file IO in content proceesses in nsTransferable → Bug 1482540 - Avoid file IO in content processes in nsTransferable
Landed: https://hg.mozilla.org/mozilla-central/rev/7a7f203680a8
Status: ASSIGNED → RESOLVED
Closed: 2 years ago
Resolution: --- → FIXED
Please request Beta/ESR60 approval on this when you get a chance. It grafts cleanly to both as-landed.
Flags: needinfo?(rob)
Target Milestone: --- → mozilla65
Group: dom-core-security → core-security-release
Marking the attachment and comment #0 private due to possible hints about other sec bugs.  I don't know if there's anything else in this bug that still needs to be hidden.
Component: IPC → Widget
This is not a security bug. Uplifts are not necessary.
Flags: needinfo?(rob)
Can this bug be made public? After comment #20, I don't see anything that justifies the hidden state.

(sec-other can be removed too)
Flags: qe-verify+
Whiteboard: [post-critsmash-triage]
Rob, can you please advise me on how to verify this bug, maybe with some STR? Thanks
Flags: needinfo?(rob)
1. Start a debug build of Firefox.
2. Visit a web page and try to paste over 1MB of data.

e.g. visit the following, Ctrl-A, Ctrl-C.

data:text/html,<script>document.write("x".repeat(2e6))</script>

Expected: No crash after Ctrl-C
Actual  : Crash in debug build.
Flags: needinfo?(rob)
Thanks, Rob, for the steps.
I verified this on Mac OS X 10.12, Windows 10 x64 and Ubuntu 16.04 with the Firefox Nightly debug build 65.0a1 (2018-11-22), and I can't reproduce the issue. Please note that I also tested it with older debug builds, before the fix landed, and I was able to reproduce it. Based on the above, I will mark this as verified fixed.
Status: RESOLVED → VERIFIED
Flags: qe-verify+
Whiteboard: [post-critsmash-triage] → [post-critsmash-triage][adv-main65-]
No longer blocks: 335545
Regressed by: 335545
Group: core-security-release
See Also: → 1524237