Closed Bug 848535 (CVE-2013-1694) — opened 7 years ago, closed 7 years ago
Use of PreserveWrapper in cases when we don't have a wrapper seems broken
2.24 KB, patch
2.17 KB, patch
Consider the following scenario:

1) XHR starts.
2) Its wrapper, if any, is collected.
3) A C++ readystatechange listener calls GetResponse, and the response type is JSON or arraybuffer. This causes nsXMLHttpRequest::RootJSResultObjects to be called, which calls PreserveWrapper.
4) JS touches the XHR object, causing it to be wrapped, which calls nsWrapperCache::SetWrapper().

In a debug build this will assert fatally. In an opt build this will clear the preserved-wrapper flag on the wrapper cache, which seems bad.

Should we just preserve that flag in SetWrapper and remove the assert in that method, perhaps? Alternately, we should stop using PreserveWrapper to mean "hold JS objects" in XHR.

Note that the above scenario is pretty simple to produce with a worker XHR if extension JS can ever touch its main-thread nsXMLHttpRequest, as far as I can tell.
nsDOMFileReader is possibly similarly broken.
Sounds kind of bad. Feel free to adjust as needed.
May not get to this before next week.
Assignee: nobody → bugs
This should do it. Technically the drop in unlink isn't absolutely needed, but we drop the preserved wrapper there too, if there is one (in nsDOMEventTargetHelper). These paths aren't super perf-critical, so the extra drop isn't too bad.
But then, the drop in unlink is rather useless... I'll remove it.
Attachment #746648 - Flags: review?(continuation) → review+
Comment on attachment 746648 [details] [diff] [review]
patch

[Security approval request comment]
How easily could an exploit be constructed based on the patch?
As of now we don't know of any exploit, but I don't think constructing one would be too difficult.

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
The commit message will be about simpler JS object holding.

Which older supported branches are affected by this flaw?
All.

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
Backports should be almost the same.

How likely is this patch to cause regressions; how much testing does it need?
Should be relatively safe.
Attachment #746648 - Flags: sec-approval?
sec-approval+ for m-c on 5/14 after we branch and ship. Please prepare and nominate branch patches as well.
Attachment #746648 - Flags: sec-approval? → sec-approval+
Ready for uplift noms?
Preparing patches ....
This patch seems to apply to branches; esr17 needs --fuzz=4.

[Approval Request Comment]
Bug caused by (feature/regressing bug #): Something ancient
User impact if declined: Possible GC/CC-related crashes
Testing completed (on m-c, etc.): Landed on m-c 2013-05-15
Risk to taking this patch (and alternatives if risky): Shouldn't be risky
String or IDL/UUID changes made by this patch: N/A
Comment on attachment 752297 [details] [diff] [review]
for branches

Approving for desktop branches; holding off on b2g18 until we know whether this needs to be uplifted to v1.0.1. Since it's only sec-high, I suspect it does not, but will wait for confirmation.
Attachment #752297 - Flags: approval-mozilla-esr17?
Attachment #752297 - Flags: approval-mozilla-esr17+
Attachment #752297 - Flags: approval-mozilla-beta?
Attachment #752297 - Flags: approval-mozilla-beta+
Attachment #752297 - Flags: approval-mozilla-aurora?
Attachment #752297 - Flags: approval-mozilla-aurora+
Attachment #752297 - Flags: approval-mozilla-b2g18? → approval-mozilla-b2g18+
Based on the steps in comment 0 and the lack of an existing test case, we're going to mark this qa- for verification purposes. If that changes, remove qa- and/or let us know how we could otherwise verify fixed. Thanks.
Whiteboard: [qa-] → [qa-][adv-main22+][adv-esr1707+]