User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.16) Gecko/20110319 Firefox/3.6.16

We have a web page using jQuery 1.5.1 that makes a POST ajax request to a REST web service on another domain. Firefox sends a preflight OPTIONS request, then the POST. If the server responds with an Access-Control-Max-Age header set to a number of seconds longer than it takes to make 100 unique-URI requests, Firefox crashes. On the 101st unique-URL POST request, Firefox makes the 101st OPTIONS request to the server, which replies, and then Firefox crashes with the attached crash dump.

Reproducible: Always

Steps to Reproduce:
1. Edit CrossDomainTest.html line 22 to point to a REST web service.
2. Ensure the web service response includes an Access-Control-Max-Age header set to something like "1728000", which is 20 days.
3. Open the page in Firefox.
4. On the 101st request, Firefox crashes.

Actual Results: Firefox crashed and showed the submit-crash-dump dialog.

Expected Results: Firefox should continue to make requests.

This is related to the submitted crash dump ID: 7636c991-6b98-4abf-b2c4-7ea2c2110322 (Module: xul.dll). When no Access-Control-Max-Age header is defined in the web service response, Firefox continues to make more than 101 requests. We are unsure whether this bug could be exploited, so we are setting the "Many users could be harmed" option until further investigation confirms the extent of this issue.
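For readers unfamiliar with preflight caching: a minimal sketch of the idea is below. This is not Firefox's implementation; the class and member names are hypothetical, and the 100-entry cap is an assumption inferred from the report (crash on the 101st unique URL). A successful OPTIONS response carrying Access-Control-Max-Age lets the browser skip the preflight for that URL until the entry expires, and a bounded cache means the 101st unique URL forces an eviction.

```cpp
#include <cstddef>
#include <ctime>
#include <map>
#include <string>

// Hypothetical sketch of a CORS preflight cache keyed by request URL.
// While an entry is unexpired, the browser may skip the OPTIONS preflight
// for that URL. The 100-entry cap is an assumption based on the report.
struct PreflightEntry {
    std::time_t expires;  // absolute expiry time for this URL's entry
};

class PreflightCache {
    static constexpr std::size_t kMaxEntries = 100;  // assumed cap
    std::map<std::string, PreflightEntry> entries_;
public:
    void Put(const std::string& url, long maxAgeSeconds, std::time_t now) {
        if (entries_.size() >= kMaxEntries)
            entries_.erase(entries_.begin());  // naive eviction, for the sketch
        entries_[url].expires = now + maxAgeSeconds;
    }
    // True if an OPTIONS preflight must be sent before a request to url.
    bool NeedsPreflight(const std::string& url, std::time_t now) const {
        auto it = entries_.find(url);
        return it == entries_.end() || it->second.expires <= now;
    }
};
```

With a large max-age such as 1728000 seconds, every entry in the table is still valid when the 101st unique URL arrives, so something must be evicted to make room.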
Oy. So the code that landed for bug 389508 is buggy. Most pertinently, nsAccessControlLRUCache::GetEntry with aCreate = true will create an entry, put it in the hashtable, then maybe evict some entries from the hashtable if there are too many, and then return the newly created entry. Which means it can evict the newly-created entry (and in fact will preferentially do so!). As a separate note, RemoveExpiredEntries has a buggy check for whether an entry needs removal...
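The ordering problem can be illustrated with a toy LRU cache. This is a sketch only, not the actual Firefox code; the names (LRUCache, Entry, kCapacity) are hypothetical and the capacity is shrunk to 2 for demonstration. The buggy sequence was insert, then evict, then return a pointer, so the just-created entry could itself be the one destroyed. The version below does the eviction *before* the insertion, which is the shape of the fix:

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Illustrative sketch only -- not the actual Firefox code.
struct Entry { std::string key; };

class LRUCache {
    static constexpr std::size_t kCapacity = 2;  // tiny, for demonstration
    std::list<Entry> lru_;  // front = most recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> map_;
public:
    Entry* GetEntry(const std::string& key) {
        auto it = map_.find(key);
        if (it != map_.end()) {
            // Cache hit: move to the front and return the existing entry.
            lru_.splice(lru_.begin(), lru_, it->second);
            return &*it->second;
        }
        // Make room *before* inserting, so the new entry can never be the
        // one that gets evicted (the buggy code evicted after inserting,
        // possibly destroying the entry whose address it then returned).
        while (map_.size() >= kCapacity) {
            map_.erase(lru_.back().key);
            lru_.pop_back();
        }
        lru_.push_front(Entry{key});
        map_[key] = lru_.begin();
        return &lru_.front();
    }
    bool Contains(const std::string& key) const { return map_.count(key) != 0; }
};
```

With the buggy ordering, the returned pointer can refer to an entry that has already been removed and freed, which is exactly the use-after-free discussed later in this bug.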
Created attachment 521387 [details] [diff] [review] Proposed fix (1.9 branches) Jonas, want to write a test for this? I bet you can do that way faster than I can...
Comment on attachment 521387 [details] [diff] [review] Proposed fix (1.9 branches) Looks good. Would you mind waiting to land this until after bug 644476 has been checked in? Those patches will be a pain to merge otherwise. I'm happy to do the merging/landing of this one if you want.
Waiting is fine, if that other bug is landing soonish. I'd like to get this fixed on branches too...
Bug 644476 has been landed. I'll take a stab at writing a test for this bug.
Created attachment 522502 [details] [diff] [review] Updated to tip + add test bz, could you have a quick look over the test?
Comment on attachment 522502 [details] [diff] [review] Updated to tip + add test Looks fine to me; I assume this does fail without the patch, right?
Oh, and when I'm pushing this to m-c, should I include the test? :(
Yup, it did
Oh, hmm.. yeah, you probably should not include the test :( We need a keyword for tracking tests that are pending landing.
Pushed http://hg.mozilla.org/mozilla-central/rev/79487686ac29 with the fix, but not the test.
(In reply to comment #12)
> We need a keyword for tracking tests that are pending landing.

Jonas: we're using the in-testsuite: ? flag for that

bz: as a security bug, what's the severity here? Are we stuffing one too many items into a list (overrun)? Not realizing the write failed and reading too far?
Daniel, the severity is that nsPreflightCache::GetEntry returns a pointer to a deleted entry in the case this bug is concerned about. Then we access members of that entry; looks like most common is the entry->mMethods.Length() access or the entry->mHeaders.Length() access, which reads from memory we don't own and crashes. Jonas, are there other GetEntry callers than AddResultToCache?
> Jonas: we're using the in-testsuite: ? flag for that

We're also using it to track bugs that need tests written. I have several hundred of the latter with this flag set assigned to me, so I'm hoping that I'm not the one who's supposed to notice when we open this and land the test....
Comment on attachment 522502 [details] [diff] [review] Updated to tip + add test Requesting whatever branch approvals are relevant....
If we were to take this for Macaw we'd need to fix it on the branches too. Is the patch the same?
(I'd guess from the approval requests it is, just want to make it explicit)
It's not. The correct patch for the branches is the original "Proposed fix" patch on this bug. Would you prefer that I move the approval requests there?
Comment on attachment 521387 [details] [diff] [review] Proposed fix (1.9 branches) Approved for the 1.9.2 and 1.9.1 branches, a=dveditz for release-drivers
Comment on attachment 522502 [details] [diff] [review] Updated to tip + add test Approved for the mozilla2.0 repository, a=dveditz for release-drivers
http://hg.mozilla.org/releases/mozilla-1.9.2/rev/2abcb340f405
http://hg.mozilla.org/releases/mozilla-1.9.1/rev/15c9a0503705
http://hg.mozilla.org/releases/mozilla-2.0/rev/647a0e97f960

Note that the "Updated to tip" patch didn't work for mozilla-2.0. This is what I ended up doing:
1. Landed the "Proposed fix (1.9 branches)" patch on 1.9.2
2. hg transplanted it to 1.9.1
3. hg transplanted it from 1.9.2 onto 2.0
Yeah, I should have been clearer: the "Proposed fix" patch was the right one for _all_ branches, including 2.0. The "Updated to tip" one was updated to a file move that happened after Firefox 4 shipped. Thank you for checking this in!
It's OK to check in the tests now.
(In reply to comment #15)
> Daniel, the severity is that nsPreflightCache::GetEntry returns a pointer to
> a deleted entry in the case this bug is concerned about. Then we access
> members of that entry; looks like most common is the
> entry->mMethods.Length() access or the entry->mHeaders.Length() access,
> which reads from memory we don't own and crashes.

But is the deleting and using relatively atomic (with respect to the page's code)? The web page makes a call that creates, deletes, and then dereferences an entry -- how in that time can evil content insert an allocation that reuses the deleted object's memory? If you can't do that then it's going to die safely on the read; if you can, then it's exploitable. I guess you could have a web worker and have it start churning away allocating appropriately sized chunks right as you make the fatal 101st XHR?
> But is the deleting and using relatively atomic (with respect to the page's
> code)?

I think so, yes...

> I guess you could have a web worker and have it start churning away allocating
> appropriately sized chunks right as you make the fatal 101st XHR?

That's a possibility, indeed. ;)
Pushed the test: http://hg.mozilla.org/projects/cedar/rev/a29229640bf8
test is now on m-c after merging cedar: http://hg.mozilla.org/mozilla-central/rev/a29229640bf8