Sent email to reporter requesting proof-of-concept CGI for exploit.
Created attachment 333792 [details] crash script

The obvious fix is to change line 203 in nsDirIndexParser::ParseFormat() to check for an allocation failure with something like this:

    if ((mFormat = new int[num+1]) == NULL)
        return NS_ERROR_OUT_OF_MEMORY;

As for triggering the bug, that's a little more annoying, and the impact is OS- and runtime-dependent. You also have to force an allocation failure, which is why the proof-of-concept exploit is a bit complicated. So I'm attaching a python script (not the exploit script) that should crash Firefox on Windows, but the crash is very slow and will consume over a gig of memory in the process.

An offending content body will look something like what I've shown below, but with a much longer string of "x x x" tokens (enough that the new call will fail). The address at tokens * 4 is then overwritten with 0xFFFFFFFF. Also, the script gzips the content to save some time (and even then it's intolerably slow).

Response content body:

    HTTP/1.0 200 OK
    Content-Type: application/http-index-format

    200: x x x x x x x x x x x x x x x
I just realized that this bug is mislabeled: it affects all hardware/OS/version combinations. In some combinations it's a code-execution bug (Firefox 2.0 and below on Windows) and in some it's a crash bug (like Firefox 3.0 on Windows). Whether it's code execution or a crash depends primarily on whether the new call throws an unhandled exception (which it does in Firefox 3.0). After that, exploitation is a matter of finding a reasonable address to overwrite.
Created attachment 335756 [details] [diff] [review] Patch to limit columns and catch NULL deref

This prevents the vulnerability both by checking for an excessive number of column headers and by catching a failed allocation. Checking the column headers isn't necessary to fix the bug, but there's no good reason to allow an arbitrary number of columns given the spec at http://www.mozilla.org/projects/netlib/dirindexformat.html
Assigning to Justin since he's supplied the patch, but we'll find someone to check it in for you.
Comment on attachment 335756 [details] [diff] [review]
Patch to limit columns and catch NULL deref

sr=dveditz

Looks good to me, but we need a network peer to also OK this. Looks like the original code should have had "if (!*pos) break;" before incrementing num, but it doesn't really hurt to allocate one too many if the format ends with extra spaces. Irrelevant to the security problem.
Comment on attachment 335756 [details] [diff] [review]
Patch to limit columns and catch NULL deref

> + // There are a maximum of six allowed header fields (doubled + terminator, just in case)

Please limit your lines to 80 characters.

> + if (num > (2*(sizeof(gFieldTable) / sizeof(gFieldTable[0]))))

Spaces around the * operator, please. And use NS_ARRAY_LENGTH to get the size of the array.

> + return NS_ERROR_OUT_OF_MEMORY;

No tabs, please.
Created attachment 344000 [details] [diff] [review] updated to comments
Christian: did you have substantive complaints about the patch? I've addressed your comments; please re-review so we can get this in for the code freeze.
Comment on attachment 344000 [details] [diff] [review] updated to comments Approved for 1.8.1.18 and 1.9.0.4, a=dveditz for release-drivers
Fix checked into trunk http://hg.mozilla.org/mozilla-central/rev/6fb0d02dc606
Fix checked into the 1.8 and 1.9.0 branches
So, reading above, we don't have a repro case because we didn't get a PoC CGI. QA would like to know whether there is a way to verify that this fix works.
Never mind, I see that the python file acts as a server for creating the bug conditions. The script says:

    localhost - - [28/Oct/2008 17:18:42] "GET / HTTP/1.1" 200 -
    388877 compressed bytes served

when running and a connection is made to it via 127.0.0.1.

Running with 3.0.3, Firefox peaks at around 490 MB of memory usage and pegs my CPU at 100% while doing it. After about 15 minutes, it recovers and Firefox is usable again. No crash is evident...

Running with last night's 3.0.4pre nightly on the same virtual machine (Ubuntu 8.04), I'm seeing exactly the same behavior with no difference at all. Comment 4 states that with Firefox 3 I should get a crash when I try this in 3.0.3. I'm going to try this on Windows as well, but I need to set up an environment to do so.
Interesting. Testing on Windows XP, the same test crashed Firefox 3.0.3. With the 3.0.4pre nightly with the fix (Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4pre) Gecko/2008102905 GranParadiso/3.0.4pre), memory usage rose but it eventually recovered with no crash. Marking verified for 1.9.0.4.
Testing with Firefox 2.0.0.x on Windows XP, I get up to about 420 MB of RAM usage (and 99% CPU) before it drops back to 28 MB of RAM and no CPU usage, bringing up a page. I cannot get 2.0.0.x to crash with this.
So, I've tried this a bunch of different times with 2.0.0.x on XP, even dropping the available RAM in the virtual machine down to 128 MB to constrain things. While I can get 3.0.3 to crash, nothing ever happens with 2.0.0.x other than it becoming very busy for a while. Has anyone actually seen this crash Firefox 2.0.0.x or earlier?
Comment on attachment 344000 [details] [diff] [review] updated to comments a=asac for 1.8.0 branch
I'm unable to verify this fix for 1.8.1.x because of the inability to replicate the crash on a 1.8.1.x build.
the patch is not nice:

    206     mFormat = new int[num+1];
    207     // Prevent NULL Deref - Bug 443299
    208     if (mFormat == nsnull)
    209         return NS_ERROR_OUT_OF_MEMORY;

operator |new| throws an exception on OOM, it doesn't return 0, as demonstrated by this proggie:

    int *mFormat;
    mFormat = new int[-2000];
    if (mFormat == 0)
        cout << "==" << endl;
    return 0;

    ./a.out
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    Aborted
georgi: this isn't the case with Gecko 1.8 on Windows; the toolchain we use there is MSVC6, where new returns null instead of throwing. Handling OOM by null-checking is the way all of our code expects to work. Until we've changed things, we should continue to ensure we have null checks.
This landed before 1.9.1 branched