Somebody reported this on Twitter: https://twitter.com/InfoSecMouse/status/484186535760109570 Apparently, we're not checking that the uncompressed size is less than the LZ4 block size (~1.9 GB, as stated in ). Not sure we can really cheat with uncompressedSize, but I'm opening a bug and marking it security-sensitive, just in case. When using compress, we check that compressedSize <= UINT32_MAX == 2**32 - 1 bytes, which is (close to) 4 GB, although the maximum block size of LZ4 is ~1.9 GB. Should we adjust this assert? We could also add an assert during decompression that uncompressedSize is indeed below this ~1.9 GB limit (I wonder if there is a constant for that limit somewhere).  http://dxr.mozilla.org/mozilla-central/source/mfbt/Compression.h#37
(In reply to Benjamin Bouvier [:bbouvier] from comment #0) > We could also add an assert during decompression that the uncompressedSize > is indeed less than this ~1.9GB limit (wonder if there is a constant for > that limit somewhere). There's no constant, unfortunately; I think you're supposed to derive it from LZ4_compressBound(). We should probably declare the appropriate constant in mfbt/Compression.h, with static_asserts and such.
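A minimal sketch of what declaring such a constant could look like. The names `kMaxLZ4InputSize` and `LZ4CompressBound` are made up for illustration (they are not existing mfbt API); the cap value (0x7E000000 bytes, ~1.97 GB) and the bound formula mirror upstream lz4's `LZ4_MAX_INPUT_SIZE` and `LZ4_COMPRESSBOUND` macros, assuming our imported copy matches upstream:

```cpp
#include <cstdint>

// Hypothetical constant mirroring upstream lz4's internal input-size cap
// (0x7E000000 bytes, roughly 1.97 GB). The name is illustrative only.
const uint32_t kMaxLZ4InputSize = 0x7E000000;

// Worst-case compressed size for a given input size, replicating
// upstream's LZ4_COMPRESSBOUND(isize) == (isize) + ((isize)/255) + 16.
constexpr uint32_t LZ4CompressBound(uint32_t aInputSize) {
  return aInputSize + (aInputSize / 255) + 16;
}

// Even the bound of the largest legal input must still fit in a signed
// 32-bit int, since the lz4 C API traffics in int sizes.
static_assert(LZ4CompressBound(kMaxLZ4InputSize) <= uint32_t(INT32_MAX),
              "compressBound of the max input size must fit in an int");
```

With this in place, both the compress-side assert and the proposed decompress-side assert could compare against `kMaxLZ4InputSize` instead of UINT32_MAX.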
Just found more context (previous tweets): http://blog.securitymouse.com/2014/07/i-was-wrong-proving-lz4-exploitable.html Apparently, the author is not talking about the same kind of block size, and assumes the input stream can be handcrafted. That cannot happen during asm.js module decompression, as these streams can't come in from outside the engine. I am wondering whether we could craft a module such that, once compressed, it follows the described pattern (a fixed run of 0xFF bytes and then another byte). Should we close? Or maybe take advantage of this bug to add the ~1.9 GB assertions?
Yeah, it seems that if there is a bug here, it would be a general Firefox lz4 bug, and asm.js serialization wouldn't need to take any special steps concerning block size. The potential vulnerability seems to depend on an overly-big block size (and that block size is an internal detail, not this ~1.9 GB limit). Is the block size a static parameter? I tried grepping the lz4 source for "block size" and variations thereof, but couldn't find anything. As for the ~1.9 GB limit: ideally, the LZ4 APIs would remove this hazard and check dynamically (instead of asserting as they currently do). Apparently, none of the uses of LZ4::compress in the tree check for sizes > INT_MAX: http://mxr.mozilla.org/mozilla-central/source/gfx/layers/LayerScope.cpp#467 http://mxr.mozilla.org/mozilla-central/source/toolkit/components/workerlz4/lz4.cpp#67 Thoughts on either of these, Till?
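A sketch of what a dynamic (release-mode) check at those call sites could look like. Both `kMaxLZ4InputSize` and `inputSizeOk` are hypothetical names invented for this example, assuming a cap matching upstream lz4's ~1.9 GB limit; the real fix would presumably live in mfbt/Compression.h:

```cpp
#include <cstddef>
#include <cstdint>

// Assumed to mirror upstream lz4's input cap (~1.9 GB); hypothetical name.
const size_t kMaxLZ4InputSize = 0x7E000000;

// Release-mode guard: callers would bail out (e.g. fall back to storing
// the data uncompressed) instead of relying on a debug-only assert that
// compiles away in release builds.
inline bool inputSizeOk(size_t aSrcSize) {
  return aSrcSize <= kMaxLZ4InputSize;
}
```

Since the cap is well below INT_MAX, a check like this would also rule out the > INT_MAX sizes that the call sites linked above currently pass straight through to the int-typed lz4 API.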
I went looking for some information on lz4 block sizes, because I didn't see anything in the current code that mentions them. This Google doc: https://docs.google.com/document/d/1gZbUoLw5hRzJ5Q71oPRN6TO4cRMTZur60qip-TE7BhQ/edit?pli=1#heading=h.ujcdmapf87vn suggests that the 8MB block-size restriction comes with some legacy lz4 format. That format doesn't seem to be supported by our code. The seemingly current description of the lz4 format: https://code.google.com/p/lz4/source/browse/trunk/lz4_format_description.txt doesn't say anything about restricted block sizes. There also seems to be some sort of dust-up between the person who reported this vulnerability and the author of lz4: http://fastcompression.blogspot.com/ It's not clear to me whether this is security drama, whether our version is actually vulnerable, or whether there are actual problems in the first place.
So if I understand correctly, you could theoretically get LZ4 to write to lower addresses than expected by making the output pointer overflow. First of all, this relies on the `length` value in a single iteration of the decompression loop becoming obscenely large. Length values that merely add up to a very large total don't cut it, as the first one going above `oend` would cause the loop to bail. Short of another, much more severe bug in the algorithm, I don't see how such a huge value in a single iteration would be possible. But let's assume there is such a bug hiding in the algorithm: you'd still need very precise control of the data being decompressed. I don't see how any of our usages would give you that control; asm.js most certainly doesn't. So yes, this does look like a bug, and on Linux it *might* actually be a problem. I very much doubt it is for us, however. We should still land a newer LZ4 version, probably as soon as Yann releases one that cleans this part up for good.
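To illustrate the hazard being discussed (this is not the actual lz4 code, just the general pattern): a bounds check written as pointer addition can wrap around the address space on 32-bit targets when `length` is huge, so the check passes even though the copy would run wild; a check written as pointer subtraction cannot wrap:

```cpp
#include <cstddef>

// Wrap-prone form: `op + length` can overflow the address space, so a huge
// length may compare as <= oend and pass. (Pointer overflow is also
// undefined behavior in C/C++, so the compiler gives no guarantees here.)
bool fitsWrapProne(const char* op, const char* oend, size_t length) {
  return op + length <= oend;
}

// Robust form: subtracting two valid pointers into the same buffer cannot
// wrap, no matter how large `length` is.
bool fitsSafe(const char* op, const char* oend, size_t length) {
  return length <= size_t(oend - op);
}
```

Here `op` stands for the output cursor and `oend` for the end of the output buffer, as in the decompression loop described above.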
http://blog.securitymouse.com/2014/07/so-i-guess-this-happened.html https://www.youtube.com/watch?v=BcCDETzk4zc Title: Lab Mouse Security: Remote Code Execution via Browser (LZO) Description: In which a shell is popped as a video exploit compromises Firefox 30.0 on an updated Ubuntu x86_64. Just for fun, the shell executes "telnet nyancat.dakko.us"! Notice it executes twice. Once when the video starts playing, and once when it stops.
That sounds really bad. Did this guy at least contact us in any way and do we have a patch in hand in another bug, or is his "[w]e're kind to the Internet community like that" thing even less true than that? I'll look into updating our LZ4 import ASAP in any case.
(In reply to Till Schneidereit [:till] from comment #8) > That sounds really bad. Did this guy at least contact us in any way and do > we have a patch in hand in another bug, or is his "[w]e're kind to the > Internet community like that" thing even less true than that? > > I'll look into updating our LZ4 import ASAP in any case. I've sent a mail to dveditz and curtisk to ask them if they've been contacted by the consultant, as arroway told me there are some security contact mails only few people have access to. In other cases, we really need to contact him, at least to ask for a proof of concept test-case.
Discussed at lunch with Benjamin. Tracking in case we have to react quickly on this (especially with the 31 release coming). Benjamin checked and ESR seems unaffected.
status-firefox30: --- → affected
status-firefox31: --- → affected
status-firefox32: --- → affected
status-firefox33: --- → affected
status-firefox-esr24: --- → unaffected
tracking-firefox31: --- → +
tracking-firefox32: --- → +
tracking-firefox33: --- → +
The patch in bug 1031414 landed, resolving this.
Status: NEW → RESOLVED
Last Resolved: 4 years ago
Resolution: --- → FIXED
status-b2g-v1.3: --- → unaffected
status-b2g-v1.3T: --- → unaffected
status-b2g-v1.4: --- → affected
status-b2g-v2.0: --- → fixed
status-b2g-v2.1: --- → fixed
status-firefox30: affected → wontfix
status-firefox31: affected → fixed
status-firefox32: affected → fixed
status-firefox33: affected → fixed
Assignee: nobody → till
Target Milestone: --- → mozilla33
Bug 1031414 was denied for 1.4.
status-b2g-v1.4: affected → wontfix