Bug 1334933 (CVE-2017-5400)

targeted ASM.JS JIT-Spray allows bypassing ASLR and DEP

RESOLVED FIXED in Firefox ESR 45

Status

defect
RESOLVED FIXED
Opened: 2 years ago
Last modified: 4 months ago

People

(Reporter: Rh01, Assigned: jandem)

Tracking

(Depends on 1 bug, {csectype-priv-escalation, sec-critical})

Trunk
mozilla54
x86
Windows
Points:
---
Dependency tree / graph
Bug Flags:
sec-bounty +

Firefox Tracking Flags

(firefox-esr45 52+ fixed, firefox51 wontfix, firefox52+ fixed, firefox-esr52 fixed, firefox53+ fixed, firefox54+ fixed)

Details

(Whiteboard: [post-critsmash-triage][adv-main52+][adv-esr45.8+])

Attachments

(7 attachments, 2 obsolete attachments)

Reporter

Description

2 years ago
Dear Mozilla Team,

The mitigations made against bug 1325200 can be circumvented, and hence ASM.JS
JIT-Spray in combination with heap spray still allows fully bypassing ASLR and
DEP in 32-bit Firefox on Windows. When exploiting a memory corruption
vulnerability, an attacker can jump directly to his controlled machine code
without resorting to memory disclosures/info-leaks or code reuse.

As stated in comment #52 of bug 1325200, the attacker first occupies many memory
regions to decrease the entropy of possible 64KB-aligned base addresses and then
forces the creation of the maximum allowed number of ASM.JS code regions. As
shown in the PoC, his controlled machine code lands at predictable addresses.
(Can you reproduce?)

I think the best mitigations are 1) random nop insertion and 2) constant
blinding of four-byte constants:

1) Inserting different-sized nop-like instructions (such as "lea esp, [esp]" and
"nop dword [REG32 + OFFSET]") at random locations into the stream of machine code
instructions decreases the probability of hitting a sprayed nop sled. Therefore,
each JIT module should have different random nops. Even if the attacker hits a
nop sled, the instruction stream may "re-synchronize" and his payload becomes
unusable.

2) If four-byte constants are blinded, only two(/one)-byte constants can be used
to hide attacker-controlled machine instructions. However, in combination with 1)
I doubt that an attacker can do anything meaningful with that.
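The blinding in 2) can be illustrated with a small sketch (hypothetical code, not SpiderMonkey's actual implementation; all names here are made up): the compiler embeds value ^ key plus a randomly chosen key instead of the raw immediate, and the emitted code XORs them back together at runtime, so an attacker-chosen 4-byte pattern never appears verbatim in executable memory.

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// Hypothetical sketch of constant blinding: the 4-byte immediate that would
// land in JIT code is XORed with a random per-emission key. Both the blinded
// value and the key appear in the code stream, but neither byte pattern is
// attacker-chosen.
struct BlindedConstant {
    uint32_t blinded; // value ^ key, what actually lands in the code stream
    uint32_t key;     // random key, embedded too, but not attacker-controlled
};

inline BlindedConstant blindConstant(uint32_t value, std::mt19937& rng) {
    uint32_t key = rng();
    return BlindedConstant{value ^ key, key};
}

// What an emitted "mov reg, blinded; xor reg, key" sequence computes.
inline uint32_t unblindAtRuntime(const BlindedConstant& c) {
    return c.blinded ^ c.key;
}
```

The runtime result is unchanged, but unless the attacker can predict the key, they cannot force a chosen instruction encoding into the immediate bytes.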

Of course both mitigations may have an unacceptable performance overhead :/.
Still, I hope this helps.

If you need more information, please let me know.

Best,
    Rh0


P.S.:
If requested I can provide a PoC with more code execution (e.g.
WinExec("cmd")) soon.
Reporter

Updated

2 years ago
Summary: targeted ASM.JS JIT-Spray allows bypassing ASLR AND DEP → targeted ASM.JS JIT-Spray allows bypassing ASLR and DEP
Group: core-security → javascript-core-security
Status: UNCONFIRMED → NEW
Ever confirmed: true
Jandem: now can we fix the other parts of Rh0's original bug? Would be super nice to get this fixed in 52 so we aren't easily hacked at Pwn2Own.
Trying to understand the technique being described here: is it the case that the malloc() performed by the 'new ArrayBuffer(0x100000)' is allocating in a predictable pattern that then allows one to fill up the heap with non-executable memory and then free up a few holes at predictable addresses (that subsequently get filled by executable allocations)?
Assignee

Comment 3

2 years ago
(In reply to Daniel Veditz [:dveditz] from comment #1)
> Would be super nice to get this fixed in 52 so we aren't easily hacked at Pwn2Own.

Pwn2Own will use 64-bit builds I hope? 32-bit with its tiny address space is easier to heap-spray/exploit.

I'll be working on these bugs this week, or at least find owners for them.
Assignee

Comment 4

2 years ago
(In reply to Luke Wagner [:luke] from comment #2)
> Trying to understand the technique being described here: is it the case that
> the malloc() performed by the 'new ArrayBuffer(0x100000)' is allocating in a
> predictable pattern that then allows one to fill up the heap with
> non-executable memory and then free up a few holes at predictable addresses
> (that subsequently get filled by executable allocations)?

It's just allocating a ton of memory so code allocations after that are not randomized properly and start to be predictable.

Regarding VirtualAlloc randomization, here are things we need to improve:

(1) Instead of trying once and then doing a non-randomized VirtualAlloc, we could try 5 times or so.

(2) Instead of using only the [64 MiB, 1 GiB) range on 32-bit, we should consider using the full address space. I tried this locally and it makes most of our randomized allocations succeed on this PoC. There's a comment in ComputeRandomAllocationAddress saying:

     * x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range
     * tries to avoid system default DLL mapping space. In the end, we get 13
     * bits of randomness in our selection.

We need input from a Windows expert, but limiting ourselves to 1 GB on Win32 seems overly conservative.

(3) After that, to prevent these heap sprays, maybe we could fail code allocations on Win32 if we're using more than X % (90%?) of our address space. Not sure if there are APIs for this that are useful (we care about 64K chunks, not necessarily free pages).
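Point (1) could look roughly like this (a hypothetical, portable sketch using POSIX mmap in place of VirtualAlloc; the function name and range are made up for illustration): try several random addresses, keeping a mapping only if the kernel honored the hint, and fall back to a non-randomized allocation only if all attempts fail.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <random>
#include <sys/mman.h>

// Hypothetical sketch of improvement (1): attempt up to `attempts` random
// 64 KiB-aligned addresses in roughly [64 MiB, 1 GiB + 64 MiB). If the kernel
// places the mapping somewhere else, release it and try another address.
// Only after all attempts fail do we accept whatever address the OS picks.
void* AllocAtRandomAddress(size_t bytes, int attempts, std::mt19937_64& rng) {
    for (int i = 0; i < attempts; i++) {
        uintptr_t hint = (rng() % (uintptr_t(1) << 30)) & ~uintptr_t(0xFFFF);
        hint += uintptr_t(64) << 20; // stay above 64 MiB, as on Win32
        void* p = mmap(reinterpret_cast<void*>(hint), bytes,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            continue;
        if (p == reinterpret_cast<void*>(hint))
            return p; // got the randomized address we asked for
        munmap(p, bytes); // placed elsewhere: release and retry
    }
    // Fall back to a non-randomized allocation.
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}
```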

dmajor, since you're familiar with Windows, thoughts on (2) and (3)?
Flags: needinfo?(dmajor)
Assignee

Comment 5

2 years ago
Spinning the NOP insertion and constant blinding mitigations mentioned in comment 0 off to pre-existing bug 1333005.
Depends on: 1333005
Flags: needinfo?(jdemooij)
(In reply to Jan de Mooij [:jandem] from comment #3)
> Pwn2Own will use 64-bit builds I hope? 32-bit with its tiny address space is
> easier to heap-spray/exploit.

Given that we're still not proactively shipping Win64 builds, I wouldn't fault them for testing 32-bit builds still.
After some discussion with jandem and dveditz, it seems a fundamental problem is, as comment 2 was asking, that we're still able to put executable code at predictable addresses due to the completely predictable allocation of *non*-executable memory. So before jumping all the way to bug 1333005, I think our first step (which should fix this bug) should be to ensure that JIT code is never forced into a predictable address.

One strategy to ensure this is: reserve a contiguous range of MaxCodeBytesPerProcess up front (once per process, when JS engine is first initialized, at a fully random address since the address space is fresh) and then only allocate JIT code out of this range.  This would have multiple side benefits: avoiding the need to call RtlAddFunctionTable() at runtime on Windows (which currently has profiler deadlock issues), making platforms more similar, making all x64 jumps/calls "near", etc.
Assignee

Comment 8

2 years ago
The plan is to have a bitvector with a bit for each page. When we allocate X pages, we look for X consecutive free bits (starting at a random location to get ASLR-like behavior); if we find some, we set those bits and change the protection of the pages. When we free pages, we just change the protection to PROT_NONE and clear the corresponding bits.
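The plan above can be sketched roughly as follows (a simplified stand-in, not the actual ProcessExecutableMemory code; MaxPages and the class name are invented for illustration, and the commit/decommit protection changes are left as comments):

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified sketch of the planned allocator: one bit per page. Allocating
// N pages scans for N consecutive clear bits starting at a random page index
// (ASLR-like behavior), sets them, and would then commit those pages.
// Freeing clears the bits and would flip the pages back to PROT_NONE.
constexpr size_t MaxPages = 1024; // hypothetical region size in pages

class PageBitmap {
    std::bitset<MaxPages> used_;
public:
    // Returns the first page index of a free run of `count` pages found when
    // scanning from `startHint`, or SIZE_MAX if no such run exists.
    size_t allocate(size_t count, size_t startHint) {
        for (size_t offset = 0; offset < MaxPages; offset++) {
            size_t base = (startHint + offset) % MaxPages;
            if (base + count > MaxPages)
                continue; // a single run must not wrap around the region
            size_t i = 0;
            while (i < count && !used_[base + i])
                i++;
            if (i == count) {
                for (size_t j = 0; j < count; j++)
                    used_[base + j] = true;
                return base; // caller would now commit these pages
            }
        }
        return SIZE_MAX;
    }
    void free(size_t base, size_t count) {
        for (size_t j = 0; j < count; j++)
            used_[base + j] = false; // caller would decommit / PROT_NONE
    }
};
```

Because the scan starts at a random index, two identical allocation sequences land at different offsets within the reserved region, which is what defeats the predictable-address spray.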
Assignee: nobody → jdemooij
Status: NEW → ASSIGNED
Flags: needinfo?(dmajor)
(In reply to Jan de Mooij [:jandem] from comment #4)
> Regarding VirtualAlloc randomization, here are things we need to improve:
> 
> (1) Instead of trying once and then doing a non-randomized VirtualAlloc, we
> could try 5 times or so.
> 
> (2) Instead of using only the [64 MiB - 1GiB) range on 32-bit, we should
> consider using the full address space. I tried this locally and it makes
> most of our randomized allocations succeed on this PoC. There's a comment in
> ComputeRandomAllocationAddress saying:
> 
>      * x86: V8 comments say that keeping addresses in the [64MiB, 1GiB) range
>      * tries to avoid system default DLL mapping space. In the end, we get 13
>      * bits of randomness in our selection.
> 
> We need input from a Windows expert, but restricting ourselves to 1 GB on
> Win32 seems very restrictive.

From a quick look at the loaded modules on a random VM, I see most libraries in the 6Cxxxxxx to 7xxxxxxx range, even across restarts. It should be fine to increase to 1.5 GiB on pure-Win32 and also include the 2-4 GiB range on WoW64. (We should do a more thorough test across multiple Windows versions and bitness if we go down this route, but that's straightforward.)

Even covering the entire address space probably wouldn't be the worst thing in the world, as long as we increase the attempts in (1).

> (3) After that, to prevent these heap sprays, maybe we could fail code
> allocations on Win32 if we're using more than X % (90%?) of our address
> space. Not sure if there are APIs for this that are useful (we care about
> 64K chunks, not necessarily free pages).

I looked at those APIs some time ago in bug 1007534 comment 13. You could check ullAvailVirtual from GlobalMemoryStatusEx (since it will be counting at 64K granularity anyway). Is 5-7ms too slow?
Assignee

Comment 10

2 years ago
(In reply to David Major [:dmajor] from comment #9)
> Even covering the entire address space probably wouldn't be the worst thing
> in the world, as long as we increase the attempts in (1).

Yeah. If we can get away with pre-allocation, there will only be a single large VirtualAlloc at process startup, so we could just try that at 10 different random addresses or so (and the virtual address space will be mostly empty still at that point). I think for now we can keep the ComputeRandomAllocationAddress function as-is and if everything works out we could do a follow-up patch to change it to use the entire address space on 32-bit.

> I looked at those APIs some time ago in bug 1007534 comment 13. You could
> check ullAvailVirtual from GlobalMemoryStatusEx (since it will be counting
> at 64K granularity anyway). Is 5-7ms too slow?

Thanks. Hopefully we don't need this, if pre-allocating works :)
Assignee

Comment 11

2 years ago
Posted patch Patch (obsolete) — Splinter Review
This patch works nicely. We reserve a MaxCodeBytesPerProcess region on startup, then we commit/decommit random pages from that based on a bitset.

I added some logging on Mac and I tested this with VMMap on Win32 + Win64 and everything looks reasonable: pages are allocated randomly and we decommit them correctly.

The patch is pretty large because it moves some code around (I removed ExecutableAllocator{Win,Posix}.cpp and merged them into the new ProcessExecutableMemory.cpp file) to make it easier to change this code. Sorry about that.

I also cleaned up some ExecutableAllocator code and interfaces. I removed the W^X #define as we're probably not going to turn that off anytime soon.

Overall this patch is a nice refactoring/cleanup I think, and if this sticks there will be more code to remove: the Win64 profiler lock, jump tables on x64, etc.
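The reserve-then-commit scheme the patch describes can be sketched with POSIX calls (a hypothetical illustration; the real patch uses VirtualAlloc with MEM_RESERVE/MEM_COMMIT on Windows, and JIT pages would end up RX rather than RW): reserve one large inaccessible region up front, then flip individual page protections to commit and decommit within it.

```cpp
#include <cassert>
#include <cstddef>
#include <sys/mman.h>

// Hypothetical sketch: a PROT_NONE mapping claims the address space without
// making it usable; mprotect then commits and decommits pages inside it.
void* ReserveRegion(size_t bytes) {
    void* p = mmap(nullptr, bytes, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}

bool CommitPages(void* addr, size_t bytes) {
    // Make the pages usable (RW here; JIT code would switch to RX after
    // the code is written, under W^X).
    return mprotect(addr, bytes, PROT_READ | PROT_WRITE) == 0;
}

bool DecommitPages(void* addr, size_t bytes) {
    // Back to PROT_NONE; madvise(MADV_DONTNEED) would also drop the backing.
    return mprotect(addr, bytes, PROT_NONE) == 0;
}
```

Because everything executable lives inside the one region reserved at startup, heap sprays elsewhere in the address space can no longer influence where JIT code lands.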
Attachment #8832859 - Flags: review?(luke)
Assignee

Comment 12

2 years ago
Comment on attachment 8832859 [details] [diff] [review]
Patch

Review of attachment 8832859 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jit/ProcessExecutableMemory.cpp
@@ +315,5 @@
> +    static const size_t NumWords = NumBits / BitsPerWord;
> +
> +    mozilla::Array<WordType, NumWords> words_;
> +
> +    WordType indexToWord(uint32_t index) const {

I just noticed this one (and the places where it's used) should use uint32_t or size_t instead of WordType (it doesn't matter as it's the same type but still).
Comment on attachment 8832859 [details] [diff] [review]
Patch

Review of attachment 8832859 [details] [diff] [review]:
-----------------------------------------------------------------

Looks really good, one question:

::: js/src/jit/ProcessExecutableMemory.cpp
@@ +244,5 @@
> +static void
> +CommitPages(void* addr, size_t bytes, ProtectionSetting protection)
> +{
> +    if (!VirtualAlloc(addr, bytes, MEM_COMMIT, ProtectionSettingToFlags(protection)))
> +        MOZ_CRASH();

nit: might be good to include an informative string here and in DecommitPages for crash reporting

@@ +410,5 @@
> +    MOZ_MUST_USE bool init() {
> +        pages_.init();
> +
> +        MOZ_RELEASE_ASSERT(!initialized());
> +        MOZ_RELEASE_ASSERT(PageSize == gc::SystemPageSize());

I'm pretty sure there are some second-tier platforms for which this will fail.  I also think it's possible to configure a Windows system to use larger pages so surely there are a handful of users that do this.  Probably we should use gc::SystemPageSize().

@@ +476,5 @@
> +        // Start at a random location, for less predictable addresses. It also
> +        // makes it more likely we will land on a free page.
> +        size_t page = seed % MaxCodePages;
> +
> +        // ExecutableAllocator usually allocates 64K chunks. Make sure such

Is there a way we could have a symbolic constant in ProcessExecutableAllocator that links this code here and the uses in ExecutableAllocator? And if our effective quantum is 64K, then maybe we can use *that* as our PageSize and MOZ_RELEASE_ASSERT(gc::SystemPageSize() <= PageSize) (allowing us to keep a static constant MaxCodePages).
Assignee

Comment 14

2 years ago
Posted patch Patch v2Splinter Review
Good points. I updated the patch to use ExecutableCodePageSize (= 64K) chunk sizes and I changed ExecutableAllocator to use that value as well.

I didn't do this initially because of asm.js/wasm allocations that can be much smaller than 64K, but the plan is to move them to ExecutableAllocator soon so this will no longer be an issue then.
Attachment #8832859 - Attachment is obsolete: true
Attachment #8832859 - Flags: review?(luke)
Attachment #8832935 - Flags: review?(luke)
Comment on attachment 8832935 [details] [diff] [review]
Patch v2

Review of attachment 8832935 [details] [diff] [review]:
-----------------------------------------------------------------

Very nice!
Attachment #8832935 - Flags: review?(luke) → review+
Assignee

Comment 16

2 years ago
Comment on attachment 8832935 [details] [diff] [review]
Patch v2

[Security approval request comment]
> How easily could an exploit be constructed based on the patch?
Not easily.

> Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
No.

> Which older supported branches are affected by this flaw?
All.

> Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
The patches should apply cleanly, or at least not be that hard to backport.

> How likely is this patch to cause regressions; how much testing does it need?
It adds a new custom allocator for JIT pages, so there's definitely some risk. If it's stable on Nightly (with regular JS fuzzing) for a few days I don't expect serious problems though.
Attachment #8832935 - Flags: sec-approval?
Please advise the Uptime folks about the address space usage here, if you haven't already.
Assignee

Comment 18

2 years ago
erahm, njn: we will reserve (per process) a region of memory that's 144 MB on 32-bit platforms (we could shrink this a bit if really needed) and 640 MB on 64-bit platforms. JIT code allocations will then use *only* pages in that part of the address space. 

This fixes the security issue here where a heap spray is used to trick Windows into using predictable addresses for JIT code. It could prevent certain UAF bugs as well (because this memory is never available for anything else). It will also have a number of other benefits: it should avoid the profiler deadlocks on Win64 because we only call RtlAddFunctionTable at startup, we will be able to use near jumps on x64, and it makes us behave more consistently across platforms.
640 MB on 64-bit is fine, address space is plentiful there.

144 MB on 32-bit is less fine. Many users only have 2 GiB of address space; 144 MB is 7% of that, and virtual OOMs are common. Please reduce the size if you can.
Do we have a sense of how much total JIT code 32-bit users have when they OOM? If they frequently approach 144MB anyway, then this isn't really worse, just moving the allocation up-front. But if the actual number is lower, then yeah, that 144MB reservation could be taking away from other things.
njn: while virtual OOMs are certainly a problem on 32-bit, I had assumed that the problem here was fragmentation, which shrinking the space by 7% with one big allocation wouldn't really make worse. Is that the case, or do we also see virtual OOMs where the entire 2 GB is allocated?

Anyhow, I expect we could reduce it somewhat, though, without hurting.
(In reply to Luke Wagner [:luke] from comment #21)
> njn: while virtual OOMs are certainly a problem on 32-bit, I had assumed
> that the problem here was fragmentation which shrinking the space by 7% with
> 1 big allocation wouldn't really hurt.  Is that the case or do we also see
> virtual OOMs where the entire 2gb is allocated?

A single big 144 MiB allocation is certainly better than a lot of smaller ones that add up to 144 MiB. But. Typically, virtual OOMs start becoming a risk once the amount of total virtual memory available drops to 200 MiB or so. Any reduction just leaves less headroom.
Assignee

Comment 23

2 years ago
(In reply to Nicholas Nethercote [:njn] from comment #19)
> 144 MB on 32-bit is less fine. Many users only have 2 GiB of address space,
> that's 7% of it, and virtual OOMs are common.

Do we know how many of our users are still using Firefox 32-bit *on 32-bit Windows*, especially after we drop XP/Vista support? I know this will have to be backported to releases that support XP.

> Please reduce the size if you can.

We can try to reduce it from 144 to 128 MB or so, maybe more if we notice a spike in OOM crashes. Unfortunately, capping our JIT code memory also has risks: it behaves similarly to an OOM. We did get topcrashes on beta a few weeks ago from users hitting this. We now stop JIT compilation if we approach Limit - 16MB; that helped, so I feel better about reducing this number, but it's still not great.

e10s-multi will also help a lot, unfortunately that will arrive just a few releases too late for most users :/
(In reply to Jan de Mooij [:jandem] from comment #23)
> (In reply to Nicholas Nethercote [:njn] from comment #19)
> > 144 MB on 32-bit is less fine. Many users only have 2 GiB of address space,
> > that's 7% of it, and virtual OOMs are common.
> 
> Do we know how many of our users are still using Firefox 32-bit *on 32-bit
> Windows*, especially after we drop XP/Vista support? I know this will have
> to be backported to releases that support XP..
> 

Yes, we do! The Firefox Hardware Report from September has statistics on Firefox and OS architecture.
See https://metrics.mozilla.com/firefox-hardware-report/#goto-os-and-architecture

In terms of release timing (e.g., in connection with e10s-multi) we need to remember pwn2own:

(In reply to Daniel Veditz [:dveditz] from comment #1)
> Jandem: now can we fix the other parts of Rh0's original bug? Would be super
> nice to get this fixed in 52 so we aren't easily hacked at Pwn2Own.
Assignee

Comment 25

2 years ago
(In reply to Frederik Braun [:freddyb] from comment #24)
> Yes, we do! The Firefox Hardware Report September has statistics on the
> Firefox and the OS architecture.
> See
> https://metrics.mozilla.com/firefox-hardware-report/#goto-os-and-architecture

According to the "OS by architecture" graph there, 32-bit declined from 34.20% to 32.27% between April and September last year. Let's say ~30% now. That's still quite a lot of users, unfortunately (about half of them are likely XP/Vista, but still).

> In terms of release timing (e.g., in connection with e10s-multi) we need to
> remember pwn2own:

Yeah, though pwn2own will use 64-bit Firefox so the bug reported here is not an issue there (or at least way more difficult/time-consuming). We should still get this in Firefox 52 though.
Assignee

Comment 26

2 years ago
This patch reduces the limit from 144 to 128 MiB on 32-bit.

The patch also tweaks the preserve-code mechanism a bit to discard code more eagerly if we are approaching this limit.
Attachment #8833250 - Flags: review?(luke)
Comment on attachment 8833250 [details] [diff] [review]
Part 2 - Change the limit on 32-bit

Review of attachment 8833250 [details] [diff] [review]:
-----------------------------------------------------------------

Good idea!
Attachment #8833250 - Flags: review?(luke) → review+
sec-approval+ for trunk. We'll want patches made and nominated for Aurora, Beta, and ESR45 as well.
Attachment #8832935 - Flags: sec-approval? → sec-approval+
Assignee

Comment 29

2 years ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/380a46afcf858c4990b3f334da54246dfc2f1156

I'll attach branch patches next week.
Flags: needinfo?(jdemooij)
Assignee

Updated

2 years ago
Blocks: 1323460
Assignee

Comment 30

2 years ago
On non-Windows platforms we have always used mmap without randomization. Here's a patch to generate a random address and pass that as hint to mmap.

I tested this on OS X, and on x64 mmap was always able to use the addresses we generate with this patch (because of the huge address space). On 32-bit we succeed most of the time. It doesn't really matter: if the address is not available, the OS will pick a different one.

V8 has somewhat similar code for this FWIW.
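A minimal sketch of the idea (hypothetical, not the actual patch; the range here is the [512 MiB, 1.5 GiB) 32-bit range discussed later in this bug): pick a random 64 KiB-aligned address in a chosen range and pass it to mmap as a hint. mmap treats a non-MAP_FIXED address purely as a hint, so if the spot is taken the mapping still succeeds at an address of the kernel's choosing.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <random>
#include <sys/mman.h>

// Hypothetical sketch: generate a random 64 KiB-aligned hint inside
// [512 MiB, 1.5 GiB) and pass it to mmap. Without MAP_FIXED, the hint is
// advisory: if that region is unavailable the OS picks another address, so
// this never fails merely because the hint collided with a live mapping.
void* MapWithRandomHint(size_t bytes, std::mt19937_64& rng) {
    const uintptr_t base = uintptr_t(512) << 20;  // 512 MiB
    const uintptr_t span = uintptr_t(1024) << 20; // 1 GiB-wide range
    uintptr_t hint = base + (rng() % span);
    hint &= ~uintptr_t(0xFFFF);                   // 64 KiB alignment
    void* p = mmap(reinterpret_cast<void*>(hint), bytes,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}
```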
Attachment #8833679 - Flags: review?(luke)

Updated

2 years ago
Attachment #8833679 - Flags: review?(luke) → review+
https://hg.mozilla.org/mozilla-central/rev/380a46afcf85
Status: ASSIGNED → RESOLVED
Closed: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla54
Assignee

Comment 32

2 years ago
Part 3:

https://hg.mozilla.org/integration/mozilla-inbound/rev/e58878766438f80b01d3e3cb9f48aaab373b2923
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
https://hg.mozilla.org/mozilla-central/rev/e58878766438
Status: REOPENED → RESOLVED
Closed: 2 years ago2 years ago
Resolution: --- → FIXED
Assignee

Comment 34

2 years ago
We're seeing some new Linux32 jit-test intermittents, a regression from part 3 I think.

We pass a random address to mmap as a hint, but on Linux we actually need to be more careful not to use memory that we later need for stack pages. This patch changes ComputeRandomAllocationAddress on POSIX to use the range [512MiB, 2.5GiB) on 32-bit.

This matches the V8 code, and I can no longer reproduce the crashes locally (I got 1 crash in ~2000 runs of a particular jit-test; with this patch I got no failures after 100,000 runs).

Posting this here as we'll need to uplift it with the other ones.
Attachment #8833871 - Flags: review?(luke)
Assignee

Comment 35

2 years ago
It's actually [512MiB, 1.5GiB). It's too early still :/
Attachment #8833871 - Attachment is obsolete: true
Attachment #8833871 - Flags: review?(luke)
Attachment #8833872 - Flags: review?(luke)
Depends on: 1337010

Updated

2 years ago
Attachment #8833872 - Flags: review?(luke) → review+
Group: javascript-core-security → core-security-release
This missed the beta 4 build but if you come up with a branch patch, it could still make the beta 5 build on Thursday.
Assignee

Comment 39

2 years ago
Contains the patches here + bug 1337561.

Approval Request Comment
[Feature/Bug causing the regression]: NA
[User impact if declined]: Security bug.
[Is this code covered by automated tests?]: Yes.
[Has the fix been verified in Nightly?]: Yes.
[Needs manual test from QE? If yes, steps to reproduce]: No.
[List of other uplifts needed for the feature/fix]: None.
[Is the change risky?]: There's some risk.
[Why is the change risky/not risky?]: There's a risk of an increase in OOM crashes on 32-bit as we now reserve a block of memory for JIT code.
[String changes made/needed]: None.
Attachment #8834879 - Flags: review+
Attachment #8834879 - Flags: approval-mozilla-aurora?
Assignee

Comment 40

2 years ago
Attachment #8834880 - Flags: review+
Attachment #8834880 - Flags: approval-mozilla-beta?
Assignee

Comment 41

2 years ago
I also removed the nonWritableJitCode option as ESR45 predates W^X and nobody is going to enable that option there.
Flags: needinfo?(jdemooij)
Attachment #8834890 - Flags: review+
Attachment #8834890 - Flags: approval-mozilla-esr45?
Comment on attachment 8834879 [details] [diff] [review]
Patch for Aurora

reserve a contiguous region for jit code, aurora53
Attachment #8834879 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Interestingly, it looks like there was a noticeable improvement in MEMORY_VSIZE_MAX_CONTIGUOUS on the Nightly that this landed in: https://groups.google.com/forum/#!topic/mozilla.dev.telemetry-alerts/CxxcrxL5v7c

Specifically, the patch from comment 31. Maybe it cleaned up some fragmentation issues we were having?
Assignee

Comment 45

2 years ago
(In reply to Andrew McCreight [:mccr8] from comment #44)
> Interestingly, it looks like there was a noticeable improvement in
> MEMORY_VSIZE_MAX_CONTIGUOUS on the Nightly that this landed in:
> https://groups.google.com/forum/#!topic/mozilla.dev.telemetry-alerts/
> CxxcrxL5v7c
> 
> Specifically, the patch from comment 31. Maybe it cleaned up some
> fragmentation issues we were having?

Interesting. It makes sense, I think: we used to allocate JIT code in 64K pools (most of the time) and we used a random allocation address for *all of these* allocations, see ComputeRandomAllocationAddress:

http://searchfox.org/mozilla-central/rev/afcf40f3eafd895611a839017730debb58a342a6/js/src/jit/ProcessExecutableMemory.cpp#40-66,262

Due to this randomization, you don't need many allocations to fragment the address space pretty quickly.

The patch there reserves a single 128 MB region per process on 32-bit, which probably uses more virtual address space, but it's contiguous, so it no longer places stuff at random locations.
Comment on attachment 8834880 [details] [diff] [review]
Patch for Beta

reserve memory range for executable code to prevent aslr bypass, beta52+
Attachment #8834880 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
(In reply to Andrew McCreight [:mccr8] from comment #44)
> Interestingly, it looks like there was a noticeable improvement in
> MEMORY_VSIZE_MAX_CONTIGUOUS on the Nightly that this landed in:
> https://groups.google.com/forum/#!topic/mozilla.dev.telemetry-alerts/
> CxxcrxL5v7c
> 
> Specifically, the patch from comment 31. Maybe it cleaned up some
> fragmentation issues we were having?

Hopefully that's the case. I find the graph showing the telemetry change very hard to interpret.
Comment on attachment 8834890 [details] [diff] [review]
Patch for ESR45

Fix a sec-critical. ESR45+.
Attachment #8834890 - Flags: approval-mozilla-esr45? → approval-mozilla-esr45+
(In reply to Nicholas Nethercote [:njn] from comment #48)
> Hopefully that's the case. I find the graph showing the telemetry change
> very hard to interpret.

Prior to the change, 46% of the values were at the max value or larger (which looks like it is about 16MB). After the change, 53% are.
Depends on: 1339122
Depends on: 1339593
Flags: qe-verify+
Whiteboard: [post-critsmash-triage]
Whiteboard: [post-critsmash-triage] → [post-critsmash-triage][adv-main52+][adv-esr45.8+]
Reporter

Comment 53

2 years ago
For the sake of completeness, here is a complete Proof of Concept against bug 1334933 which circumvents the fix introduced in bug 1325200. It may be used to execute arbitrary code (e.g. WinExec('cmd.exe')), thereby bypassing ASLR and DEP without info-leaks and code reuse, only needing control of EIP.

Best,
  Rh0
Alias: CVE-2017-5400
Flags: sec-bounty?
Flags: sec-bounty? → sec-bounty+
(In reply to Jan de Mooij [:jandem] from comment #41)
> Created attachment 8834890 [details] [diff] [review]
> Patch for ESR45
> 
> I also removed the nonWritableJitCode option as ESR45 predates W^X and
> nobody is going to enable that option there.

Fwiw (and as discussed with :jandem on IRC), this broke the build of TB 45.8.0 on OpenBSD, where it triggers the W^X protection when precompiling the startup cache. Before this, we were using a patch from #1215479 to enforce nonWritableJitCode (cf http://cvsweb.openbsd.org/cgi-bin/cvsweb/ports/mail/mozilla-thunderbird/patches/Attic/patch-mozilla_js_src_jit_ExecutableAllocator_cpp?rev=1.1&content-type=text/x-cvsweb-markup):

xpcshell(53926): mmap W^X violation

Testing a build with the tor-browser patch from https://trac.torproject.org/projects/tor/ticket/21514; it should fix the issue. I'm going to update TB to 52.0 soon anyway.
The patch from tor-browser repo allows me to produce a working TB 45.8.0 install on OpenBSD, not hitting the W^X mmap protection anymore.
Depends on: 1347139
Group: core-security-release
Whiteboard: [post-critsmash-triage][adv-main52+][adv-esr45.8+] → [post-critsmash-triage][adv-main52+][adv-esr45.8+][qa-triaged]

Removing the qe+ flag: the request is rather old, and the issue would pose a technical challenge to reproduce and verify anyway.
Please re-add the qe+ flag if you think effort should still be directed into verifying this fix by QA, and please provide additional details on how to reproduce and verify the fix.

Flags: qe-verify+
Whiteboard: [post-critsmash-triage][adv-main52+][adv-esr45.8+][qa-triaged] → [post-critsmash-triage][adv-main52+][adv-esr45.8+]