Bug 1608068 (Closed) - Opened 4 years ago, Closed 4 years ago

ThreadSanitizer: data race [@ setUnchecked] vs. [@ slotSpan]

Categories

(Core :: JavaScript: GC, defect, P1)

x86_64 Linux

Tracking

RESOLVED FIXED
mozilla78
Tracking Status
firefox-esr68 --- wontfix
firefox74 --- wontfix
firefox75 --- wontfix
firefox76 --- wontfix
firefox77 --- wontfix
firefox78 --- fixed

People

(Reporter: decoder, Assigned: jonco)

References

(Blocks 1 open bug)

Details

(Keywords: csectype-race, sec-moderate; Whiteboard: [post-critsmash-triage][adv-main78+r])

Attachments

(4 files)

The attached crash information was detected while running CI tests with ThreadSanitizer on mozilla-central revision 12fb5e522dd3.

Looks like another GC race.

General information about TSan reports

Why fix races?

Data races are undefined behavior and can cause crashes as well as correctness issues. Compiler optimizations can cause racy code to have unpredictable and hard-to-reproduce behavior.

Rating

If you think this race can cause crashes or correctness issues, it would be great to rate the bug appropriately as P1/P2 and/or indicate this in the bug. This makes it a lot easier for us to assess the actual impact of these reports and whether they are helpful to you.

False Positives / Benign Races

Typically, races reported by TSan are not false positives [1], but it is possible that the race is benign. Even in this case it would be nice to come up with a fix if it is easily doable and does not regress performance. Every race that we cannot fix has to remain on the suppression list, which slows down overall TSan performance. Also note that seemingly benign races can still be harmful (depending on the compiler, optimizations and the architecture) [2][3].

[1] One major exception is the involvement of uninstrumented code from third-party libraries.
[2] http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
[3] How to miscompile programs with "benign" data races: https://www.usenix.org/legacy/events/hotpar11/tech/final_files/Boehm.pdf

Suppressing unfixable races

If the bug cannot be fixed, a runtime suppression needs to be added in mozglue/build/TsanOptions.cpp. Suppressions match on the full stack, so the entry should be chosen such that it is unique to this particular race. This bug's number should also be included so we have some documentation on why the suppression was added.
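
For illustration, a suppression entry might look roughly like the sketch below. This assumes the suppressions are returned from __tsan_default_suppressions() in mozglue/build/TsanOptions.cpp; the frame shown is an example only and would need to be replaced with one that is actually unique to this race.

    // Hypothetical sketch of a suppression entry; the frame name is an
    // example, not necessarily the right one for this race.
    extern "C" const char* __tsan_default_suppressions() {
      return
          // ... existing suppressions ...
          // Bug 1608068 - race between object finalization and base shape
          // replacement (example frame)
          "race:js::Shape::makeOwnBaseShape\n";
    }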

Paul, please take a look and set a priority for this bug. Thanks.

Flags: needinfo?(pbone)

It looks like the shape being manipulated is also being freed, and the main thread races with background finalisation. I'm more concerned that we're freeing an object that is in use (it isn't rooted, barriered or traced properly?) and that this could lead to a security exploit.

Decoder can you move this to the appropriate security group? I cannot.

Group: core-security
Flags: needinfo?(pbone) → needinfo?(choller)
Priority: -- → P1
Group: core-security → javascript-core-security
Flags: needinfo?(choller)

This is a race between the JSObject finalizer accessing the base shape of a dictionary-mode object (to get the slot span) and the base shape being changed from unowned to owned in Shape::makeOwnBaseShape. The shape and base shape are not themselves being freed.

Looking at makeOwnBaseShape it seems like slotSpan_ won't be copied across to the new base shape. Jan, do you know if that's right?

The slot span is used for memory accounting so it's not the end of the world if it's wrong.
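
To make the shape of the race concrete, here is a minimal sketch of the two unsynchronized accesses TSan is pairing up. The types and field names are illustrative only, not the real SpiderMonkey declarations.

    // Minimal sketch of the flagged pattern; names are illustrative.
    struct BaseShape;

    struct Shape {
      BaseShape* base_;                          // written by makeOwnBaseShape
                                                 // (the [@ setUnchecked] side)
      BaseShape* base() const { return base_; }  // read by the finalizer via
                                                 // slotSpan() (the [@ slotSpan] side)
    };

    // Background finalization thread:  span = obj->slotSpan();       // reads base_
    // Main thread:                     shape->makeOwnBaseShape(cx);  // writes base_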

Flags: needinfo?(jdemooij)

(In reply to Jon Coppeard (:jonco) from comment #4)

Looking at makeOwnBaseShape it seems like slotSpan_ won't be copied across to the new base shape. Jan, do you know if that's right?

It's a bit obscure, but I think that's because the BaseShape's slotSpan_ field is only used by the BaseShapes of dictionary shapes. Dictionary objects have an owned BaseShape, and makeOwnBaseShape starts with an unowned BaseShape, so it's not a dictionary shape.
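
In other words, something like the following hypothetical sketch of the relevant BaseShape state (not the real declaration):

    #include <cstdint>

    // Illustrative only: slotSpan_ is only meaningful on owned BaseShapes,
    // i.e. the ones belonging to shapes of dictionary-mode objects.
    class BaseShape {
      bool owned_;         // true when this BaseShape belongs to a single
                           // dictionary shape
      uint32_t slotSpan_;  // only used when owned_ is true
    };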

Flags: needinfo?(jdemooij)

(In reply to Jan de Mooij [:jandem] from comment #5)
The crash data suggests we're finalizing a dictionary mode object, though, so I don't understand what's going on here.

Decoder, is there any way we can reproduce this? I can't currently see how this can happen.

Flags: needinfo?(choller)

I tried to find steps to reproduce, but failed so far. All I can tell is that the race occurred in Mochitests, around toolkit/components/extensions/test/mochitest/test_ext_webnavigation_incognito.html. I ran Mochitests again with the suppression for this bug disabled and retriggered the chunk several times, no luck. Either the race is gone, it is suppressed by another (similar) rule or it is intermittent enough to not show up in 10 retriggers.

Flags: needinfo?(choller)

Thinking about it, the chunking has also changed heavily since I found that race, and that very likely influences GC. Maybe it would make sense to combine gczeal with TSan on some Mochitests? We can also try fuzzing the JS shell with TSan soon, if this bug doesn't require some browser-only objects.

I'm adding a suppression here now to avoid further intermittent failures. But we should still try to reproduce this and find a proper solution.

Keywords: leave-open
Assignee: nobody → choller
Status: NEW → ASSIGNED

decoder, are you still working on this bug?

Flags: needinfo?(choller)
Flags: needinfo?(choller)

Just wondering: is it possible to capture TSAN bugs in rr? I don't know if you have any automation around generating such traces for these bugs, but it could be very helpful here.

Flags: needinfo?(choller)

(In reply to Jon Coppeard (:jonco) from comment #6)
My previous comments about dictionary mode objects are wrong. I thought this was accessing the base shape in NativeObject::slotSpan because it was taking the if branch for dictionary mode objects. I just worked out that the other branch accesses the base shape too, via Shape::slotSpan() / Shape::getObjectClass().
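
For context, a rough sketch of what NativeObject::slotSpan() looked like at the time (simplified from memory, so treat the exact signatures as assumptions):

    // Simplified sketch; both branches end up dereferencing the BaseShape.
    uint32_t NativeObject::slotSpan() const {
      if (inDictionaryMode()) {
        return lastProperty()->base()->slotSpan();  // dictionary branch reads the BaseShape
      }
      // Non-dictionary branch: Shape::slotSpan() calls Shape::getObjectClass(),
      // which also goes through base()->clasp().
      return lastProperty()->slotSpan();
    }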

(In reply to Steve Fink [:sfink] [:s:] from comment #16)

Just wondering: is it possible to capture TSAN bugs in rr? I don't know if you have any automation around generating such traces for these bugs, but it could be very helpful here.

To be honest: I don't know. I have never tried it, but rr limits all code execution to a single core, so it might at least need a specific scheduling to reproduce under rr at all. We could try for this bug, but the problem is that we never managed to reproduce this at all by running the tests ourselves.

I was hoping that JS shell fuzzing with TSan might find us a reproducible test for this, but I'm hitting issues with shell-only races that need fixing first before we can go down that way.

Flags: needinfo?(choller)

This changes NativeObject::slotSpan() to get the class from the object group rather than from the base shape, to avoid this race. It's a bit annoying to have to do this (and not at all obvious). What do you think?
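
A sketch of the direction of the change described above; the actual patch is attached to the bug, and the method names here are assumptions based on the code of that era.

    // Sketch only: take the class from the object's group instead of letting
    // Shape::slotSpan() read it from the concurrently replaceable BaseShape.
    uint32_t NativeObject::slotSpan() const {
      if (inDictionaryMode()) {
        return lastProperty()->base()->slotSpan();
      }
      // getClass() reads the class via the object group, not the base shape.
      return lastProperty()->slotSpan(getClass());
    }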

(In reply to Christian Holler (:decoder) from comment #18)

(In reply to Steve Fink [:sfink] [:s:] from comment #16)

Just wondering: is it possible to capture TSAN bugs in rr? I don't know if you have any automation around generating such traces for these bugs, but it could be very helpful here.

To be honest: I don't know. I have never tried it, but rr limits all code execution to a single core, so it might at least need a specific scheduling to reproduce under rr at all. We could try for this bug, but the problem is that we never managed to reproduce this at all by running the tests ourselves.

While this is true in practice (anything run under rr runs on a single core), --chaos (aka -h) will report multiple cores as being present and will schedule multiple threads time-sliced on that core. Well, --chaos also tries to make tricky scheduling decisions to flush out bugs; the minimal change you need is --num-cores=N. Mostly, this should be identical to running on multiple cores except it's slower. That's not entirely true, since rr assumes that the program is race-free to begin with, but I'm hoping that TSAN would handle that part.

I was hoping that JS shell fuzzing with TSan might find us a reproducible test for this, but I'm hitting issues with shell-only races that need fixing first before we can go down that way.

Yeah, that's still a problem. If this isn't easily reproducible under TSAN, it seems unlikely that it'll be easy to reproduce under TSAN under rr.

This isn't a sec-high. The worst that can happen is that the malloc memory count gets out of sync.

Keywords: sec-high → sec-moderate
Group: javascript-core-security → core-security-release
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla78

The patch landed in nightly and beta is affected.
:decoder, is this bug important enough to require an uplift?
If not, please set status_beta to wontfix.

For more information, please visit auto_nag documentation.

Flags: needinfo?(choller)
Assignee: choller → jcoppeard
Flags: needinfo?(choller) → needinfo?(jcoppeard)

The race happens when doing memory accounting. Absent any evidence that this is causing problems, I'd say let this one ride the trains.

Flags: needinfo?(jcoppeard)
Flags: qe-verify-
Whiteboard: [post-critsmash-triage]
Whiteboard: [post-critsmash-triage] → [post-critsmash-triage][adv-main78+r]

Re-pushing, as it was paired with the patch for https://bugzilla.mozilla.org/show_bug.cgi?id=1601632 which got backed out. This one might still be fine though.

Group: core-security-release
