ThreadSanitizer: data race [@ setUnchecked] vs. [@ slotSpan]
Categories: Core :: JavaScript: GC, defect, P1
People: Reporter: decoder; Assigned: jonco
References: Blocks 1 open bug
Keywords: csectype-race, sec-moderate
Whiteboard: [post-critsmash-triage][adv-main78+r]
Attachments: 4 files
The attached crash information was detected while running CI tests with ThreadSanitizer on mozilla-central revision 12fb5e522dd3.
Looks like another GC race.
General information about TSan reports
Why fix races?
Data races are undefined behavior and can cause crashes as well as correctness issues. Compiler optimizations can cause racy code to have unpredictable and hard-to-reproduce behavior.
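A minimal C++ sketch of the kind of miscompilation this refers to (illustrative only, not code from this bug): a "benign-looking" race on a plain flag is undefined behavior, and the compiler may hoist the load out of the spin loop so the reader never observes the write. Making the flag atomic gives the access defined semantics.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// With a plain (non-atomic) bool, the reader's loop below could legally
// be compiled as `if (!done) for (;;) {}`. std::atomic makes the access
// defined; acquire/release additionally orders the plain write to
// `result` before the reader sees `done == true`.
static std::atomic<bool> done{false};
static int result = 0;

static void worker() {
  result = 42;  // plain write, ordered by the release store below
  done.store(true, std::memory_order_release);
}

int waitForResult() {
  std::thread t(worker);
  while (!done.load(std::memory_order_acquire)) {
    // spin until the worker publishes the result
  }
  t.join();
  return result;
}
```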
Rating
If you think this race can cause crashes or correctness issues, please rate the bug as P1/P2 and/or note this in the bug. That makes it much easier for us to assess the actual impact of these reports and whether they are helpful to you.
False Positives / Benign Races
Typically, races reported by TSan are not false positives [1], but it is possible that the race is benign. Even then it would be nice to come up with a fix if that is easily doable and does not regress performance. Every race that we cannot fix has to remain on the suppression list and slows down TSan overall. Also note that seemingly benign races can be harmful (depending on the compiler, optimizations and the architecture) [2][3].
[1] One major exception is the involvement of uninstrumented code from third-party libraries.
[2] http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
[3] How to miscompile programs with "benign" data races: https://www.usenix.org/legacy/events/hotpar11/tech/final_files/Boehm.pdf
Suppressing unfixable races
If the bug cannot be fixed, then a runtime suppression needs to be added in mozglue/build/TsanOptions.cpp. Suppressions match on the full stack, so the frame should be chosen such that it is unique to this particular race. The bug number should also be included so we have some documentation on why the suppression was added.
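A sketch of what such an entry might look like (the surrounding file layout is an assumption; only the `race:<pattern>` syntax is TSan's documented suppression format, and the function name is the one discussed later in this bug):

```cpp
// In mozglue/build/TsanOptions.cpp, suppressions are "race:<pattern>"
// entries matched against frames of the reported stacks.
// Bug NNNNNNN - GC race between finalization and makeOwnBaseShape
"race:js::Shape::makeOwnBaseShape\n"
```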
Comment 1 (Reporter) • 4 years ago
Comment 2 • 4 years ago
Paul, please take a look and set a priority for this bug. Thanks.
Comment 3 • 4 years ago
It looks like the shape being manipulated is also being freed and the main thread races with background finalisation. I'm more concerned that we're freeing an object that is in use (is it not rooted, barriered or traced properly?) and that this could lead to a security exploit.
Decoder, can you move this to the appropriate security group? I cannot.
Comment 4 (Assignee) • 4 years ago
This is a race between the JSObject finalizer accessing the base shape of a dictionary-mode object (to get the slot span) and the base shape being changed from unowned to owned in Shape::makeOwnBaseShape. The shape and base shape are not themselves being freed.
Looking at makeOwnBaseShape it seems like slotSpan_ won't be copied across to the new base shape. Jan, do you know if that's right?
The slot span is used for memory accounting so it's not the end of the world if it's wrong.
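A simplified model of the two racing accesses described above (the layout and member names are hypothetical stand-ins, not the real SpiderMonkey structures): the finalizer reads the slot span through the shape's base pointer while the main thread, in makeOwnBaseShape, clones the unowned base and swaps the pointer in place. The unsynchronized load/store pair on the base pointer is the race TSan reports.

```cpp
#include <cstdint>

struct BaseShape {
  bool owned;
  uint32_t slotSpan;
};

struct Shape {
  BaseShape* base_;
};

// Runs on the GC background thread during finalization (memory accounting).
uint32_t slotSpan(const Shape* shape) {
  return shape->base_->slotSpan;  // racing load of base_
}

// Runs on the main thread when an object needs its own base shape.
BaseShape* makeOwnBaseShape(Shape* shape, BaseShape* fresh) {
  *fresh = *shape->base_;  // note: per comment 4, slotSpan_ is NOT copied
                           // in the real code; modeled as a plain copy here
  fresh->owned = true;
  shape->base_ = fresh;    // racing store of base_
  return fresh;
}
```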
Comment 5 • 4 years ago
(In reply to Jon Coppeard (:jonco) from comment #4)
Looking at makeOwnBaseShape it seems like slotSpan_ won't be copied across to the new base shape. Jan, do you know if that's right?
It's a bit obscure but I think that's because the BaseShape's slotSpan_ field is only used for BaseShapes for dictionary shapes. Dictionary objects have an owned BaseShape and makeOwnBaseShape starts with an unowned BaseShape so it's not a dictionary shape.
Comment 6 (Assignee) • 4 years ago
(In reply to Jan de Mooij [:jandem] from comment #5)
The crash data suggests we're finalizing a dictionary mode object though so I don't understand what's going on here.
Comment 7 (Assignee) • 4 years ago
Decoder, is there any way we can reproduce this? I can't currently see how this can happen.
Comment 8 (Reporter) • 4 years ago
I tried to find steps to reproduce, but have failed so far. All I can tell is that the race occurred in Mochitests, around toolkit/components/extensions/test/mochitest/test_ext_webnavigation_incognito.html. I ran Mochitests again with the suppression for this bug disabled and retriggered the chunk several times, with no luck. Either the race is gone, it is suppressed by another (similar) rule, or it is too intermittent to show up in 10 retriggers.
Comment 9 (Reporter) • 4 years ago
Thinking about it, the chunking also heavily changed since I found that race and that very likely influences GC. Maybe it would make sense to combine gczeal with TSan on some Mochitests? We can also try fuzzing the JS shell with TSan soon, if this bug doesn't require some browser-only objects.
Comment 11 (Reporter) • 4 years ago
I'm adding a suppression here now to avoid further intermittent failures. But we should still try to reproduce this and find a proper solution.
Comment 13 (Reporter) • 4 years ago
Comment 14 • 4 years ago
Comment 16 • 4 years ago
Just wondering: is it possible to capture TSAN bugs in rr? I don't know if you have any automation around generating such traces for these bugs, but it could be very helpful here.
Comment 17 (Assignee) • 4 years ago
(In reply to Jon Coppeard (:jonco) from comment #6)
My previous comments about dictionary mode objects are wrong. I thought this was accessing the base shape in NativeObject::slotSpan because it was taking the if branch for dictionary mode objects. I just worked out that the other branch accesses the base shape too, via Shape::slotSpan() / Shape::getObjectClass().
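A sketch of the control flow just described (the types and methods are hypothetical simplifications, not the real SpiderMonkey declarations). The point is that both branches end up dereferencing the base shape, so finalization races with makeOwnBaseShape on either path:

```cpp
#include <cstdint>

struct BaseShape {
  const void* clasp;   // stands in for the JSClass*
  uint32_t slotSpan;
};

struct Shape {
  BaseShape* base;
  uint32_t slotSpanForClass(const void* /*clasp*/) const {
    return base->slotSpan;  // the real code derives this from the class
  }
};

struct NativeObject {
  Shape* lastShape;
  bool dictionaryMode;
};

uint32_t nativeObjectSlotSpan(const NativeObject* obj) {
  Shape* shape = obj->lastShape;
  if (obj->dictionaryMode) {
    return shape->base->slotSpan;  // dictionary branch: reads the base shape
  }
  // non-dictionary branch: Shape::slotSpan() needs the class, which
  // Shape::getObjectClass() also fetches through the base shape
  return shape->slotSpanForClass(shape->base->clasp);
}
```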
Comment 18 (Reporter) • 4 years ago
(In reply to Steve Fink [:sfink] [:s:] from comment #16)
Just wondering: is it possible to capture TSAN bugs in rr? I don't know if you have any automation around generating such traces for these bugs, but it could be very helpful here.
To be honest: I don't know. I have never tried it, but rr limits all code execution to a single core, so it might at least need a specific scheduling to reproduce under rr at all. We could try for this bug, but the problem is that we never managed to reproduce this at all by running the tests ourselves.
I was hoping that JS shell fuzzing with TSan might find us a reproducible test for this, but I'm hitting issues with shell-only races that need fixing first before we can go down that way.
Comment 19 (Assignee) • 4 years ago
This changes NativeObject::slotSpan() to get the class from the object group rather than getting it from the base shape to avoid this race. It's a bit annoying to have to do this (and not at all obvious). What do you think?
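The idea behind the patch, sketched with hypothetical simplified types (not the real SpiderMonkey code): fetch the class through the object group, which is stable during finalization, instead of through the base shape, which the main thread may swap in makeOwnBaseShape.

```cpp
struct ObjectGroup { const void* clasp; };
struct BaseShape   { const void* clasp; };
struct Shape       { BaseShape* base; };

struct NativeObject {
  ObjectGroup* group;
  Shape* lastShape;
};

// Before: class fetched via the base shape (races with makeOwnBaseShape).
const void* classViaBaseShape(const NativeObject* obj) {
  return obj->lastShape->base->clasp;
}

// After: class fetched via the object group (no base-shape dereference,
// so the finalizer no longer touches memory the main thread may swap).
const void* classViaGroup(const NativeObject* obj) {
  return obj->group->clasp;
}
```

Both paths return the same class when the object is consistent; the change only moves which pointer chain the finalizer walks.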
Comment 20 • 4 years ago
(In reply to Christian Holler (:decoder) from comment #18)
(In reply to Steve Fink [:sfink] [:s:] from comment #16)
Just wondering: is it possible to capture TSAN bugs in rr? I don't know if you have any automation around generating such traces for these bugs, but it could be very helpful here.
To be honest: I don't know. I have never tried it, but rr limits all code execution to a single core, so it might at least need a specific scheduling to reproduce under rr at all. We could try for this bug, but the problem is that we never managed to reproduce this at all by running the tests ourselves.
While this is true in practice (anything run under rr runs on a single core), --chaos (aka -h) will report multiple cores as being present and will schedule multiple threads time-sliced on that core. Well, --chaos also tries to make tricky scheduling decisions to flush out bugs; the minimal change you need is --num-cores=N. Mostly, this should be identical to running on multiple cores except it's slower. That's not entirely true, since rr assumes that the program is race-free to begin with, but I'm hoping that TSAN would handle that part.
I was hoping that JS shell fuzzing with TSan might find us a reproducible test for this, but I'm hitting issues with shell-only races that need fixing first before we can go down that way.
Yeah, that's still a problem. If this isn't easily reproducible under TSAN, it seems unlikely that it'll be easy to reproduce under TSAN under rr.
Comment 21 (Assignee) • 4 years ago
This isn't a sec-high. The worst that can happen is that the malloc memory count gets out of sync.
Comment 22 • 4 years ago
https://hg.mozilla.org/integration/autoland/rev/8bce3fa59864c8ad3bcf61c2b83f8760baa49e54
https://hg.mozilla.org/mozilla-central/rev/8bce3fa59864
Comment 23 • 4 years ago
The patch landed in nightly and beta is affected.
:decoder, is this bug important enough to require an uplift?
If not, please set status_beta to wontfix.
For more information, please visit the auto_nag documentation.
Comment 24 (Assignee) • 4 years ago
The race happens when doing memory accounting. Absent any evidence that this is causing problems I'd say let this one ride the trains.
Comment 25 • 4 years ago
Comment 26 • 4 years ago
Re-pushing, as it was paired with the patch for https://bugzilla.mozilla.org/show_bug.cgi?id=1601632 which got backed out. This one might still be fine though.
Comment 27 • 4 years ago
Remove supression for seemingly fixed issue. r=decoder
https://hg.mozilla.org/integration/autoland/rev/8e58925909cccd52a321809d5094d96e7979ed0e
https://hg.mozilla.org/mozilla-central/rev/8e58925909cc