ThreadSanitizer: data race [@ nsHtml5ExecutorFlusher::Run] vs. [@ geckoservo::glue::traverse_subtree::h4cda47b45b905269]
Categories
(Core :: DOM: HTML Parser, defect, P3)
People
(Reporter: decoder, Assigned: Gankra)
References
(Blocks 1 open bug)
Details
Attachments
(2 files)
The attached crash information was detected while running CI tests with ThreadSanitizer on mozilla-central revision 12fb5e522dd3.
I tried to make sense of this race report, but I don't understand it. It is not even clear to me what exactly we are racing on.
The fact that one stack has some Rust code on it could matter if that Rust code called into the Rust std library for synchronization somehow (our Rust code is instrumented, the Rust std library is not). However, I think we first need to figure out what exactly the data is that we are racing on here.
General information about TSan reports
Why fix races?
Data races are undefined behavior and can cause crashes as well as correctness issues. Compiler optimizations can cause racy code to have unpredictable and hard-to-reproduce behavior.
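As a minimal sketch (illustrative C++, not Gecko code) of the pattern TSan reports: two threads mutate the same plain int with no synchronization, which is a data race and therefore undefined behavior, while the std::atomic version is well-defined.

```cpp
#include <atomic>
#include <thread>

int gRacyCounter = 0;                // plain int: data race (UB)
std::atomic<int> gSafeCounter{0};    // atomic: well-defined

int main() {
  auto work = [] {
    gRacyCounter++;                  // TSan flags this access
    gSafeCounter.fetch_add(1);       // race-free
  };
  std::thread t1(work);
  std::thread t2(work);
  t1.join();
  t2.join();
  // gSafeCounter is reliably 2; gRacyCounter has no guaranteed value,
  // and the optimizer is free to assume the race never happens.
  return 0;
}
```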
Rating
If you think this race can cause crashes or correctness issues, it would be great to rate the bug appropriately as P1/P2 and/or to indicate this in the bug. This makes it much easier for us to assess the actual impact of these reports and whether they are helpful to you.
False Positives / Benign Races
Typically, races reported by TSan are not false positives [1], but it is possible that the race is benign. Even in this case it would be nice to come up with a fix if that is easily doable and does not regress performance: every race that we cannot fix has to remain on the suppression list, which slows down overall TSan performance. Also note that seemingly benign races can turn out to be harmful (depending on the compiler, optimization level, and architecture) [2][3].
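To illustrate how a seemingly benign race can go wrong (the kind of miscompilation discussed in [2] and [3]), here is a hedged C++ sketch: with a plain bool flag, the compiler may hoist the load out of the spin loop and never observe the store; the atomic version forces a re-read and creates a happens-before edge.

```cpp
#include <atomic>
#include <thread>

bool gDoneRacy = false;              // broken pattern, shown only for contrast:
                                     // the optimizer may hoist its load out of
                                     // a spin loop, which then never terminates
std::atomic<bool> gDone{false};      // correct flag

int main() {
  std::thread worker([] {
    // ... produce a result ...
    gDone.store(true, std::memory_order_release);  // publish
  });
  while (!gDone.load(std::memory_order_acquire)) {
    // spin until the worker publishes; the atomic load is re-read reliably
  }
  worker.join();
  return 0;
}
```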
[1] One major exception is the involvement of uninstrumented code from third-party libraries.
[2] http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
[3] How to miscompile programs with "benign" data races: https://www.usenix.org/legacy/events/hotpar11/tech/final_files/Boehm.pdf
Suppressing unfixable races
If the bug cannot be fixed, then a runtime suppression needs to be added in mozglue/build/TsanOptions.cpp. The suppressions match on the full stack, so the entry should be chosen such that it is unique to this particular race. The number of this bug should also be included, so we have some documentation on why the suppression was added.
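For reference, a rough sketch of what such an entry looks like; the exact structure of TsanOptions.cpp may differ, and the bug number and frame below are placeholders rather than the actual entry for this bug.

```cpp
// Sketch only: TSan calls this hook at startup to obtain built-in
// suppressions. Entries use the standard TSan suppression syntax and
// match on frames in the reported stacks.
extern "C" const char* __tsan_default_suppressions() {
  return
      // Bug NNNNNNN - short rationale for the suppression goes here.
      // Pick a frame that is unique to this particular race.
      "race:nsHtml5ExecutorFlusher::Run\n";
}
```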
Reporter
Comment 1•6 years ago
Comment 2•6 years ago
Hi Henri, could you please take a look and suggest the priority for this one?
Comment 3•6 years ago
(In reply to Christian Holler (:decoder) from comment #0)
> I tried to make sense of this race report, but I don't understand it. It is not even clear to me what exactly we are racing on.
> The fact that one stack has some Rust code on it could matter if that Rust code called into the Rust std library for synchronization somehow (our Rust code is instrumented, the Rust std library is not). However, I think we first need to figure out what exactly the data is that we are racing on here.
If the stacks are correct, both T20 and T21 are running code that's only ever supposed to run on the main thread. What are the chances that either of the first two stacks (the stack for the read and the stack for the write) is wrong?
Both stacks show a runnable that's supposed to run on the main thread. The act of posting the runnable and the act of the event loop taking a runnable from the queue should be synchronizing operations.
The location of the allocation that the racing address is claimed to come from is in threading code itself, but I have trouble correlating the allocation with the code, and it doesn't make sense to me in light of the race detection location. The race detection location suggests that the allocation should be the runnable, which is allocated on the parser thread.
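To make the expected synchronization concrete, here is a simplified sketch of a dispatch queue (illustrative only; XPCOM's real event queue is more involved) showing why posting a runnable and having the event loop take it should establish a happens-before edge, provided both sides are instrumented.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

// The queue's mutex is what makes posting a runnable and running it
// synchronizing operations: everything the poster wrote before
// Dispatch() happens-before everything the event loop does after Pop().
class EventQueue {
  std::mutex mMutex;
  std::condition_variable mCond;
  std::queue<std::function<void()>> mEvents;

 public:
  void Dispatch(std::function<void()> aEvent) {
    {
      // Releasing the lock publishes all prior writes of this thread.
      std::lock_guard<std::mutex> lock(mMutex);
      mEvents.push(std::move(aEvent));
    }
    mCond.notify_one();
  }

  std::function<void()> Pop() {
    // Acquiring the lock makes the poster's writes visible here.
    std::unique_lock<std::mutex> lock(mMutex);
    mCond.wait(lock, [this] { return !mEvents.empty(); });
    auto event = std::move(mEvents.front());
    mEvents.pop();
    return event;
  }
};

// If both lock operations are instrumented, TSan sees this
// happens-before edge and will not flag the runnable's data. A report
// like this one implies the edge is missing in practice, or the racing
// accesses do not actually flow through such a queue.
```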
Comment 4•6 years ago
(In reply to Henri Sivonen (:hsivonen) from comment #3)
> If the stacks are correct, both T20 and T21 are running code that's only ever supposed to run on the main thread. What are the chances that either of the first two stacks (the stack for the read and the stack for the write) is wrong?
Do we still run Quantum DOM code, which had multiple threads serving main thread roles?
Comment 5•6 years ago
(In reply to Henri Sivonen (:hsivonen) from comment #4)
> (In reply to Henri Sivonen (:hsivonen) from comment #3)
> > If the stacks are correct, both T20 and T21 are running code that's only ever supposed to run on the main thread. What are the chances that either of the first two stacks (the stack for the read and the stack for the write) is wrong?
> Do we still run Quantum DOM code, which had multiple threads serving main thread roles?
I do not believe any of the code that tried to run multiple main threads remains in the browser.
Reporter
Comment 6•6 years ago
(In reply to Henri Sivonen (:hsivonen) from comment #3)
> (In reply to Christian Holler (:decoder) from comment #0)
> > I tried to make sense of this race report, but I don't understand it. It is not even clear to me what exactly we are racing on.
> > The fact that one stack has some Rust code on it could matter if that Rust code called into the Rust std library for synchronization somehow (our Rust code is instrumented, the Rust std library is not). However, I think we first need to figure out what exactly the data is that we are racing on here.
> If the stacks are correct, both T20 and T21 are running code that's only ever supposed to run on the main thread. What are the chances that either of the first two stacks (the stack for the read and the stack for the write) is wrong?
The stacks being wrong is highly unlikely; I have not seen a single wrong stack with TSan so far. If this code is not supposed to run on threads other than the main thread, then we should certainly investigate how it ended up on two style threads.
> Both stacks show a runnable that's supposed to run on the main thread. The act of posting the runnable and the act of the event loop taking a runnable from the queue should be synchronizing operations.
The two threads T20 and T21 (both style threads) could both have dispatched runnables, and those run in parallel (they are not sequential runnables with synchronization in between, otherwise TSan would not trigger). The only way for this to be a false positive would be if there were synchronization (e.g. an atomic or fence) inside the Rust std library (not just any Rust code), because we instrument our Rust code but not the standard library. However, even in that case, the indication that this code runs outside the main thread would still be correct.
> The location of the allocation that the racing address is claimed to come from is in threading code itself, but I have trouble correlating the allocation with the code, and it doesn't make sense to me in light of the race detection location. The race detection location suggests that the allocation should be the runnable, which is allocated on the parser thread.
I looked again at the exact location it points to for the allocation, and indeed it looks odd. It is this line:
https://hg.mozilla.org/mozilla-central/file/12fb5e522dd3/xpcom/threads/nsThread.cpp#l1246
Maybe the allocation was somehow inside Run() and got inlined in a way that truncated the stack?
In any case, I think the first step to move this forward is to find out why code that should be main-thread-only is running on style threads. I tried to figure out which test caused this, but in all my runs this race only occurred once, and from the log it looks like it happened at startup.
Comment 7•6 years ago
Two odd things about T21.
If it's really a style thread, then it's strange that the "Atomic read of size 8 at 0x7b5400030100 by thread T21" stack has message loop code running on it. Style threads are created and completely managed by rayon, not by XPCOM, so we shouldn't see any message loop frames at the bottom of that stack.
Also I don't understand the "Thread T21 'StyleThread#1' (tid=2823, running) created by main thread at" stack. There's a pthread_create call, but it's shown at stack frame 94. Near the top of the stack is content_process_main / main / __libc_start_main.
Reporter
Comment 8•6 years ago
(In reply to Cameron McCormack (:heycam) from comment #7)
> Also I don't understand the "Thread T21 'StyleThread#1' (tid=2823, running) created by main thread at" stack. There's a pthread_create call, but it's shown at stack frame 94. Near the top of the stack is content_process_main / main / __libc_start_main.
Indeed, this looks really suspicious. I am going to disable the suppression for a try run and retrigger the relevant chunks several times. Maybe I can reproduce this once more with a clearer stack.
Assignee
Comment 9•5 years ago
Depends on D94855
Comment 10•5 years ago
Comment 11•5 years ago
bugherder