Closed Bug 1709502 Opened 2 years ago Closed 2 years ago

ThreadSanitizer: data race [@ __tsan_atomic32_store] vs. [@ free]


(Core :: Gecko Profiler, defect, P1)

Tracking Status
firefox90 --- affected
firefox93 --- fixed


(Reporter: tsmith, Unassigned)


(Blocks 2 open bugs)

(1 file)

The attached crash information was detected by ThreadSanitizer while using build mozilla-central 20210501-d1718a9093b1. This has only been reported once and does not appear to be reproducible.

General information about TSan reports

Why fix races?

Data races are undefined behavior and can cause crashes as well as correctness issues. Compiler optimizations can cause racy code to have unpredictable and hard-to-reproduce behavior.
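As a hedged illustration (not code from this bug): two threads updating a shared counter. With a plain `int` the increments race and the result is undefined behavior; `std::atomic` makes each increment indivisible and the result deterministic.

```cpp
#include <atomic>
#include <thread>

// Hypothetical example, not from the profiler code: two threads increment
// a shared counter. With a plain `int` this would be a data race (UB);
// std::atomic makes every increment indivisible.
int run_counter(int perThread) {
  std::atomic<int> counter{0};
  auto work = [&counter, perThread] {
    for (int i = 0; i < perThread; ++i) {
      counter.fetch_add(1, std::memory_order_relaxed);  // race-free
    }
  };
  std::thread a(work);
  std::thread b(work);
  a.join();
  b.join();
  return counter.load();  // always 2 * perThread with atomics
}
```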


If you think this race can cause crashes or correctness issues, it would be great to rate the bug appropriately as P1/P2 and/or to indicate this in the bug. This makes it a lot easier for us to assess the actual impact of these reports and whether they are helpful to you.

False Positives / Benign Races

Typically, races reported by TSan are not false positives [1], but it is possible that the race is benign. Even in that case it would be nice to come up with a fix if it is easily doable and does not regress performance. Every race that we cannot fix has to remain on the suppression list, which slows down overall TSan performance. Also note that seemingly benign races can still be harmful (depending on the compiler, optimizations, and the architecture) [2][3].

[1] One major exception is the involvement of uninstrumented code from third-party libraries.
[3] How to miscompile programs with "benign" data races (Boehm, HotPar '11)
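A sketch of why a "benign-looking" race can be broken by the optimizer (all names here are invented; the atomic version is shown because the racy one is not safe to run): with a plain `bool`, the compiler may legally hoist the load out of the loop, so the loop never exits even after another thread sets the flag. `std::atomic<bool>` forbids that transformation.

```cpp
#include <atomic>
#include <thread>

// With a plain `bool stop`, the load in the loop below could legally be
// hoisted out by the compiler, turning it into an infinite loop.
// std::atomic<bool> forces a fresh load on every iteration.
std::atomic<bool> stop{false};

bool spin_until_stopped() {
  std::thread stopper([] { stop.store(true); });
  while (!stop.load()) {
    // busy-wait; each iteration re-reads `stop`
  }
  stopper.join();
  return stop.load();  // true once the stopper has run
}
```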

Suppressing unfixable races

If the bug cannot be fixed, a runtime suppression needs to be added in mozglue/build/TsanOptions.cpp. Suppressions match on the full stack, so the pattern should be chosen such that it is unique to this particular race. This bug's number should also be included, so we have some documentation on why the suppression was added.
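As a hedged sketch of what such an entry might look like (the frame name below is a placeholder, not necessarily the right frame for this report; `__tsan_default_suppressions` is TSan's standard hook for compiled-in suppressions):

```cpp
// Sketch only: TSan matches each "race:" pattern against frames in the
// reported stacks. The frame name here is a placeholder, not the vetted
// choice for this report.
extern "C" const char* __tsan_default_suppressions() {
  return "# Bug 1709502 - shutdown race between the profiler and a\n"
         "# sleeping thread.\n"
         "race:profiler_thread_sleep\n";
}
```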

Very useful report, thank you Tyson.

In this case, we have:

  • A thread is about to wait on a Monitor, and notifies the profiler about it through profiler_thread_sleep, which modifies an atomic value in the appropriately-named RacyRegisteredThread.
  • The main thread is shutting down; near the end of that, when all other threads should already have shut down, it calls profiler_shutdown, which destroys its list of RegisteredThreads and the associated RacyRegisteredThreads.
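The shape of the race can be sketched as follows (type and function names are invented, not the real profiler code). The ordering shown here, store then join then free, is the safe one; the bug is that nothing guaranteed the store could not come after the free:

```cpp
#include <atomic>
#include <memory>
#include <thread>

// Illustrative stand-in for RacyRegisteredThread's atomic state.
struct RacyRecord {
  std::atomic<bool> sleeping{false};
};

bool safe_shutdown() {
  auto record = std::make_unique<RacyRecord>();
  std::thread worker([&record] {
    record->sleeping.store(true);  // the [@ __tsan_atomic32_store] side
  });
  worker.join();   // without this join, the reset() below could race the store
  record.reset();  // the [@ free] side
  return record == nullptr;
}
```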

In the report, the actual ordering seems fine: the atomic is modified first, and only later destroyed.

But the main issue is that there is no proper guard to prevent the reverse order, i.e.: profiler_shutdown destroying the atomic while the thread still exists and tries to modify that destroyed atomic.

I think in practice this should not happen, since non-main threads are supposed to have been stopped and "joined" by the time the main thread reaches that phase of its shutdown; but some long-lived threads could still survive (a bug elsewhere that makes us forget to terminate them, or threads started by external code, which we cannot control).
Other reports could also be caused by such long-lived threads, and some bugs had to deal with uncertain thread-termination times by adding multi-ownership handles to profiling stacks (see bug 1445822).
Also, bug 1701874 seems to be due to a thread shutting down after the main thread's profiler_shutdown happened!

So I think we should certainly try to fix this race issue, and it could resolve a few related bugs.
I think one solution would be to let the thread own its RegisteredThread, and register/unregister it with the profiler through proper mutexed functions. This would require some significant rework, while trying to keep the low overhead of some time-critical accesses.
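A minimal sketch of that direction, with invented names and none of the real profiler's performance constraints: each thread registers and unregisters itself under a mutex, so shutdown can never free a registration that a live thread still holds.

```cpp
#include <algorithm>
#include <mutex>
#include <vector>

// Invented stand-in for a per-thread registration object owned by the
// thread itself (not by the profiler's shutdown path).
struct ThreadRegistration {
  int threadId;
};

class Registry {
  std::mutex mMutex;
  std::vector<ThreadRegistration*> mThreads;

 public:
  void Register(ThreadRegistration* aThread) {
    std::lock_guard<std::mutex> lock(mMutex);
    mThreads.push_back(aThread);
  }
  void Unregister(ThreadRegistration* aThread) {
    std::lock_guard<std::mutex> lock(mMutex);
    mThreads.erase(std::remove(mThreads.begin(), mThreads.end(), aThread),
                   mThreads.end());
  }
  size_t Count() {
    std::lock_guard<std::mutex> lock(mMutex);
    return mThreads.size();
  }
};
```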

It's not a security issue though, as it happens at the end of process shutdown with no user-controllable code. (But I cannot remove the security flag.)
Also, in Firefox releases we actually call exit(0) earlier, so most users won't even reach that point. But it would still be good to fix, to help with our internal tests that don't exit early.

Severity: -- → S4
Keywords: csectype-race
Priority: -- → P1
Group: dom-core-security

Thanks to bug 1722261, this shouldn't happen anymore:
The thread data is now managed only by the thread itself, so the destruction of CorePS (or ActivePS) doesn't touch it anymore.
That data is itself protected at several levels, and accessing it with the appropriate locking (where needed) is guided by the C++ type system.
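A hedged sketch of what "guided by the C++ type system" can mean (names invented, not the actual bug 1722261 code): the protected data is only reachable through an accessor object that holds the lock for its lifetime, so forgetting to lock becomes a compile error rather than a latent race.

```cpp
#include <mutex>

// Invented example: the only way to reach mData is through LockedAccess,
// which acquires the mutex in its constructor and releases it when it
// goes out of scope.
template <typename T>
class DataWithMutex {
  std::mutex mMutex;
  T mData{};

 public:
  class LockedAccess {
    std::unique_lock<std::mutex> mLock;  // held for the accessor's lifetime
    T* mData;

   public:
    explicit LockedAccess(DataWithMutex& aOwner)
        : mLock(aOwner.mMutex), mData(&aOwner.mData) {}
    T& operator*() { return *mData; }
    T* operator->() { return mData; }
  };

  LockedAccess Lock() { return LockedAccess{*this}; }
};
```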

Of course, if you ever see more potential races, please let me know.

Closed: 2 years ago
Depends on: 1722261
Resolution: --- → FIXED