Closed Bug 1687001 Opened 3 years ago Closed 3 years ago

ThreadSanitizer: data race [@ TransactionBase::CommitOp::CommitOrRollbackAutoIncrementCounts ] vs. [@ FullDatabaseMetadata::Duplicate]

Categories

(Core :: Storage: IndexedDB, defect, P2)


Tracking


RESOLVED FIXED
88 Branch
Tracking Status
firefox88 --- fixed

People

(Reporter: Gankra, Assigned: sg)

References

(Blocks 1 open bug)

Details

Attachments

(2 files)

Ran into this while testing wpt under TSan. mCommittedAutoIncrementId is being accessed on the transaction thread as well as on the thread that spawns the transaction (contrary to its documentation, which states that it is only accessed on the transaction thread). But it's weird, because spawning the thread should be adequate synchronization, shouldn't it?

Events as I see them:

we have two threads, T20 and T56 (the first spawns the second!)

T20:
    BeginVersionChange
        transaction->CopyDatabaseMetadata()
            reads mCommittedAutoIncrementId 
        WaitForTransactions()
            spawns T56

T56:
    CommitOrRollbackAutoIncrementCounts
        writes mCommittedAutoIncrementId

Presumably the read should cleanly happen-before the write?

Permalinks to problematic lines:

CommitOrRollbackAutoIncrementCounts

vs

FullDatabaseMetadata::Duplicate


General information about TSan reports

Why fix races?

Data races are undefined behavior and can cause crashes as well as correctness issues. Compiler optimizations can cause racy code to have unpredictable and hard-to-reproduce behavior.

Rating

If you think this race can cause crashes or correctness issues, it would be great if you could rate the bug appropriately as P1/P2 and/or indicate this in the bug. This makes it much easier for us to assess the actual impact of these reports and whether they are helpful to you.

False Positives / Benign Races

Typically, races reported by TSan are not false positives [1], but it is possible that the race is benign. Even in this case it would be nice to come up with a fix if that is easily doable and does not regress performance. Every race that we cannot fix has to remain on the suppression list, which slows down TSan overall. Also note that seemingly benign races can still be harmful (depending on the compiler, optimizations and the architecture) [2][3].

[1] One major exception is the involvement of uninstrumented code from third-party libraries.
[2] http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
[3] How to miscompile programs with "benign" data races: https://www.usenix.org/legacy/events/hotpar11/tech/final_files/Boehm.pdf
Suppressing unfixable races

If the bug cannot be fixed, then a runtime suppression needs to be added in mozglue/build/TsanOptions.cpp. The suppressions match on the full stack, so it should be picked such that it is unique to this particular race. The bug number of this bug should also be included so we have some documentation on why this suppression was added.

Sounds like an IndexedDB thing:

Component: DOM: Core & HTML → Storage: IndexedDB

Might this happen if BeginVersionChange is called twice in a very short timeframe, such that T56 has already been spawned while we read mCommittedAutoIncrementId?

Flags: needinfo?(sgiesecke)

(In reply to Alexis Beingessner [:Gankra] from comment #0)

Created attachment 9197385 [details]
tsan-backtrace-jan-2021.txt

Ran into this while testing wpt under TSan. mCommittedAutoIncrementId is being accessed on the transaction thread as well as on the thread that spawns the transaction (contrary to its documentation, which states that it is only accessed on the transaction thread).

That inconsistency should probably be fixed: either actually ensure that accesses happen only on the transaction thread, or put proper synchronization in place.

But, it's weird because it seems like spawning the thread should be an adequate synchronization?

Yes, spawning should be adequate synchronization. But it's a thread pool, and a new thread isn't spawned on each call. So I guess the thread was created while processing some earlier transaction, and it is now being reused.

Assignee: nobody → sgiesecke
Status: NEW → ASSIGNED
Flags: needinfo?(sgiesecke)

So, FullDatabaseMetadata::Duplicate must necessarily read mCommittedAutoIncrementId/mNextAutoIncrementId, and I think it's hard to avoid calling OpenDatabaseOp::BeginVersionChange on the background thread. We could do either of the following:

  1. duplicate the metadata already in OpenDatabaseOp::DoDatabaseWork
  2. add a mutex protecting the two members, so they can safely be accessed from any thread. Just making each of them Atomic wouldn't suffice, since they need to be kept in sync.

Jan, what do you think? Is there a problem with either of these options? If not, which do you think is preferable?

Moving CommitOrRollbackAutoIncrementCounts to the background thread instead would require another two events to be introduced for proper serialization, which seems to be far more complicated and inefficient.

Note that OpenDatabaseOp::AssertMetadataConsistency contains assertions reading mCommittedAutoIncrementId/mNextAutoIncrementId on the background thread as well. It even contains a comment on this, but seems to assume that the accesses are at least atomic. This is problematic as well, but that code is not run in regular release builds.

Flags: needinfo?(jvarga)

I fear this is one of our long-standing problems in the IDB implementation. I have to take a closer look.

Severity: -- → S3
Priority: -- → P2

Bug 1688734 shows that this can happen on the /IndexedDB/idbobjectstore-rename-store.html WPT test, for example.

Hm, after looking at this again, I think the only solution is to add a mutex for mCommittedAutoIncrementId/mNextAutoIncrementId. I originally thought the other parts of the metadata were immutable, but they are not. So we can't simply move the call to FullDatabaseMetadata::Duplicate to the I/O thread.

Pushed by sgiesecke@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/2f62cc8543fe
Resolve races on auto increment ids by a mutex. r=dom-workers-and-storage-reviewers,asuth
Status: ASSIGNED → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → 88 Branch

A patch landed but I fear we still have a problem in this area.

Flags: needinfo?(jvarga)