Closed Bug 1601600 Opened 5 years ago Closed 2 years ago

ThreadSanitizer: data race [@ store] vs. [@ memcpy] through [@ SkARGB32_Blitter::blitRect] and [@ Clamp_S32_D32_nofilter_trans_shaderproc]

Categories

(Core :: Graphics: Layers, defect, P3)

x86_64
Linux
defect

Tracking


RESOLVED INACTIVE
Tracking Status
firefox73 --- affected

People

(Reporter: decoder, Unassigned)

References

(Blocks 1 open bug)

Details

(Whiteboard: qa-not-actionable)

Attachments

(4 files)

The attached crash information was detected while running CI tests with ThreadSanitizer on mozilla-central revision 6989fcd6bab3.

This race is not super frequent in Mochitests, but I noticed it because it is a memset/memcpy race, and I decided to file it because (I guess) this is unexpected and could potentially lead to wrong results that are hard to debug.

General information about TSan reports

Why fix races?

Data races are undefined behavior and can cause crashes as well as correctness issues. Compiler optimizations can cause racy code to have unpredictable and hard-to-reproduce behavior.
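
As a rough illustration (not code from this bug, and with hypothetical names), a plain boolean flag shared between threads shows how the optimizer can turn a race into a hang or a stale read, while an atomic flag does not:

    // Illustrative sketch only: a racy flag vs. an atomic one. With a plain
    // bool the compiler may hoist the load out of the loop, so the reader can
    // spin forever even after the writer has stored true.
    #include <atomic>
    #include <thread>

    bool done_racy = false;              // data race if read/written from two threads
    std::atomic<bool> done_safe{false};  // atomic store/load establishes ordering

    void racy_reader() {
      while (!done_racy) {               // may be compiled as `if (!done_racy) for (;;);`
      }
    }

    void safe_reader() {
      while (!done_safe.load(std::memory_order_acquire)) {
      }
    }

    int main() {
      std::thread t(safe_reader);
      done_safe.store(true, std::memory_order_release);
      t.join();
      return 0;
    }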

Rating

If you think this race can cause crashes or correctness issues, it would be great to rate the bug appropriately as P1/P2 and/or indicate this in the bug. This makes it a lot easier for us to assess the actual impact of these reports and whether they are helpful to you.

False Positives / Benign Races

Typically, races reported by TSan are not false positives [1], but it is possible that the race is benign. Even in this case it would be nice to come up with a fix if it is easily doable and does not regress performance. Every race that we cannot fix has to remain on the suppression list, which slows down overall TSan performance. Also note that seemingly benign races can still be harmful (depending on the compiler, optimizations and the architecture) [2][3].

[1] One major exception is the involvement of uninstrumented code from third-party libraries.
[2] http://software.intel.com/en-us/blogs/2013/01/06/benign-data-races-what-could-possibly-go-wrong
[3] How to miscompile programs with "benign" data races: https://www.usenix.org/legacy/events/hotpar11/tech/final_files/Boehm.pdf

Suppressing unfixable races

If the bug cannot be fixed, then a runtime suppression needs to be added in mozglue/build/TsanOptions.cpp. The suppressions match on the full stack, so the frame should be picked such that it is unique to this particular race. The number of this bug should also be included so we have some documentation of why the suppression was added.
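
For reference, a hedged sketch of what such an entry could look like; the exact layout of mozglue/build/TsanOptions.cpp may differ, and the frame name below is only taken from this report's signature:

    // Sketch only: TSan runtime suppressions use "race:<pattern>" lines
    // returned from the __tsan_default_suppressions() hook.
    extern "C" const char* __tsan_default_suppressions() {
      return
          // Bug 1601600 - race between client-side painting and compositing.
          // Pick a frame unique to this race so unrelated reports stay visible.
          "race:Clamp_S32_D32_nofilter_trans_shaderproc\n";
    }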

Over to lsalzman.

Flags: needinfo?(lsalzman)

Thanks for filing this!

Priority: -- → P3

Actually, this looks like Layers, not Skia. mattwoodrow?

Looks like we're compositing while finishing a transaction on the client side, into the same buffer.

Component: Graphics → Graphics: Layers
Flags: needinfo?(lsalzman) → needinfo?(matt.woodrow)

Looks like an issue with our layer texture buffer management. The allocation stack shows that we're meant to be double buffered, but somehow we're writing and reading from the same texture.

I'm not sure how that could happen, unless it's something to do with a locking failure. Are there any logs recorded?

This seems fairly low priority, since the outcome will be (at worst) some incorrect pixels drawn for a single frame. The currently compositing frame might read a mixture of values (from its frame and the next), but since we're drawing we must schedule the next composite, which should fix them.
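
As a generic illustration of the expectation here (hypothetical names, not the actual layers code): with double buffering the client paints the back buffer while the compositor samples the front one, and only a properly synchronized swap changes which is which, so both sides touching one texture points at a lock/swap failure.

    // Hypothetical sketch of the double-buffering invariant, not Gecko code.
    #include <array>
    #include <atomic>

    struct Texture { /* pixel storage elided */ };

    class DoubleBuffer {
      std::array<Texture, 2> mBuffers;
      std::atomic<int> mFront{0};  // index the compositor currently reads

     public:
      Texture& BackForPainting() { return mBuffers[1 - mFront.load()]; }
      Texture& FrontForCompositing() { return mBuffers[mFront.load()]; }

      // Must only run once the compositor is done with the old front buffer
      // (e.g. under a texture lock); otherwise painter and compositor end up
      // on the same texture, which is the kind of race reported above.
      void Swap() { mFront.store(1 - mFront.load()); }
    };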

Flags: needinfo?(matt.woodrow)

(In reply to Matt Woodrow (:mattwoodrow) from comment #5)

Are there any logs recorded?

These logs all come from tests, and this particular trace was during mochitest-plain, chunk 8 of 20. In the log I see

INFO - TEST-START | dom/quota/test/test_simpledb.html

right before this trace, but the test sounds rather unrelated, I guess. I also saw this randomly during other tests before I suppressed those races. Let me know if you need further help to reproduce this.

Triage: still an issue.

Gankra, this new backtrace appears to be a different issue.

The original report was a write from the main thread and a read from the compositor thread.

The new report shows two racing writes, one from the compositor thread and one from a Rust thread with no backtrace.

I have no theories as to what could cause the latter. Do you think it's possible to fix the missing backtrace?

Flags: needinfo?(a.beingessner)

Unfortunately, we don't currently have the Rust stdlib instrumented properly (Bug 1671691). I found 3 or 4 things exposed by removing this suppression; I'll attach the others, which aren't through Rust.

Flags: needinfo?(a.beingessner)

Hopefully one of these two is more helpful.

See Also: → 1712186
Whiteboard: qa-not-actionable

In the process of migrating remaining bugs to the new severity system, the severity for this bug cannot be automatically determined. Please retriage this bug using the new severity system.

Severity: critical → --
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INACTIVE
