Crash in [@ __memcpy_avx_unaligned | webrender::renderer::Renderer::draw_frame]
Categories
(Core :: Graphics: WebRender, defect, P3)
Tracking
Release | Tracking | Status
---|---|---
firefox-esr78 | --- | unaffected |
firefox83 | --- | unaffected |
firefox84 | --- | wontfix |
firefox85 | --- | fixed |
People
(Reporter: aryx, Assigned: aosmond)
References
(Blocks 1 open bug, Regression)
Details
(Keywords: crash, regression)
Crash Data
Attachments
(1 file)
Bug 1632698 landed earlier this week to fix similar signatures. Andrew, please also take a look at this one (build ID 20201209213504).
Maybe Fission related. (DOMFissionEnabled=1)
Crash report: https://crash-stats.mozilla.org/report/index/356a3a4e-17fa-4d69-a1ce-f3a410201210
Reason: SIGSEGV / SEGV_MAPERR
Top 10 frames of crashing thread:
0 libc.so.6 __memcpy_avx_unaligned
1 libxul.so webrender::renderer::Renderer::draw_frame gfx/wr/webrender/src/renderer.rs:5915
2 libxul.so webrender::renderer::Renderer::render_impl gfx/wr/webrender/src/renderer.rs:3488
3 libxul.so webrender::renderer::Renderer::render gfx/wr/webrender/src/renderer.rs:3244
4 libxul.so wr_renderer_render gfx/webrender_bindings/src/bindings.rs:614
5 libxul.so mozilla::wr::RendererOGL::UpdateAndRender gfx/webrender_bindings/RendererOGL.cpp:186
6 libxul.so mozilla::wr::RenderThread::UpdateAndRender gfx/webrender_bindings/RenderThread.cpp:492
7 libxul.so mozilla::wr::RenderThread::HandleFrameOneDoc gfx/webrender_bindings/RenderThread.cpp:326
8 libxul.so mozilla::detail::RunnableMethodImpl<mozilla::wr::RenderThread*, void xpcom/threads/nsThreadUtils.h:1201
9 libxul.so base::MessagePumpDefault::Run ipc/chromium/src/base/message_pump_default.cc:35
Comment 1•4 years ago
There are a few similar signatures floating around; we see this on beta too, without Fission. I'm in the process of fixing some telemetry/crash-report annotations, which may shed light on whether an NVIDIA or innocent device reset happened before the crash.
Comment 2•4 years ago
These signatures began in build 20201108093650:
But that was a Sunday daily, probably fairly low uptake. Earlier builds would include:
- 20201107100127: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=0e95e169ef40&tochange=e40cc6272439b7fa848bdf875bb41d7f4b1a3b71
- 20201106160425: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=41effaf024a5&tochange=0e95e169ef40f27ae4de839644c63249acd3b5b1
- 20201106093443: https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=5278acfcd834&tochange=41effaf024a55426f4486278364dea1117ed6571
Bug 1675453 seems of particular note, since it enabled VAAPI by default. We should probably hook hardware video decoding into the gfx blocklist. This would allow us to run a test on nightly to see if it is indeed related without disabling HW acceleration for all users.
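To illustrate the idea of hooking hardware video decoding into the gfx blocklist, here is a minimal hedged sketch; the names (`GfxInfo`, `HardwareVideoDecodingStatus`, `FeatureStatus`) are illustrative stand-ins, not Gecko's actual API. The point is that VAAPI gets its own feature decision, separate from the global hardware-acceleration switch, so it can be blocklisted (or A/B-tested on Nightly) on its own.

```cpp
#include <cassert>

// Hypothetical sketch: gate hardware video decoding (VAAPI) on the gfx
// blocklist as its own feature, so it can be disabled for suspect drivers
// without turning off HW acceleration entirely.
enum class FeatureStatus { Enabled, Blocklisted, Disabled };

struct GfxInfo {
  bool vaapiBlocklisted = false;  // would come from the real blocklist query
};

// Decide whether VAAPI decoding should run, independently of the global
// hardware-acceleration preference.
FeatureStatus HardwareVideoDecodingStatus(const GfxInfo& aInfo,
                                          bool aPrefEnabled) {
  if (!aPrefEnabled) {
    return FeatureStatus::Disabled;     // user/pref turned it off
  }
  if (aInfo.vaapiBlocklisted) {
    return FeatureStatus::Blocklisted;  // blocklist entry matched this driver
  }
  return FeatureStatus::Enabled;
}
```

With a separate feature like this, a Nightly experiment can flip only the VAAPI decision to see whether these crash signatures follow it.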
Comment 3•4 years ago
Reviewing https://crash-stats.mozilla.org/report/index/6b16cc93-2d3d-49c4-8626-932720201212#tab-metadata, I can see DeviceResetReason is 10, which corresponds to the NVIDIA video memory reset as the last known reset. So it appears our current solution is inadequate for handling it.
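The mapping from the raw annotation value to a reset reason can be sketched as below. This is a hedged mock, not Gecko's real `DeviceResetReason` enum; the only value asserted here is the one the comment establishes, namely that 10 denotes the NVIDIA video memory reset.

```cpp
#include <cassert>
#include <string>

// Mock of a device-reset-reason mapping. Only the value 10 is taken from the
// crash report above; the other entries are placeholders for illustration.
enum class DeviceResetReason : int {
  OK = 0,
  // ... other reasons elided ...
  NVIDIA_VIDEO = 10,  // NVIDIA video memory reset (per the crash annotation)
};

std::string DescribeReset(DeviceResetReason aReason) {
  switch (aReason) {
    case DeviceResetReason::OK:
      return "no reset";
    case DeviceResetReason::NVIDIA_VIDEO:
      return "NVIDIA video memory reset";
    default:
      return "other reset";
  }
}
```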
Comment 4•4 years ago
I think we might need to do:
for the NVIDIA video memory resets, and implement RenderDMABUFTextureHost::ClearCachedResources.
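A minimal sketch of what `RenderDMABUFTextureHost::ClearCachedResources` could look like, assuming a mock class shape (the member names and the `MockGLContext` are illustrative, not the real gfx/webrender_bindings code): on a device reset, any GL textures imported from the DMABuf belong to the dead context and must be dropped so they are lazily re-imported against the new one.

```cpp
#include <cassert>
#include <vector>

// Stand-in for a GL context so the sketch is self-contained.
struct MockGLContext {
  int liveTextures = 0;
  unsigned GenTexture() { return static_cast<unsigned>(++liveTextures); }
  void DeleteTexture(unsigned) { --liveTextures; }
};

class RenderDMABUFTextureHost {
 public:
  explicit RenderDMABUFTextureHost(MockGLContext* aGL) : mGL(aGL) {}

  // Lazily import the DMABuf as a GL texture (stubbed here).
  unsigned Lock() {
    if (mTextureHandles.empty()) {
      mTextureHandles.push_back(mGL->GenTexture());
    }
    return mTextureHandles[0];
  }

  // Called on device reset: release cached GL objects tied to the old
  // context so the next Lock() re-imports against the new one.
  void ClearCachedResources() {
    for (unsigned handle : mTextureHandles) {
      mGL->DeleteTexture(handle);
    }
    mTextureHandles.clear();
  }

 private:
  MockGLContext* mGL;
  std::vector<unsigned> mTextureHandles;
};
```

The key property is that after `ClearCachedResources()` the host holds no GL handles, so stale textures from before the reset can never be sampled by `draw_frame`.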
Comment 5•4 years ago
Comment 6•4 years ago
This should also fix device-reset issues for non-NVIDIA users who have DMABuf support enabled and experience a full device reset.
Comment 8•4 years ago
bugherder