Crash in [@ mozilla::image::DecodePool::DecodePool]
Categories
(Core :: Graphics: ImageLib, defect, P2)
People
(Reporter: mccr8, Unassigned)
Details
(Keywords: crash)
Crash Data
Crash report: https://crash-stats.mozilla.org/report/index/0774eb8f-b749-43e3-9f05-e3fd70230623
MOZ_CRASH Reason: MOZ_RELEASE_ASSERT(((bool)(__builtin_expect(!!(!NS_FAILED_impl(rv)), 1))) && mIOThread) (Should successfully create image I/O thread)
Top 10 frames of crashing thread:
0 libxul.so mozilla::image::DecodePool::DecodePool image/DecodePool.cpp:102
1 libxul.so mozilla::image::DecodePool::Singleton image/DecodePool.cpp:63
1 libxul.so mozilla::image::DecodePool::Initialize image/DecodePool.cpp:56
2 libxul.so mozilla::image::EnsureModuleInitialized image/build/nsImageModule.cpp:74
3 libxul.so mozilla::xpcom::CallInitFunc xpcom/components/StaticComponents.cpp:9483
3 libxul.so mozilla::xpcom::CreateInstanceImpl xpcom/components/StaticComponents.cpp:9973
4 libxul.so mozilla::xpcom::StaticModule::CreateInstance const xpcom/components/StaticComponents.cpp:13094
4 libxul.so xpcom/components/nsComponentManager.cpp:184
4 libxul.so nsComponentManagerImpl::GetServiceLocked xpcom/components/nsComponentManager.cpp:971
5 libxul.so nsComponentManagerImpl::GetServiceByContractID xpcom/components/nsComponentManager.cpp:1160
Failing to create a thread might indicate some kind of resource exhaustion issue we can't easily fix.
Comment 1•2 years ago
Tim, could you please have a look here when you have some cycles?
Comment 2•2 years ago
The bulk of these crashes happen on Linux.
I found an active intermittent failure hitting this same assert, bug 1777715. That bug is only hit in Linux ASan builds. I went through its failures in the last ~week; only one of them had any warnings printed (failure cases in this area usually have NS_WARN_IF)
https://treeherder.mozilla.org/logviewer?job_id=419938293&repo=autoland&lineNumber=6005
[Parent 1601, IPC Launch] WARNING: fork() failed: Cannot allocate memory: file /builds/worker/checkouts/gecko/ipc/chromium/src/base/process_util_linux.cc:280
[Parent 1601, IPC I/O Parent] WARNING: Failed to launch tab subprocess: file /builds/worker/checkouts/gecko/ipc/glue/GeckoChildProcessHost.cpp:802
So in that case it seems like things are quite wrong and there is some higher level issue (either with our code, tests, or machine running the test).
I asked pernosco to reproduce for the last ~10 failures to see if we can get a pernosco.
I also traced through some of the ways NS_NewNamedThread can fail; nothing stood out as an obvious candidate.
Comment 3•2 years ago
(In reply to Timothy Nikkel (:tnikkel) from comment #2)
> I asked pernosco to reproduce for the last ~10 failures to see if we can get a pernosco.
None of these ever came back to me with success or failure. Kyle said it was tripping up pernosco somehow.
Comment 4•8 months ago
The bug is linked to a topcrash signature, which matches the following criteria:
- Top 20 desktop browser crashes on beta
- Top 10 content process crashes on beta
- Top 5 desktop browser crashes on Linux on beta
:tnikkel, could you consider increasing the severity of this top-crash bug?
For more information, please visit BugBot documentation.
Comment 5•8 months ago
There is a spike on nightly. It looks like about 2 machines are responsible for the bulk of crashes.
We crash here, where we try to create the image I/O thread. There was a bug filed years ago about a crash like this, I believe.
I'm not sure what would cause us to fail to create new threads, but if we are in that situation the system is in a very bad state.
Comment 6•7 months ago
Based on the topcrash criteria, the crash signature linked to this bug is not a topcrash signature anymore.
For more information, please visit BugBot documentation.