Assertion failure: false (Leaked Connection::synchronousClose(), ownership fail.), at /storage/mozStorageConnection.cpp:514
Categories: Core :: Storage: StorageManager (defect, P3)
People: Reporter: jkratzer; Assignee: Unassigned
References: Blocks 2 open bugs
Keywords: testcase
Whiteboard: [bugmon:confirm]
Attachments: 1 file (3.86 KB, application/octet-stream)

Description
Testcase found while fuzzing mozilla-central rev 45d33d6757ba (built with: --enable-debug --enable-fuzzing).
Please note that the testcase is not very reliable and may take multiple attempts to trigger the issue. I'm trying to get a pernosco session for this issue.
Testcase can be reproduced using the following commands:
$ pip install fuzzfetch grizzly-framework
$ python -m fuzzfetch --build 45d33d6757ba --debug --fuzzing -n firefox
$ python -m grizzly.replay ./firefox/firefox testcase.html --repeat 100
Assertion failure: false (Leaked Connection::synchronousClose(), ownership fail.), at /storage/mozStorageConnection.cpp:514
==1262797==ERROR: UndefinedBehaviorSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f7dc495c7aa bp 0x7f7d230fd6b0 sp 0x7f7d230fd660 T1262968)
==1262797==The signal is caused by a WRITE memory access.
==1262797==Hint: address points to the zero page.
#0 0x7f7dc495c7aa in mozilla::storage::Connection::Release() /storage/mozStorageConnection.cpp:513:11
#1 0x7f7dc739a64d in ~nsCOMPtr /builds/worker/workspace/obj-build/dist/include/nsCOMPtr.h:451:7
#2 0x7f7dc739a64d in ~FileSystemDatabaseManagerVersion001 /dom/fs/parent/datamodel/FileSystemDatabaseManagerVersion001.h:75:58
#3 0x7f7dc739a64d in mozilla::dom::fs::data::FileSystemDatabaseManagerVersion001::~FileSystemDatabaseManagerVersion001() /dom/fs/parent/datamodel/FileSystemDatabaseManagerVersion001.h:75:58
#4 0x7f7dc73a05c0 in operator() /builds/worker/workspace/obj-build/dist/include/mozilla/UniquePtr.h:459:5
#5 0x7f7dc73a05c0 in reset /builds/worker/workspace/obj-build/dist/include/mozilla/UniquePtr.h:301:7
#6 0x7f7dc73a05c0 in operator= /builds/worker/workspace/obj-build/dist/include/mozilla/UniquePtr.h:271:5
#7 0x7f7dc73a05c0 in operator() /dom/fs/parent/datamodel/FileSystemDataManager.cpp:544:40
#8 0x7f7dc73a05c0 in mozilla::detail::ProxyFunctionRunnable<mozilla::dom::fs::data::FileSystemDataManager::BeginClose()::$_10, mozilla::MozPromise<bool, nsresult, false> >::Run() /builds/worker/workspace/obj-build/dist/include/mozilla/MozPromise.h:1645:29
#9 0x7f7dc3b2e655 in mozilla::TaskQueue::Runner::Run() /xpcom/threads/TaskQueue.cpp:259:20
#10 0x7f7dc3b4994f in nsThreadPool::Run() /xpcom/threads/nsThreadPool.cpp:310:14
#11 0x7f7dc3b40aa7 in nsThread::ProcessNextEvent(bool, bool*) /xpcom/threads/nsThread.cpp:1199:16
#12 0x7f7dc3b46fed in NS_ProcessNextEvent(nsIThread*, bool) /xpcom/threads/nsThreadUtils.cpp:465:10
#13 0x7f7dc472880b in mozilla::ipc::MessagePumpForNonMainThreads::Run(base::MessagePump::Delegate*) /ipc/glue/MessagePump.cpp:300:20
#14 0x7f7dc464cca7 in MessageLoop::RunInternal() /ipc/chromium/src/base/message_loop.cc:381:10
#15 0x7f7dc464cbb2 in RunHandler /ipc/chromium/src/base/message_loop.cc:374:3
#16 0x7f7dc464cbb2 in MessageLoop::Run() /ipc/chromium/src/base/message_loop.cc:356:3
#17 0x7f7dc3b3bdd6 in nsThread::ThreadFunc(void*) /xpcom/threads/nsThread.cpp:384:10
#18 0x7f7dd9ed3557 in _pt_root /nsprpub/pr/src/pthreads/ptthread.c:201:5
#19 0x7f7dda781b42 in start_thread nptl/./nptl/pthread_create.c:442:8
#20 0x7f7dda8139ff misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
UndefinedBehaviorSanitizer can not provide additional info.
SUMMARY: UndefinedBehaviorSanitizer: SEGV /storage/mozStorageConnection.cpp:513:11 in mozilla::storage::Connection::Release()
==1262797==ABORTING
Comment 1 • 6 months ago (Reporter)

Comment 2 • 6 months ago
Bugmon Analysis
Unable to reproduce bug 1791617 using build mozilla-central 20220920092542-45d33d6757ba. Without a baseline, bugmon is unable to analyze this bug.
Removing bugmon keyword as no further action possible. Please review the bug and re-add the keyword for further analysis.
Updated • 6 months ago

Comment 4 • 3 months ago (Reporter)
We saw this issue consistently from 2022/09/14 to 2022/09/21, at which point no more crashes were identified. I think we can safely close this for now.