Intermittent dom/tests/mochitest/fetch/test_readableStreams.html | application crashed [@ nsAutoOwningThread::AssertCurrentThreadOwnsMe(char const*) const] | After application terminated with exit code 245
Categories
(Core :: DOM: Networking, defect)
People
(Reporter: intermittent-bug-filer, Assigned: asuth)
References
Details
(4 keywords, Whiteboard: [adv-main104+r][adv-esr91.13+r][adv-esr102.2+r])
Crash Data
Attachments
(3 files)
48 bytes, text/x-phabricator-request | diannaS: approval-mozilla-beta+ | RyanVM: approval-mozilla-release- | RyanVM: approval-mozilla-esr102+ | tjr: sec-approval+ | Details | Review
48 bytes, text/x-phabricator-request | Details | Review
9.74 KB, patch | RyanVM: approval-mozilla-esr91+ | Details | Diff | Splinter Review
Filed by: nbeleuzu [at] mozilla.com
Parsed log: https://treeherder.mozilla.org/logviewer?job_id=378896110&repo=autoland
Full log: https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/CWf5dekrTEm8nFZiSOCPtw/runs/0/artifacts/public/logs/live_backing.log
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - PROCESS-CRASH | dom/tests/mochitest/fetch/test_readableStreams.html | application crashed [@ nsAutoOwningThread::AssertCurrentThreadOwnsMe(char const*) const]
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Mozilla crash reason: FetchStreamReader not thread-safe
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Crash dump filename: /var/folders/xv/h93qvghx71s8ldx7t2t2d6qr000014/T/tmps6rmmw8y.mozrunner/minidumps/90C04473-B525-40E6-A8C6-A4DD04E9A6E9.dmp
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Operating system: Mac OS X
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - 10.15.7 19H524
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - CPU: amd64
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - family 6 model 158 stepping 10
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - 12 CPUs
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO -
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Crash reason: EXC_BAD_ACCESS / KERN_INVALID_ADDRESS
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Crash address: 0x0
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Mac Crash Info:
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO -
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Process uptime: 43 seconds
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO -
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - Thread 33 StreamTrans #30 (crashed)
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - 0 XUL!nsAutoOwningThread::AssertCurrentThreadOwnsMe(char const*) const [nsISupportsImpl.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 43 + 0x1e]
[task 2022-05-22T09:13:58.196Z] 09:13:58 INFO - rax = 0x0000000103b9c3e8 rdx = 0x0000000000000000
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rcx = 0x00007000052b90ac rbx = 0x0000000105bc2710
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rsi = 0x00000000000120a8 rdi = 0x00007fff91067ca8
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rbp = 0x00007000052b8730 rsp = 0x00007000052b8720
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r8 = 0x00000000000130a8 r9 = 0x0000000000000000
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r10 = 0x00007fff91067cc8 r11 = 0x00007fff91067cc0
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r12 = 0x0000000000001000 r13 = 0x00007000052b89b4
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r14 = 0x0000000110c23635 r15 = 0x000000011d423320
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rip = 0x0000000106d06a7a
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - Found by: given as instruction pointer in context
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - 1 XUL!mozilla::dom::FetchStreamReader::Release() [FetchStreamReader.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 34 + 0x28]
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rbx = 0x000000011d4232e0 rbp = 0x00007000052b8760
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rsp = 0x00007000052b8740 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x000000011ca4be80
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r15 = 0x000000011d423320 rip = 0x0000000109b02169
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - 2 XUL!already_AddRefed<mozilla::CancelableRunnable> NS_NewCancelableRunnableFunction<CallbackHolder::CallbackHolder(nsIAsyncOutputStream*, nsIOutputStreamCallback*, unsigned int, nsIEventTarget*)::{lambda()#1}>(char const*, CallbackHolder::CallbackHolder(nsIAsyncOutputStream*, nsIOutputStreamCallback*, unsigned int, nsIEventTarget*)::{lambda()#1}&&)::FuncCancelableRunnable::~FuncCancelableRunnable() [nsThreadUtils.h:28d8297085fe0974241e612f2204a1d6a0169d4c : 662 + 0x39]
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rbx = 0x000000011d4232e0 rbp = 0x00007000052b8790
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rsp = 0x00007000052b8770 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x000000011d423318
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r15 = 0x000000011d423320 rip = 0x0000000106d93d6a
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - 3 XUL!mozilla::Runnable::Release() [nsThreadUtils.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 61 + 0x57]
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rbx = 0x0000000000000000 rbp = 0x00007000052b87b0
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - rsp = 0x00007000052b87a0 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x000000011d4232e0
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - r15 = 0x000000011c54a200 rip = 0x0000000106ddb648
[task 2022-05-22T09:13:58.197Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - 4 XUL!mozilla::dom::(anonymous namespace)::ExternalRunnableWrapper::~ExternalRunnableWrapper() [WorkerPrivate.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 188 + 0x2c]
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rbx = 0x00000001062feb80 rbp = 0x00007000052b87d0
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rsp = 0x00007000052b87c0 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x00000001062febb8
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r15 = 0x000000011c54a200 rip = 0x000000010ab0201d
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - 5 XUL!mozilla::dom::(anonymous namespace)::ExternalRunnableWrapper::Release() [WorkerPrivate.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 185 + 0x5c]
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rbx = 0x0000000000000000 rbp = 0x00007000052b87f0
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rsp = 0x00007000052b87e0 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x00000001062feb80
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r15 = 0x000000011c54a200 rip = 0x000000010ab01cfd
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - 6 XUL!mozilla::dom::WorkerEventTarget::Dispatch(already_AddRefed<nsIRunnable>, unsigned int) [WorkerEventTarget.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 105 + 0x8]
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rbx = 0x00000001062feb80 rbp = 0x00007000052b8840
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rsp = 0x00007000052b8800 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x000000011c54a210
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r15 = 0x000000011c54a200 rip = 0x000000010aac626e
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - 7 XUL!CallbackHolder::Notify() [nsPipe3.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 117 + 0x23]
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rbx = 0x000000011c54a200 rbp = 0x00007000052b8880
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rsp = 0x00007000052b8850 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x0000000000000000
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - r15 = 0x0000000000000008 rip = 0x0000000106d85188
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - 8 XUL!nsPipe::AdvanceReadCursor(nsPipeReadState&, unsigned int) [nsPipe3.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 674 + 0x2d]
[task 2022-05-22T09:13:58.198Z] 09:13:58 INFO - rbx = 0x0000000000000000 rbp = 0x00007000052b88c0
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rsp = 0x00007000052b8890 r12 = 0x0000000000001000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x0000000000000000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r15 = 0x0000000000000008 rip = 0x0000000106d82e68
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - 9 XUL!AutoReadSegment::~AutoReadSegment() [nsPipe3.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 464 + 0xb]
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rbx = 0x00007000052b8940 rbp = 0x00007000052b8900
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rsp = 0x00007000052b88d0 r12 = 0x0000000000000000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x00007000052b8940
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x0000000106d930ba
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - 10 XUL!nsPipeInputStream::ReadSegments(nsresult (*)(nsIInputStream*, void*, char const*, unsigned int, unsigned int, unsigned int*), void*, unsigned int, unsigned int*) [nsPipe3.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 1374 + 0x7]
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rbx = 0x00007000052b8940 rbp = 0x00007000052b89a0
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rsp = 0x00007000052b8910 r12 = 0x0000000000000000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r13 = 0x00007000052b89b4 r14 = 0x000000000000f000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x0000000106d864f2
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - 11 XUL!mozilla::dom::MutableBlobStreamListener::OnDataAvailable(nsIRequest*, nsIInputStream*, unsigned long long, unsigned int) [MutableBlobStreamListener.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 78 + 0x1d]
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rbx = 0xaaaaaaaaaaaaaaaa rbp = 0x00007000052b89c0
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rsp = 0x00007000052b89b0 r12 = 0x0000000000010000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r13 = 0x0000000000000000 r14 = 0x0000000105ce9290
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r15 = 0x0000000105ce9200 rip = 0x0000000109b391f5
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - 12 XUL!nsInputStreamPump::OnStateTransfer() [nsInputStreamPump.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 549 + 0x20]
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rbx = 0xaaaaaaaaaaaaaaaa rbp = 0x00007000052b8a40
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - rsp = 0x00007000052b89d0 r12 = 0x0000000000010000
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r13 = 0x0000000000000000 r14 = 0x0000000105ce9290
[task 2022-05-22T09:13:58.199Z] 09:13:58 INFO - r15 = 0x0000000105ce9200 rip = 0x0000000106fbdc81
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - 13 XUL!nsInputStreamPump::OnInputStreamReady(nsIAsyncInputStream*) [nsInputStreamPump.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 378 + 0x7]
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rbx = 0x0000000105ce9200 rbp = 0x00007000052b8a90
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rsp = 0x00007000052b8a50 r12 = 0x0000000103da2a70
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r13 = 0x00007000052b8b30 r14 = 0x0000000105ce9290
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r15 = 0x0000000106fbd7fc rip = 0x0000000106fbd548
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - 14 XUL!{virtual override thunk({offset(-8)}, nsInputStreamPump::OnInputStreamReady(nsIAsyncInputStream*))} + 0xc
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rbx = 0x000000011c6b5900 rbp = 0x00007000052b8aa0
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rsp = 0x00007000052b8aa0 r12 = 0x0000000103da2a70
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r13 = 0x00007000052b8b30 r14 = 0x0000000103da29b8
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x0000000106fbe3ed
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - 15 XUL!already_AddRefed<mozilla::CancelableRunnable> NS_NewCancelableRunnableFunction<CallbackHolder::CallbackHolder(nsIAsyncInputStream*, nsIInputStreamCallback*, unsigned int, nsIEventTarget*)::{lambda()#1}>(char const*, CallbackHolder::CallbackHolder(nsIAsyncInputStream*, nsIInputStreamCallback*, unsigned int, nsIEventTarget*)::{lambda()#1}&&)::FuncCancelableRunnable::Run() [nsThreadUtils.h:28d8297085fe0974241e612f2204a1d6a0169d4c : 650 + 0x15]
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rbx = 0x000000011c6b5900 rbp = 0x00007000052b8ab0
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rsp = 0x00007000052b8ab0 r12 = 0x0000000103da2a70
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r13 = 0x00007000052b8b30 r14 = 0x0000000103da29b8
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x0000000106d93540
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - 16 XUL!nsThreadPool::Run() [nsThreadPool.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 310 + 0x15]
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rbx = 0x000000011c6b5900 rbp = 0x00007000052b8bc0
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - rsp = 0x00007000052b8ac0 r12 = 0x0000000103da2a70
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r13 = 0x00007000052b8b30 r14 = 0x0000000103da29b8
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x0000000106df2222
[task 2022-05-22T09:13:58.200Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - 17 XUL!nsThread::ProcessNextEvent(bool, bool*) [nsThread.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 1174 + 0x15]
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rbx = 0x00000000ffffffff rbp = 0x00007000052b8ce0
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rsp = 0x00007000052b8bd0 r12 = 0x00001352fbfffda7
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r13 = 0x000000011c6b5900 r14 = 0x0000000000000000
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x0000000106de8f98
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - 18 XUL!NS_ProcessNextEvent(nsIThread*, bool) [nsThreadUtils.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 465 + 0xf]
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rbx = 0x0000000000000000 rbp = 0x00007000052b8d10
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rsp = 0x00007000052b8cf0 r12 = 0x00000001062fe060
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r13 = 0x00000001062fe070 r14 = 0x00000001062fe040
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r15 = 0x000000011c6b5900 rip = 0x0000000106def3df
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - 19 XUL!mozilla::ipc::MessagePumpForNonMainThreads::Run(base::MessagePump::Delegate*) [MessagePump.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 300 + 0x9]
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rbx = 0x00007000052b8db0 rbp = 0x00007000052b8d60
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rsp = 0x00007000052b8d20 r12 = 0x00000001062fe060
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r13 = 0x00000001062fe070 r14 = 0x00000001062fe040
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r15 = 0x000000011c6b5900 rip = 0x000000010773c111
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - 20 XUL!MessageLoop::Run() [message_loop.cc:28d8297085fe0974241e612f2204a1d6a0169d4c : 355 + 0x4]
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rbx = 0x000000011c6b5900 rbp = 0x00007000052b8d90
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - rsp = 0x00007000052b8d70 r12 = 0x000000000000000f
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r13 = 0x000000011ca49128 r14 = 0x000000011ca49120
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - r15 = 0x00007000052b8db0 rip = 0x00000001076b9914
[task 2022-05-22T09:13:58.201Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - 21 XUL!nsThread::ThreadFunc(void*) [nsThread.cpp:28d8297085fe0974241e612f2204a1d6a0169d4c : 378 + 0x7]
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rbx = 0x000000011c6b5900 rbp = 0x00007000052b8f70
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rsp = 0x00007000052b8da0 r12 = 0x000000000000000f
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r13 = 0x000000011ca49128 r14 = 0x000000011ca49120
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r15 = 0x00007000052b8db0 rip = 0x0000000106de4255
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - 22 libnss3.dylib!_pt_root [ptthread.c:28d8297085fe0974241e612f2204a1d6a0169d4c : 201 + 0x9]
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rbx = 0x00007000052b9000 rbp = 0x00007000052b8fb0
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rsp = 0x00007000052b8f80 r12 = 0x0000000105bc32f0
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r13 = 0x0000000000000000 r14 = 0x00007000052b9000
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r15 = 0x0000000000000002 rip = 0x00000001039225a9
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - 23 libsystem_pthread.dylib!_pthread_start + 0x93
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rbx = 0x00007000052b9000 rbp = 0x00007000052b8fd0
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rsp = 0x00007000052b8fc0 r12 = 0x0000000000000000
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r13 = 0x0000000000000000 r14 = 0x0000000000000000
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x00007fff6a9c0109
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - Found by: call frame info
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - 24 libsystem_pthread.dylib!thread_start + 0xe
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rbx = 0x0000000000000000 rbp = 0x00007000052b8ff0
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - rsp = 0x00007000052b8fe0 r12 = 0x0000000000000000
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r13 = 0x0000000000000000 r14 = 0x0000000000000000
[task 2022-05-22T09:13:58.202Z] 09:13:58 INFO - r15 = 0x0000000000000000 rip = 0x00007fff6a9bbb8b
[task 2022-05-22T09:13:58.203Z] 09:13:58 INFO - Found by: call frame info
Comment 1•2 years ago
Matthew, could this be related to your streams work? Thanks.
I'll mark this sec-high, because releasing an object on the wrong thread can potentially lead to a UAF. I don't know how exploitable that would really be.
Comment 2•2 years ago
The severity field for this bug is set to S4. However, the bug is flagged with the sec-high keyword.
:dragana, could you consider increasing the severity of this security bug?
For more information, please visit auto_nag documentation.
Comment 3•2 years ago
I guess this is actually streams-related, not networking.
Comment 4•2 years ago
This looks like FetchStreamReader is a non-threadsafe nsIOutputStreamCallback. When the pipe code gets an update and tries to notify the callback, it attempts to dispatch a runnable which owns a reference to that callback from the transport thread to the required thread. Normally using a non-threadsafe callback is OK, because if the dispatch fails, the event target will not release the runnable and will instead leak it. In this case, however, the dispatch appears to fail, and WorkerEventTarget appears to release the runnable on failure (https://searchfox.org/mozilla-central/rev/1c391443f770eddc7cde9e52dba5ef50cc233c06/dom/workers/WorkerEventTarget.cpp#89), meaning that if the dispatch fails, we'll end up releasing the runnable on the wrong thread and crashing.
There appear to be two separate bugs here. One is that WorkerEventTarget releases runnables on the wrong thread on failure, which can cause problems for code that assumes it is protected by automatic runnable leaking; the other is that some code does not cancel stream listener callbacks before the worker thread is gone, leading to these very late dispatches.
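The dispatch-failure distinction in this diagnosis can be sketched in standalone C++ (a std::thread model with invented Task/Dispatch names, not Mozilla's actual nsIEventTarget or runnable types): a release-on-failure dispatcher destroys the task on whatever thread called it, while a leak-on-failure dispatcher deliberately gives up ownership so the destructor never runs off-thread.

```cpp
#include <cassert>
#include <memory>
#include <thread>

// Stand-in for a non-threadsafe object: it must be destroyed on the thread
// that created it (models NS_ASSERT_OWNINGTHREAD).
struct Task {
  std::thread::id owner = std::this_thread::get_id();
  ~Task() { assert(std::this_thread::get_id() == owner); }
};

// Release-on-failure dispatch (the WorkerEventTarget behavior in this bug):
// when the target is gone, the unique_ptr destroys the task right here, on
// whichever thread called Dispatch -- off-thread if the caller isn't the owner.
bool DispatchReleasing(std::unique_ptr<Task> task, bool targetAlive) {
  if (!targetAlive) {
    return false;  // task destroyed on the dispatching thread
  }
  task.reset();  // stand-in for handing the task off to the target thread
  return true;
}

// Leak-on-failure dispatch (what non-threadsafe callbacks rely on): give up
// ownership without running the destructor, so nothing is freed off-thread.
bool DispatchLeaking(std::unique_ptr<Task> task, bool targetAlive) {
  if (!targetAlive) {
    (void)task.release();  // intentional leak; destructor never runs here
    return false;
  }
  task.reset();
  return true;
}
```

In this model, a failed `DispatchReleasing` from a non-owner thread trips the owning-thread assertion, exactly the shape of the crash in comment 0, while `DispatchLeaking` trades a small leak for thread safety.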
Comment 5•2 years ago
(the above looks like a better diagnosis than I would have provided, so clearing ni?... it also seems like this isn't directly a Streams issue; back to networking perhaps?)
Comment 6•2 years ago
Hi Andrew, can you have a look at nika's diagnosis? Thanks
Assignee
Comment 7•2 years ago
Thank you for the excellent analysis, Nika!
It seems like FetchStreamReader is only used for JS-implemented ReadableStreams (creation 1 and 2), so this is readable stream specific in terms of where we'd expect to see this failure.
FetchStreamReader only holds a WeakWorkerRef, which means it does nothing to ensure that its owning thread stays alive long enough to receive the AsyncWait; once the callback passed to the WeakWorkerRef constructor is called, the only guarantee we have is that the event loop will be fully spun at least once. FetchStreamReader also never cancels its AsyncWait calls, which means that we definitely expect a callback to be dispatched at the worker event target at some point. The WeakWorkerRef, however, does synchronously call CloseAndRelease, so if we were to call mPipeOut->AsyncWait(nullptr, 0, 0, mOwningEventTarget) before calling close, then presumably we can rely on one of the following:
- A callback has already been dispatched and is already queued on the worker thread. Because of invariants about draining the worker thread's event queue to completion, the callback will be released on the worker thread. (Our sketchy cancellation logic could be invoked, though.)
- A callback will never be dispatched to the worker thread!
Nika, am I correct that it is safe for us to call AsyncWait like that to ensure the callback is canceled? (That's what I believe you were explicitly implying we should do, but I am very paranoid about (nsI)streams, especially now! ;)
I don't think we want to change things to leak the runnable here in the current code; at least currently it's still better to find the logic bugs. I also think we're just going to end up removing much of the WorkerThread/WorkerEventTarget stuff, so ideally we'll end up using existing, better-written code that will leak/assert as appropriate.
This is definitely the case of a cycle-collected refcount being freed on the wrong thread and after we would expect the underlying cycle collected runtime to be gone. In the event the callback does get to where it's going and is invoked, the callback turns into a no-op because mStream is already closed so any security risk would be related to the cycle-collected refcount being sad and trying to access the runtime... does it do that?
Comment 8•2 years ago
(In reply to Andrew Sutherland [:asuth] (he/him) from comment #7)
FetchStreamReader only holds a WeakWorkerRef which means it does nothing to ensure that its owning thread stays alive long enough to receive the AsyncWait; once the callback passed to the WeakWorkerRef constructor is called, the only guarantee we have is that the event loop will be fully spun at least once. FetchStreamReader also never cancels its AsyncWait calls which means that we definitely expect a callback to be dispatched at the worker event target at some point. The WeakWorkerRef, however, does synchronously call CloseAndRelease, so presumably if we were to call mPipeOut->AsyncWait(nullptr, 0, 0, mOwningEventTarget) before calling close then presumably we can rely on either:
- A callback has already been dispatched and is already queued on the worker thread. Because of invariants about draining the worker thread's event queue to completion, the callback will be released on the worker thread. (Our sketchy cancellation logic could be invoked, though.)
- A callback will never be dispatched to the worker thread!
Nika, am I correct that it is safe for us to call AsyncWait like that to ensure the callback is canceled? (That's what I believe you were explicitly implying we should do, but I am very paranoid about (nsI)streams, especially now! ;)
Calling Close on a pipe will fire any async callbacks to notify the pipe when it's invoked, so clearing the AsyncWait callback immediately before calling Close is definitely already too late.
Clearing the callback also doesn't cancel callbacks which are already in flight, and they're dispatched after the pipe's monitor is released, so there's always a potential race if the thread dies at some point, as theoretically the thread which is trying to notify could stall for an arbitrary amount of time after unlocking, so you unfortunately can't rely on being able to cancel callbacks before the thread goes away.
Your best bet is probably going to be to hold a strong worker ref while an AsyncWait callback is pending, to prevent the thread from going away before the runnable is dispatched.
It'd be nice to properly allow synchronously canceling AsyncWait callbacks, but it's not really clear how to make that happen given that we want to be unlocked when dispatching.
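The strong-ref suggestion can be modeled in standalone C++ with shared_ptr/weak_ptr (the Worker, WeakReader, StrongReader, ArmWait, and Notify names are invented for illustration; this is not Mozilla's WorkerRef API): pinning the worker when the wait is armed guarantees it is still there when the callback is delivered.

```cpp
#include <cassert>
#include <memory>

struct Worker {
  int processed = 0;
};

// Weak-ref reader (the pre-fix FetchStreamReader shape): nothing keeps the
// worker alive, so a callback arriving after shutdown has nowhere to run.
class WeakReader {
 public:
  explicit WeakReader(const std::shared_ptr<Worker>& w) : mWeak(w) {}
  bool Notify() {
    if (auto w = mWeak.lock()) {
      w->processed++;
      return true;
    }
    return false;  // worker already gone; in the real bug this is the crash path
  }
 private:
  std::weak_ptr<Worker> mWeak;
};

// Strong-ref reader (the fix's shape): pin the worker while a wait is armed,
// drop the pin only once the callback has been delivered.
class StrongReader {
 public:
  explicit StrongReader(const std::shared_ptr<Worker>& w) : mWeak(w) {}
  void ArmWait() { mStrong = mWeak.lock(); }
  bool Notify() {
    if (!mStrong) {
      return false;
    }
    mStrong->processed++;
    mStrong.reset();  // safe to unpin: the callback has fired
    return true;
  }
 private:
  std::weak_ptr<Worker> mWeak;
  std::shared_ptr<Worker> mStrong;
};
```

The design trade-off mirrors the comment above: the strong reference intentionally delays "worker shutdown" (the last shared_ptr release) until the pending callback has run.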
This is definitely the case of a cycle-collected refcount being freed on the wrong thread and after we would expect the underlying cycle collected runtime to be gone. In the event the callback does get to where it's going and is invoked, the callback turns into a no-op because mStream is already closed so any security risk would be related to the cycle-collected refcount being sad and trying to access the runtime... does it do that?
Cycle-collected objects definitely need to be released on the thread they're created on otherwise they can break the cycle collection runtime and cause all sorts of issues. It definitely will try to access global state when it's destroyed to update its state, though I don't have the code memorized well enough to explain all of the ways things can go wrong in this case.
Comment 9•2 years ago
I'd like to move this to a better component so someone can give this a proper severity/priority. Any suggestions asuth?
Assignee
Comment 10•2 years ago
Assignee
Comment 11•2 years ago
Depends on D149185
Assignee
Comment 12•2 years ago
I think this family of crashes may end up just being a null deref, so this may not need to be sec-high. Specifically, I think we will see a null crash in NS_CycleCollectorSuspect3 on
    if (MOZ_LIKELY(data->mCollector)) {
In comment 0 the stack we're seeing is from the first of these lines in NS_IMPL_CYCLE_COLLECTING_RELEASE_WITH_DESTROY:
    NS_ASSERT_OWNINGTHREAD(_class);                                          \
    nsISupports* base = NS_CYCLE_COLLECTION_CLASSNAME(_class)::Upcast(this); \
    nsrefcnt count = mRefCnt.decr(base);                                     \
The last line's call to decr goes to nsCycleCollectingAutoRefCnt::decr, which does a bunch of clever bitmask stuff with mRefCntAndFlags but most notably calls NS_CycleCollectorSuspect3 as suspect, which will then attempt a Thread-Local Storage (TLS) lookup and, if that fails, crash on a null pointer deref.
Note that the key thing here is that the specific risky scenario is coming from nsIAsyncInputStream::AsyncWait callbacks, which we expect to be happening on either the Stream Transport Service thread pool (as in comment 0) or on a different background thread reached via NS_DispatchBackgroundTask (ex: DataPipe does this when AsyncWait specifies no explicit target thread, as is the case for one of the corrected classes) or similar. None of these threads are expected to have cycle-collected runtimes. Even when a pipe would be written to from a cycle-collected thread, we would expect the input stream pump to actually be running on the STS when attempting to dispatch a runnable.
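The null-deref theory in this comment can be modeled in a few lines of standalone C++ (the Collector, tlsCollector, and SuspectObject names are invented; this is not the actual NS_CycleCollectorSuspect3 code): a thread-local collector pointer is only ever initialized on cycle-collecting threads, so the lookup comes back null on an STS-like thread.

```cpp
#include <cassert>
#include <thread>

struct Collector {
  int suspected = 0;
};

// Initialized only on threads that run a cycle collector; a transport-pool
// thread never sets it, so the lookup there yields nullptr.
thread_local Collector* tlsCollector = nullptr;

// Models the suspect path: without the null check, dereferencing the TLS
// pointer on a non-CC thread is the null deref described in this comment.
bool SuspectObject() {
  if (!tlsCollector) {
    return false;  // non-CC thread: bail out instead of crashing
  }
  tlsCollector->suspected++;
  return true;
}
```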
Assignee
Comment 13•2 years ago
Comment on attachment 9281104 [details]
Bug 1770630 - Worker stream readers should contribute to busy count. r=#dom-workers!
Security Approval Request
- How easily could an exploit be constructed based on the patch?: As described in comment 12, it's not clear this can actually be exploited, but it would be nice to have a second set of eyes on that as maybe I'm missing something. That is, this may just be a null deref crash when MOZ_DIAGNOSTIC_ASSERT and NS_ASSERT_OWNINGTHREAD aren't enabled.
In terms of the patch, an attacker could probably figure out that the problem involves worker shutdown by noticing that we're upgrading WeakWorkerRefs to StrongWorkerRefs. The commit message focuses on Workers' confusing busy count semantics since that's a side-effect of the conversion from WeakWorkerRef to StrongWorkerRef in an attempt to make would-be attackers' heads hurt.
If my analysis in comment 12 is wrong, then triggering this interesting scenario would have a timing race, but workers do allow for a fair amount of control over triggering worker shutdown, so it's not clear that's a huge impediment. (Specifically, the parent/owner of the Worker binding can call terminate() and the worker global itself can call globalThis.close(), both of which will lead to predictable worker shutdown.)
- Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?: Yes
- Which older supported branches are affected by this flaw?: all
- If not all supported branches, which bug introduced the flaw?: None
- Do you have backports for the affected branches?: No
- If not, how different, hard to create, and risky will they be?: As a test I de-bitrotted esr91 and it was just some superficial drift so it should be easy.
- How likely is this patch to cause regressions; how much testing does it need?: This is low-ish risk, but I think it should at least make it to m-c before we try to land it on any other branches.
- Is Android affected?: Yes
Comment 14•2 years ago
Comment on attachment 9281104 [details]
Bug 1770630 - Worker stream readers should contribute to busy count. r=#dom-workers!
Approved to land and uplift when ready.
Comment 15•2 years ago
Landed: https://hg.mozilla.org/integration/autoland/rev/cc893e08aa2f8e98312e33eb5af8f684b015c29c
Backed out for causing wpt failures in FileReader-multiple-reads.any.worker.html:
https://hg.mozilla.org/integration/autoland/rev/0f1b68697a667b2da901317342d2fb16bd420d13
Comment 16•2 years ago
Worker stream readers should contribute to busy count. r=dom-worker-reviewers,jstutte
https://hg.mozilla.org/integration/autoland/rev/68257659814773dd5ce01d8dc9962eb3cd2313c9
https://hg.mozilla.org/mozilla-central/rev/682576598147
Comment 18•2 years ago
FYI, this grafts cleanly to ESR102 but would need rebasing for ESR91.
Assignee
Comment 19•2 years ago
(In reply to Jens Stutte [:jstutte] from comment #17)
Do we want to uplift this to beta 104?
Yes to all uplifts; I was giving this a few days to try and make sure there were no obvious new problems reported by CI. I'll try out moz-phab uplift for ESR91 since I changed the commit after the initial landing bounced and my manual cherry-pick rebase for ESR91 is therefore out-of-date and I want to try the new automation assistance.
Comment 20•2 years ago
The patch landed in nightly and beta is affected.
:asuth, is this bug important enough to require an uplift?
- If yes, please nominate the patch for beta approval.
- If no, please set status-firefox104 to wontfix.
For more information, please visit auto_nag documentation.
Assignee
Comment 21•2 years ago
It seems that moz-phab only understands the beta train right now. It seems like our phabricator instance doesn't understand esr91 to be its own tree and I'm unclear on whether it could understand esr91 as a branch because I don't understand if it uses the unified tree or not. The moz-phab docs reference https://wiki.mozilla.org/Release_Management/Feature_Uplift which unfortunately didn't provide any clarity on how to provide the patch, so I'm attaching it here.
I was unable to fully validate this test because bug 1773259 means I can't build ESR91, but I saw no indications my patch was the source of any problems. I would like to do more but I am notionally on PTO.
Assignee
Comment 22•2 years ago
Comment on attachment 9281104 [details]
Bug 1770630 - Worker stream readers should contribute to busy count. r=#dom-workers!
Beta/Release Uplift Approval Request
- User impact if declined: See comment 12 and comment 13. In general we expect null de-ref crashes of content processes due to worker shutdown edge-cases without this patch HOWEVER with these fixes we need to worry less about more complicated security-related shenanigans a clever attacker could try and use.
- Is this code covered by automated tests?: Yes
- Has the fix been verified in Nightly?: Yes
- Needs manual test from QE?: No
- If yes, steps to reproduce:
- List of other uplifts needed: None
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky): The patch strengthens life-cycle invariants and should only have an impact in cases where a bad thing would already have been happening.
- String changes made/needed: none
- Is Android affected?: Yes
ESR Uplift Approval Request
- If this is not a sec:{high,crit} bug, please state case for ESR consideration: Although I make the case this is a sec-med in comment 12, it's reasonable to act like this is a sec-high.
- User impact if declined: see above
- Fix Landed on Version:
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky):
Comment 23•2 years ago
Comment on attachment 9281104 [details]
Bug 1770630 - Worker stream readers should contribute to busy count. r=#dom-workers!
Not something we need in a 103 dot release.
Comment 24•2 years ago
Comment on attachment 9281104 [details]
Bug 1770630 - Worker stream readers should contribute to busy count. r=#dom-workers!
Approved for 104.0b8
Comment 25•2 years ago
Comment on attachment 9281104 [details]
Bug 1770630 - Worker stream readers should contribute to busy count. r=#dom-workers!
Approved for 102.2esr and 91.13esr.
Comment 26•2 years ago
uplift
Comment 27•2 years ago
uplift
Comment 28•11 months ago
Pushed by bugmail@asutherland.org: https://hg.mozilla.org/integration/autoland/rev/e2c0baa73a21 Improve WorkerRef usage comments. r=jstutte DONTBUILD
Comment 29•11 months ago
bugherder