AddressSanitizer: heap-use-after-free [@ nsAtomTable::Atomize] with WRITE of size 8 through [@ selectors::parser::parse_one_simple_selector]
Categories
(Core :: CSS Parsing and Computation, defect, P2)
Tracking
Tracking | Status
---|---
firefox-esr60 | unaffected
firefox-esr68 | wontfix
firefox69 | wontfix
firefox70 | wontfix
firefox71 | fixed
People
(Reporter: decoder, Assigned: emilio)
References
(Blocks 1 open bug)
Details
(4 keywords, Whiteboard: [post-critsmash-triage][adv-main71+r])
Attachments
(2 files)
The attached crash information was submitted via the ASan Nightly Reporter on mozilla-central-asan-nightly revision 70.0a1-20190826094255-https://hg.mozilla.org/mozilla-central/rev/c75d6a0539eb4b2c7961f0c9782fcb364198c6b2.
For detailed crash information, see attachment.
From the stack, this looks like a shutdown crash (mozilla::ShutdownXPCOM called in the free stack).
Reporter
Comment 1•5 years ago
Comment 2•5 years ago
Actually, as decoder said, this looks like a shutdown race, so I think sec-moderate is okay. It looks like Rust code still refers to an atom, but XPCOM shut down from underneath it.
Updated•5 years ago
Assignee
Comment 3•5 years ago
So yeah, it looks like we're parsing stuff off-main-thread and trying to atomize stuff while the main thread is shutting down and has already freed the atom table...
It's not 100% clear to me how to best fix it. We could make async parsing tasks somehow interruptible, or block shutdown on them, or something like that...
Nick, Bobby, any thoughts?
Assignee
Comment 4•5 years ago
How do we deal with stuff like workers that can still do stuff while we shut down?
Comment 5•5 years ago
A hacky thing that might help is to move the call to NS_ShutdownAtomTable() as late as possible within ShutdownXPCOM().
Another thought is to make NS_Atomize() (and similar functions) check if gAtomTable is null, and fail if so. Some callers check for a null return value. (Ah, but I see the comments on the declarations say that this function never returns null, so that probably won't work.)
Probably the safest thing is to make the atom table immortal. We could just never free it, though that might make leak checkers complain. (There is already some NS_FREE_PERMANENT_DATA code in nsAtomTable.cpp to placate leak checkers. We could free the table only if NS_FREE_PERMANENT_DATA is defined, which would solve the problem on release, though ASan builds might still complain!) Or we could put it in static memory, though that would require some care, particularly to avoid the addition of static constructors.
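As an editorial aside, here is a minimal Rust sketch of that last "immortal table" idea, assuming a hypothetical AtomTable type standing in for the real C++ gAtomTable: leaking the allocation gives it a 'static lifetime, so a thread racing with shutdown can never observe a freed table, at the cost of a report from leak checkers.

// Hedged sketch: AtomTable is a hypothetical stand-in for the C++ gAtomTable.
use std::collections::HashMap;
use std::sync::Mutex;

struct AtomTable {
    atoms: Mutex<HashMap<String, u64>>,
}

// Leak the table on purpose: the returned &'static reference can never
// dangle, so even code still running during shutdown sees a live table.
// Leak checkers will flag this unless leak-checking builds free it
// explicitly, which is the tension discussed in this comment.
fn create_immortal_atom_table() -> &'static AtomTable {
    Box::leak(Box::new(AtomTable {
        atoms: Mutex::new(HashMap::new()),
    }))
}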
Comment 6•5 years ago
ASan defines NS_FREE_PERMANENT_DATA.
Comment 7•5 years ago
If we make the atom table immortal but keep leak reporting working, we'll get intermittent leaks reported whenever we get into this situation of the CSS parsing thread still running while we shut down. Waiting on the CSS parsing thread to be done seems better.
Assignee
Comment 8•5 years ago
Waiting on a potentially long task during shutdown doesn't look great, though...
Comment 9•5 years ago
So, we really have two cases: debug builds (where NS_FREE_PERMANENT_DATA is defined, and we want to do expensive shutdown logic to verify against leaks), and opt builds (where we want to exit as quickly as possible). There is important leak checking we want to do on atom table shutdown [1], and opt builds already just fast-kill content processes without shutting down XPCOM at all [2].
So I think what we want here is something like:
void ShutdownXPCOM() {
...
#ifdef NS_FREE_PERMANENT_DATA
DontStartAnyNewAsyncParseJobsAndWaitForCurrentOnesToFinish();
NS_ShutdownAtomTable();
#endif
...
}
[1] https://searchfox.org/mozilla-central/rev/7088fc958db5935eba24b413b1f16d6ab7bd13ea/xpcom/ds/nsAtomTable.cpp#366
[2] https://searchfox.org/mozilla-central/rev/7088fc958db5935eba24b413b1f16d6ab7bd13ea/dom/ipc/ContentChild.cpp#2370
Assignee
Comment 10•5 years ago
Ok, I'll try to give something like that a shot.
Comment 11•5 years ago
Thanks Emilio!
Assignee
Comment 12•5 years ago
Assignee
Comment 13•5 years ago
Comment on attachment 9089464 [details]
Bug 1577439 - Shutdown Servo's thread-pool in leak-checking builds, leak the atom table elsewhere.
How does something like this look?
The only missing piece (AFAICT) is doing the necessary build system dance so that I can turn the #[cfg(feature = "gecko_debug")] into something that reflects NS_FREE_PERMANENT_DATA.
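For illustration, a minimal Rust sketch of what that cfg gate could look like, assuming a cargo feature named gecko_debug (per the comment) that the build system would enable exactly when NS_FREE_PERMANENT_DATA is defined; the function name is hypothetical:

// Hypothetical sketch: gate the expensive shutdown path on a cargo feature
// that the build system enables iff NS_FREE_PERMANENT_DATA is defined.
#[cfg(feature = "gecko_debug")]
pub fn shutdown_style_thread_pool(pool: rayon::ThreadPool) {
    // Leak-checking builds: tear the pool down so the atom table can be
    // shut down safely afterwards.
    drop(pool);
}

#[cfg(not(feature = "gecko_debug"))]
pub fn shutdown_style_thread_pool(pool: rayon::ThreadPool) {
    // Release builds: intentionally leak the pool; fast shutdown matters
    // more than cleanliness, and the process is about to exit anyway.
    std::mem::forget(pool);
}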
Assignee
Comment 14•5 years ago
Note that the key there is that the thread-pool destructor sync-waits on all the workers to finish their jobs.
Comment 15•5 years ago
Comment on attachment 9089464 [details]
Bug 1577439 - Shutdown Servo's thread-pool in leak-checking builds, leak the atom table elsewhere.
Comments in the Phabricator rev.
Updated•5 years ago
Comment 16•5 years ago
Backed out for causing perma leakcheck | tab process: negative leaks caught:
https://hg.mozilla.org/integration/autoland/rev/25ee7237b69916f8a42385887c3ccfb1723b68ab
Push with failures: https://treeherder.mozilla.org/#/jobs?repo=autoland&resultStatus=testfailed%2Cbusted%2Cexception%2Cretry%2Cusercancel%2Crunning%2Cpending%2Crunnable&revision=42ebd8a50978189a7247dc029de7d66d97e7bec9
Failure log example: https://treeherder.mozilla.org/logviewer.html#?job_id=264431920&repo=autoland
Comment 17•5 years ago
This also causes xpcshell tests that are part of the Windows x64 ASan builds to fail: https://treeherder.mozilla.org/logviewer.html#?job_id=264426812&repo=autoland
[task 2019-09-01T00:49:01.463Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestAddTask TEST-UNEXPECTED-FAIL
[task 2019-09-01T00:49:01.464Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestAddTaskFail PASSED
[task 2019-09-01T00:49:01.464Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestAddTaskMultiple PASSED
[task 2019-09-01T00:49:01.465Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestAddTest PASSED
[task 2019-09-01T00:49:01.466Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestAddTestAddTask TEST-UNEXPECTED-FAIL
[task 2019-09-01T00:49:01.466Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestAddTestFail PASSED
[task 2019-09-01T00:49:01.467Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNoRunTestEmptyTest PASSED
[task 2019-09-01T00:49:01.468Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNotSkipForAddTask PASSED
[task 2019-09-01T00:49:01.468Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testNotSkipForAddTest PASSED
[task 2019-09-01T00:49:01.469Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testPass PASSED
[task 2019-09-01T00:49:01.469Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testPassFail TEST-UNEXPECTED-FAIL
[task 2019-09-01T00:49:01.470Z] 00:49:01 INFO - check> ..\testing\xpcshell\selftest.py::XPCShellTestsTests::testRandomExecution TEST-UNEXPECTED-FAIL
Assignee
Comment 18•5 years ago
So I had misread rayon's documentation, and the threads do not join synchronously on drop.
There is a way to do that, using exit_handler and co. It's not 100% perfect because we can't wait for TLS destructors to run, but it seems good enough:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=eba97d22bd7d73cb127d2e68290347fd08519a83
I filed https://github.com/rayon-rs/rayon/issues/688 as a feature request to get what we want here, though I don't think it blocks this.
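For context, a minimal sketch of the exit_handler approach described here, assuming rayon's ThreadPoolBuilder::exit_handler API; the PoolShutdownLatch type and the function names are hypothetical. The handler counts workers down as they exit, and shutdown blocks until the count reaches zero.

// Hedged sketch of the exit_handler approach; ThreadPoolBuilder::exit_handler
// is real rayon API, the rest is illustrative.
use std::sync::{Arc, Condvar, Mutex};

struct PoolShutdownLatch {
    live_workers: Mutex<usize>,
    all_exited: Condvar,
}

fn build_pool(num_threads: usize) -> (rayon::ThreadPool, Arc<PoolShutdownLatch>) {
    let latch = Arc::new(PoolShutdownLatch {
        live_workers: Mutex::new(num_threads),
        all_exited: Condvar::new(),
    });
    let handler_latch = Arc::clone(&latch);
    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(num_threads)
        // Runs on each worker thread as it exits. Per the caveat above,
        // TLS destructors may still run after this fires.
        .exit_handler(move |_index| {
            let mut live = handler_latch.live_workers.lock().unwrap();
            *live -= 1;
            if *live == 0 {
                handler_latch.all_exited.notify_all();
            }
        })
        .build()
        .expect("failed to build thread pool");
    (pool, latch)
}

fn shutdown_pool(pool: rayon::ThreadPool, latch: &PoolShutdownLatch) {
    // Dropping the pool signals the workers to terminate but does not join
    // them, which is what this comment discovered the hard way.
    drop(pool);
    // Block until every worker has run the exit handler.
    let mut live = latch.live_workers.lock().unwrap();
    while *live != 0 {
        live = latch.all_exited.wait(live).unwrap();
    }
}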
Updated•5 years ago
Comment 19•5 years ago
Landed: https://hg.mozilla.org/integration/autoland/rev/ef66bc33c16697f613e5403be0d614be02455c2b
Backed out changeset a396ec8f44fd (bug 1577439) for failing valgrind-test on a CLOSED TREE.
Backout link: https://hg.mozilla.org/integration/autoland/rev/53ed94e16e564516f9e4d1bb3be05e3af9205cc4
Push with failures: https://treeherder.mozilla.org/#/jobs?repo=autoland&selectedJob=265527882&resultStatus=testfailed%2Cbusted%2Cexception&revision=a396ec8f44fd61729f209d138e08989b60cd739b
Log link: https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=265527882&repo=autoland&lineNumber=47554
Log snippet:
[task 2019-09-07T14:40:36.904Z] 14:40:36 INFO - 16:11.89 ==6681== Warning: set address range perms: large range [0x1197de7bc000, 0x11985e3bc000) (noaccess)
[task 2019-09-07T14:40:43.352Z] 14:40:43 INFO - 16:18.34 --6681-- Archiving syms at 0x1bccc180-0x1bd42c0b in /builds/worker/workspace/build/src/obj-firefox/security/nss/lib/freebl/freebl_freeblpriv3/libfreeblpriv3.so (have_dinfo 1)
[task 2019-09-07T14:40:43.352Z] 14:40:43 INFO - 16:18.34 --6681-- Scanning and archiving ExeContexts ...
[task 2019-09-07T14:40:44.062Z] 14:40:44 INFO - 16:19.05 --6681-- Scanned 4,312,686 ExeContexts, archived 1,991 ExeContexts
[task 2019-09-07T14:40:44.069Z] 14:40:44 INFO - 16:19.06 --6681-- Archiving syms at 0x1bc89830-0x1bcb84cc in /builds/worker/workspace/build/src/obj-firefox/security/nss/lib/softoken/softoken_softokn3/libsoftokn3.so (have_dinfo 1)
[task 2019-09-07T14:40:44.069Z] 14:40:44 INFO - 16:19.06 --6681-- Scanning and archiving ExeContexts ...
[task 2019-09-07T14:40:44.731Z] 14:40:44 INFO - 16:19.72 --6681-- Scanned 4,312,726 ExeContexts, archived 4,961 ExeContexts
[task 2019-09-07T14:40:45.060Z] 14:40:45 INFO - 16:20.05 --6681-- Archiving syms at 0x1741c180-0x17423148 in /lib/x86_64-linux-gnu/libnss_files-2.13.so (have_dinfo 1)
[task 2019-09-07T14:40:45.060Z] 14:40:45 INFO - 16:20.05 --6681-- Scanning and archiving ExeContexts ...
[task 2019-09-07T14:40:45.742Z] 14:40:45 INFO - 16:20.73 --6681-- Scanned 4,315,690 ExeContexts, archived 94 ExeContexts
[task 2019-09-07T14:40:45.757Z] 14:40:45 INFO - 16:20.75 --6681-- Archiving syms at 0x263e0000-0x263e30b8 in /lib/x86_64-linux-gnu/libnss_dns-2.13.so (have_dinfo 1)
[task 2019-09-07T14:40:45.757Z] 14:40:45 INFO - 16:20.75 --6681-- Scanning and archiving ExeContexts ...
[task 2019-09-07T14:40:46.403Z] 14:40:46 INFO - 16:21.39 --6681-- Scanned 4,315,701 ExeContexts, archived 74 ExeContexts
[task 2019-09-07T14:40:46.410Z] 14:40:46 INFO - 16:21.40 ==6681==
[task 2019-09-07T14:40:46.410Z] 14:40:46 INFO - 16:21.40 ==6681== HEAP SUMMARY:
[task 2019-09-07T14:40:46.410Z] 14:40:46 INFO - 16:21.40 ==6681== in use at exit: 1,786,345 bytes in 13,881 blocks
[task 2019-09-07T14:40:46.410Z] 14:40:46 INFO - 16:21.40 ==6681== total heap usage: 2,629,368 allocs, 2,615,487 frees, 1,732,931,463 bytes allocated
[task 2019-09-07T14:40:46.410Z] 14:40:46 INFO - 16:21.40 ==6681==
[task 2019-09-07T14:40:46.417Z] 14:40:46 INFO - 16:21.41 ==6681== Searching for pointers to 13,165 not-freed blocks
[task 2019-09-07T14:40:46.445Z] 14:40:46 INFO - 16:21.43 ==6681== Checked 12,675,000 bytes
[task 2019-09-07T14:40:46.445Z] 14:40:46 INFO - 16:21.43 ==6681==
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 TEST-UNEXPECTED-FAIL | valgrind-test | 8,224 bytes in 2 blocks are definitely lost at malloc / std::thread::local::fast::Key / style::bloom::StyleBloom / style::parallel::create_thread_local_context
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== 8,224 bytes in 2 blocks are definitely lost in loss record 8,182 of 8,204
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== at 0x4C2B280: malloc+112 (vg_replace_malloc.c:308)
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x1441C320: std::thread::local::fast::Key<T>::try_initialize+48 (src/libstd/sys/unix/alloc.rs:10)
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140ED670: style::bloom::StyleBloom<E>::new+144 (local.rs:411)
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F6FB4: style::parallel::create_thread_local_context+52 (context.rs:785)
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F63FA: <rayon_core::job::HeapJob<BODY> as rayon_core::job::Job>::execute+714 (parallel.rs:130)
[task 2019-09-07T14:40:47.183Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x14474057: rayon_core::registry::WorkerThread::wait_until_cold+615 (job.rs:59)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F5D27: rayon_core::scope::scope_fifo::{{closure}}+7783 (/builds/worker/workspace/build/src/third_party/rust/rayon-core/src/registry.rs:692)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F9975: <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute+197 (registry.rs:852)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x14474057: rayon_core::registry::WorkerThread::wait_until_cold+615 (job.rs:59)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x14474CA8: std::sys_common::backtrace::__rust_begin_short_backtrace+1816 (registry.rs:692)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x1447452D: core::ops::function::FnOnce::call_once{{vtable-shim}}+125 (mod.rs:470)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x144B3A1D: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once+61 (boxed.rs:746)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x144B5CC7: std::sys::unix::thread::Thread::new::thread_start+135 (boxed.rs:746)
[task 2019-09-07T14:40:47.184Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x4E3BB4F: start_thread+207 (pthread_create.c:304)
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x5CD9FBC: clone+108 (clone.S:112)
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 ==6681==
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 {
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 <insert_a_suppression_name_here>
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 Memcheck:Leak
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 match-leak-kinds: definite
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:malloc
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN3std6thread5local4fast12Key$LT$T$GT$14try_initialize17hc08d0b0f9903c9c9E
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN5style5bloom19StyleBloom$LT$E$GT$3new17h22f606c365cb41dfE
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN5style8parallel27create_thread_local_context17h437db8c885b33746E
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:ZN77$LT$rayon_core..job..HeapJob$LT$BODY$GT$$u20$as$u20$rayon_core..job..Job$GT$7execute17h331d9371cf63b2dfE
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN10rayon_core8registry12WorkerThread15wait_until_cold17h8f66355dbf518d33E
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:ZN10rayon_core5scope10scope_fifo28$u7b$$u7b$closure$u7d$$u7d$17h6f1bad6164180d8dE
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:ZN83$LT$rayon_core..job..StackJob$LT$L$C$F$C$R$GT$$u20$as$u20$rayon_core..job..Job$GT$7execute17h873ef4330d46bbe1E
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN10rayon_core8registry12WorkerThread15wait_until_cold17h8f66355dbf518d33E
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN3std10sys_common9backtrace28__rust_begin_short_backtrace17hfd4f88f17dfc7f23E
[task 2019-09-07T14:40:47.185Z] 14:40:47 INFO - 16:22.17 fun:_ZN4core3ops8function6FnOnce40call_once$u7b$$u7b$vtable.shim$u7d$$u7d$17h07d42e32b9b34dc4E
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 fun:ZN83$LT$alloc..boxed..Box$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$A$GT$$GT$9call_once17h42806b83647d4c79E
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 fun:_ZN3std3sys4unix6thread6Thread3new12thread_start17h4570080769500bcdE
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 fun:start_thread
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 fun:clone
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 }
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 TEST-UNEXPECTED-FAIL | valgrind-test | 12,368 bytes in 2 blocks are definitely lost at malloc / std::thread::local::fast::Key / style::sharing::StyleSharingCache / style::parallel::create_thread_local_context
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== 12,368 bytes in 2 blocks are definitely lost in loss record 8,187 of 8,204
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== at 0x4C2B280: malloc+112 (vg_replace_malloc.c:308)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x1441D20C: std::thread::local::fast::Key<T>::try_initialize+28 (src/libstd/sys/unix/alloc.rs:10)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140ED54C: style::sharing::StyleSharingCache<E>::new+124 (local.rs:411)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F6FA5: style::parallel::create_thread_local_context+37 (context.rs:783)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F63FA: <rayon_core::job::HeapJob<BODY> as rayon_core::job::Job>::execute+714 (parallel.rs:130)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x14474057: rayon_core::registry::WorkerThread::wait_until_cold+615 (job.rs:59)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F5D27: rayon_core::scope::scope_fifo::{{closure}}+7783 (/builds/worker/workspace/build/src/third_party/rust/rayon-core/src/registry.rs:692)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x140F9975: <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute+197 (registry.rs:852)
[task 2019-09-07T14:40:47.186Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x14474057: rayon_core::registry::WorkerThread::wait_until_cold+615 (job.rs:59)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x14474CA8: std::sys_common::backtrace::__rust_begin_short_backtrace+1816 (registry.rs:692)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x1447452D: core::ops::function::FnOnce::call_once{{vtable-shim}}+125 (mod.rs:470)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x144B3A1D: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once+61 (boxed.rs:746)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x144B5CC7: std::sys::unix::thread::Thread::new::thread_start+135 (boxed.rs:746)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x4E3BB4F: start_thread+207 (pthread_create.c:304)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681== by 0x5CD9FBC: clone+108 (clone.S:112)
[task 2019-09-07T14:40:47.187Z] 14:40:47 INFO - 16:22.17 ==6681==
Assignee
Comment 20•5 years ago
Landed with a valgrind suppression, as that's an intentional leak.
Comment 21•5 years ago
https://hg.mozilla.org/integration/autoland/rev/7bebeb7a62ad388eca65cc9bbe45265e6e3c7382
https://hg.mozilla.org/mozilla-central/rev/7bebeb7a62ad
Comment 22•5 years ago
Is this something we should consider uplifting or can it ride with Fx71 to release?
Assignee
Comment 23•5 years ago
Probably can ride the trains. It's not risky but it's also not exploitable afaict.
Updated•5 years ago
Updated•5 years ago
Updated•5 years ago
Updated•4 years ago
Updated•3 years ago