Closed Bug 1379814 Opened 7 years ago Closed 7 years ago

Prevent dining philosophers deadlock in HelperThreads in a principled way

Categories

(Core :: JavaScript Engine, enhancement)


Tracking


RESOLVED FIXED
mozilla57
Tracking Status
firefox57 --- fixed

People

(Reporter: lth, Assigned: lth)

References

(Blocks 1 open bug)

Details

Attachments

(1 file, 1 obsolete file)

In bug 1371216 I introduced a work-stealing scheme to avoid a dining philosophers deadlock problem in the helper threads system: if a helper thread wants to wait on another helper thread to do work for it, then it must be careful not to do so unless there is actually an available helper thread to do that work. Work stealing avoids that problem by ensuring forward progress even when only the manager thread is available. The work stealing was backed out in bug 1373414 because it turned out to be brittle: tasks should not change personalities (as they do with work stealing), because there is code in the system that scans the task list looking for tasks with specific personalities and uses the absence of any as a termination condition. Attempts to work around that remained brittle.

A comment (https://dxr.mozilla.org/mozilla-central/rev/91c943f7373722ad4e122d98a2ddd6c79708b732/js/src/wasm/WasmGenerator.cpp#346) describes the issue and how it was resolved previously: there are at least two helper threads in the system (when threads are active) and only one manager thread can be active at one time by construction. That comment assumes that no other parts of the system create manager threads. That happens to be true at the moment, as far as I can tell, but will cease to be true with Wasm tier 2 compilation (bug 1277562), which is why the work stealing was implemented in the first place.

In bug 1373414 comment 14 a better solution is suggested: we incorporate information about resource needs into the helper threads system so that it can schedule threads properly depending on available resources. If a manager thread is active that needs other worker threads, trying to schedule another manager thread should fail (i.e., the thread should not be scheduled until more resources are available). In this scheme the world is therefore divided into manager tasks, which run on threads that wait on other threads, and worker tasks, which do not, and when a thread is running a manager task there must be at least one thread that is not running a manager task.
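To make the proposed rule concrete, here is a minimal standalone sketch (the names ThreadPoolState, busyWorkers, busyManagers, etc. are made up for illustration; this is not the actual HelperThreads code). The admission check only lets a manager task run if, once it has taken a thread, at least one thread remains that is not running a manager task:

#include <cstddef>

// Illustrative only: a resource-aware admission check for a fixed-size pool.
// A "manager" task waits on other tasks; a "worker" task does not.
struct ThreadPoolState {
    size_t threadCount;    // total helper threads in the pool
    size_t busyWorkers;    // threads currently running worker tasks
    size_t busyManagers;   // threads currently running manager tasks

    size_t idle() const { return threadCount - busyWorkers - busyManagers; }

    // A worker task just needs an idle thread.  A manager task additionally
    // requires that, after it takes a thread, at least one thread is left
    // that is not running a manager task; otherwise it could starve.
    bool canDispatch(bool isManager) const {
        if (idle() == 0)
            return false;
        if (isManager)
            return idle() >= 2 || busyWorkers > 0;
        return true;
    }
};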
Blocks: 1277562
There's another aspect to this that I appear to run into occasionally with tiered compilation. During shutdown, the helper thread shutdown code eagerly shuts down threads that are not currently doing work, and in doing so it is competing (safely) for resources with any ongoing compilation work that is in the process of shutting down. It is possible for the system to get into a state where all the available worker threads have been reaped, with the tier-2 generator thread sitting alone waiting for work to be done. The right fix here is for the helper thread shutdown code either to be aware of ongoing tasks and not do anything until those are done, or to be aware of those tasks' resource needs and respect them.
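One way the "be aware of ongoing tasks" option could look, as a hedged standalone sketch (hypothetical names, not the actual shutdown path): the shutdown code blocks on a condition variable until no generator task has work in flight, and only then starts reaping idle threads.

#include <condition_variable>
#include <mutex>

// Illustrative shutdown ordering: don't tear down worker threads while a
// generator/manager task is still waiting on work those threads must do.
struct ShutdownCoordinator {
    std::mutex lock;
    std::condition_variable cond;
    int activeGeneratorTasks = 0;   // e.g. in-flight tier-2 generators

    void taskStarted() {
        std::lock_guard<std::mutex> g(lock);
        ++activeGeneratorTasks;
    }
    void taskFinished() {
        std::lock_guard<std::mutex> g(lock);
        --activeGeneratorTasks;
        cond.notify_all();
    }
    // Called by the shutdown path before any helper threads are reaped.
    void waitForGenerators() {
        std::unique_lock<std::mutex> g(lock);
        cond.wait(g, [&] { return activeGeneratorTasks == 0; });
    }
};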
Could there be a general rule that any helper thread task that wants to wait on any other helper thread task has to wait on a condvar that is notified on shutdown and then check 'terminate' in the condvar loop?
That is one of several possibilities, but I'll have to see - there's a bunch of crufty code in the helper threads system that needs to be cleaned up. For example, we use the worklists as stacks, which is fine as long as all tasks on any given worklist have exactly the same needs, but that is not true for parse tasks, I believe - module parse tasks need helper threads but other parse tasks may not(?). In any case this can be tidied up for the benefit of all. I think at that point the termination logic actually becomes confined to the scheduler and does not need the condvar logic, but TBD.
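For reference, the condvar rule suggested above could be sketched like this (hypothetical names, standalone; not the actual HelperThreads code): any task that blocks on another helper-thread task re-checks a terminate flag each time it wakes, so shutdown can always unblock it.

#include <condition_variable>
#include <mutex>

struct WaitState {
    std::mutex lock;
    std::condition_variable cond;
    bool workDone = false;
    bool terminating = false;   // set (and cond notified) by shutdown
};

// Returns false if we were woken for shutdown rather than completed work.
bool waitForHelperWork(WaitState& ws) {
    std::unique_lock<std::mutex> g(ws.lock);
    while (!ws.workDone) {
        if (ws.terminating)
            return false;       // bail out instead of waiting forever
        ws.cond.wait(g);
    }
    return true;
}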
I think there are two separable problems here, and because they were similar I tried to solve them with the same mechanism, but that's wrong - it just adds complexity.

The first problem is that a master/manager/generator thread should not be allowed to take the last remaining worker thread, because then it'll starve. That's easily fixed in the HelperThreads system by local changes to how we determine whether a task can be scheduled: a master thread does not get to take the last available thread. (One can discuss whether it should not get to take one of the last N threads depending on what else is going on, but that's a question of performance, not termination.)

The second problem is that there is a shutdown race, as in comment 1. This is probably most easily fixed in the manner it was fixed for parse threads, namely, by having CancelTier2GeneratorTasks (from bug 1277562) block until the one remaining active Tier2 task, if any, is completed, before we terminate any threads. (For parse threads this is a slightly simpler problem because there we cancel parse tasks before we shut down the thread system, and so the process does not have the race described in comment 1.)

Just to close the loop on this: I think the sketch Luke presented in comment 2 does not do anything for us here, because the problems described in this bug are not about the thread that waits on other threads; they're about the latter threads being available at all - not being allocated, and not being terminated.
Attached patch bug1379814-master-tasks.patch (obsolete) — Splinter Review
This solves the first problem: master tasks must not take the last available thread.

Asking only for feedback on this because:

- I don't want to land until I've rebased my queue for bug 1277562 on top of this and fixed the other problem and demonstrated that things are fine
- I'm currently flagging every parse task as a master task but only parse tasks that run asm.js compilation really are, and I would like to fix that if possible, but that's a secondary concern.

This is really a tiny change. It looks larger, because in the process of implementing this I elected to clean up the task selection logic in HelperThread::threadLoop. It really irked me that we would call the various predicates twice in most cases (and in different orders, to boot) but avoid it in special cases when that would not be correct (very ad hoc). There is now a single "if" that implements priority by calling the predicates once in some defined order, and some comments to outline what's going on.
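A much-simplified sketch of the "single if" shape described above (the predicate names and the priority order shown here are illustrative, not copied from the patch): each predicate is consulted exactly once, in one explicit order, while the state lock is held, and the chosen kind is then executed.

enum class TaskKind { None, GCParallel, IonCompile, WasmCompile, Parse, Compress };

// State/Lock are templated so this sketch stands alone; in the real code
// they would be the global helper-thread state and the helper-thread lock.
template <typename State, typename Lock>
TaskKind selectTask(State& state, const Lock& lock) {
    // Priority is encoded once, here, rather than scattered across multiple
    // re-evaluations of the same predicates.
    if (state.canStartGCParallelTask(lock))
        return TaskKind::GCParallel;
    if (state.canStartIonCompileTask(lock))
        return TaskKind::IonCompile;
    if (state.canStartWasmCompileTask(lock))
        return TaskKind::WasmCompile;
    if (state.canStartParseTask(lock))
        return TaskKind::Parse;
    if (state.canStartCompressionTask(lock))
        return TaskKind::Compress;
    return TaskKind::None;
}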
Attachment #8885937 - Flags: feedback?(shu)
Attachment #8885937 - Flags: feedback?(luke)
Makes sense. I realized that, since ModuleGenerator owns the CompileTasks sitting in various Vectors, if the Tier2Generator task simply aborted early (as suggested in comment 2), it would still need to take action to avoid deleting the memory of an in-progress CompileTask and this could get complicated (since you'd have to be careful not to wait on a new task to start b/c there might not be any helper threads etc). On that topic, if we switch to having the GlobalHelperState owning the Tier2Generator task (as suggested above), it'll be necessary to delete it during CancelTier2GeneratorTasks().
Comment on attachment 8885937 [details] [diff] [review]
bug1379814-master-tasks.patch

Review of attachment 8885937 [details] [diff] [review]:
-----------------------------------------------------------------

Much improved! It's kind of nuts that in the worst case we're making M calls to canX(), each of which is scanning N threads, all while holding a global lock. It seems like we should maintain a locked array of counters (1 per ThreadType + 1 counter for idle, the sum always adding up to M) and use this to make the decision instead...

::: js/src/vm/HelperThreads.cpp
@@ +1229,5 @@
> {
> +    // Parse tasks that end up compiling asm.js in turn may use Wasm compilation
> +    // threads to generate machine code. Assume that all parse tasks are master
> +    // tasks, for now. To do better we need specific information from the last
> +    // task in the list that it is not an asm.js compilation.

Unfortunately I don't think it's possible to know whether a given ParseTask is an asm.js parse task or not before starting that task. It's just a big script that may contain 0...N nested asm.js functions encountered during parsing. So I'd remove the "To do better..." part and just say we're being conservative since the ParseTask might be a master.
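The counter scheme suggested here could look roughly like the following standalone sketch (the names and the admission rule are made up for illustration; the real followup may differ). The counters live under the same lock as the worklists, so answering "how many threads of kind X are running?" becomes O(1) instead of a scan over all threads.

#include <array>
#include <cstddef>

enum ThreadKind : size_t { Idle, GCParallel, IonCompile, WasmCompile, Parse, Compress, KindCount };

struct ThreadCounters {
    // One counter per kind plus one for idle; the sum always equals the
    // number of helper threads.
    std::array<size_t, KindCount> count{};

    void startRunning(ThreadKind k) { --count[Idle]; ++count[k]; }
    void stopRunning(ThreadKind k)  { --count[k];    ++count[Idle]; }

    // Example admission check: respect the per-kind thread limit, and keep
    // a master task from taking the last idle thread.
    bool canStart(ThreadKind k, size_t maxOfKind, bool isMaster) const {
        if (count[Idle] == 0 || count[k] >= maxOfKind)
            return false;
        if (isMaster && count[Idle] == 1)
            return false;
        return true;
    }
};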
Attachment #8885937 - Flags: feedback?(luke) → feedback+
(In reply to Luke Wagner [:luke] from comment #7)
> Comment on attachment 8885937 [details] [diff] [review]
> bug1379814-master-tasks.patch
>
> Much improved! It's kind of nuts that in the worst case we're making M calls
> to canX() each of which is scanning N threads, all while holding a global
> lock. It seems like we should maintain a locked array of counters (1 per
> ThreadType + 1 counter for idle, the sum always adding up to M) and use this
> to make the decision instead...

Totally agree. I'll file a followup bug for that issue (which is not exactly very difficult but seems like it might be a bit invasive) since I need to have at least the current patch to move tiering along before it goes stale...
Comment on attachment 8885937 [details] [diff] [review]
bug1379814-master-tasks.patch

Review of attachment 8885937 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/vm/HelperThreads.cpp
@@ +2162,5 @@
> +    // lists). Unlocking the HelperThreadState between task selection
> +    // and execution is not well-defined.
> +
> +    if (HelperThreadState().canStartGCParallelTask(lock))
> +        task = js::oom::THREAD_TYPE_GCPARALLEL;

As a followup, please move these constants out of namespace oom, now that they are used for non-OOM reasons.

@@ -2143,4 @@
>         HelperThreadState().wait(lock, GlobalHelperThreadState::PRODUCER);
>     }
>
> -    if (HelperThreadState().canStartGCParallelTask(lock)) {

Ah, the dangers of naming something like a simple predicate. With a name like that it's gotta be constant time, right?

::: js/src/vm/HelperThreads.h
@@ +152,5 @@
>     CONSUMER,
>
> +    // For notifying helper threads doing the work that they may be able to
> +    // make progress, i.e., a work item has been enqueued and an idle helper
> +    // thread may pick up the work item and perform it.

Great clarifying comments.
Attachment #8885937 - Flags: feedback?(shu) → feedback+
Blocks: 1388752
Blocks: 1388756
Apart from the comment change you requested in your previous remarks, this is identical to the patch that you saw last time. (Dependent bugs have been filed for improving efficiency by reducing task list scanning and for moving the thread identifiers out of the js::oom namespace.)
Attachment #8885937 - Attachment is obsolete: true
Attachment #8895430 - Flags: review?(luke)
Comment on attachment 8895430 [details] [diff] [review]
bug1379814-master-tasks-v2.patch

Review of attachment 8895430 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/vm/HelperThreads.cpp
@@ +980,2 @@
>     if (maxThreads >= threadCount)
>         return true;

It seems like we need to do the loop below if isMaster=true (otherwise we have no guarantee there is any idle thread), so I'd think this if requires '&& !isMaster'.
Attachment #8895430 - Flags: review?(luke) → review+
(In reply to Luke Wagner [:luke] from comment #11)
> Comment on attachment 8895430 [details] [diff] [review]
> bug1379814-master-tasks-v2.patch
>
> It seems like we need to do the loop below if isMaster=true (otherwise we
> have no guarantee there is any idle thread), so I'd think this if requires
> '&& !isMaster'.

Good find. I've been lucky because threadCount >= 5 always and maxThreads is low (I think currently always 1) for master tasks, so !isMaster has been implied when this condition is true.
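For concreteness, a hedged sketch of the corrected condition (this is not the real checkTaskThreadLimit, which also counts the threads already running tasks of the given kind; the idleThreads parameter here stands in for that bookkeeping):

#include <cstddef>

// Sketch only: the capacity shortcut must not apply to master tasks, because
// a master also needs an idle thread left over for the work it waits on.
bool hasCapacitySketch(size_t maxThreads, size_t threadCount,
                       size_t idleThreads, bool isMaster)
{
    if (maxThreads >= threadCount && !isMaster)
        return true;                        // the shortcut discussed above
    if (isMaster)
        return idleThreads >= 2;            // one for us, one left over
    return idleThreads >= 1;
}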
The patch was tested by this try run (with lots of other patches on top of it, some of which stress this patch and none of which change it materially): https://treeherder.mozilla.org/#/jobs?repo=try&revision=dc133eb38b868073f0035f9e92399f68348fdd69
Pushed by lhansen@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/806a941c8580
Clean up task selection logic, implement master task concept. r=luke
Backed out for asserting at HelperThreads.cpp:997, e.g. in chrome tests on OS X:
https://hg.mozilla.org/integration/mozilla-inbound/rev/c4eb424ae51815c00c532dfb550d463b29dcd63a

Push with failures: https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=806a941c858051eceb8076b056fb278dcc259647&filter-resultStatus=testfailed&filter-resultStatus=busted&filter-resultStatus=exception&filter-resultStatus=retry&filter-resultStatus=usercancel&filter-resultStatus=runnable
Failure log: https://treeherder.mozilla.org/logviewer.html#?job_id=122496269&repo=mozilla-inbound

01:06:42 INFO - GECKO(1662) | Assertion failure: idle > 0, at /home/worker/workspace/build/src/js/src/vm/HelperThreads.cpp:997
01:06:42 ERROR - TEST-UNEXPECTED-FAIL | browser/base/content/test/chrome/test_aboutCrashed.xul | application terminated with exit code 1
01:07:00 INFO - PROCESS-CRASH | browser/base/content/test/chrome/test_aboutCrashed.xul | application crashed [@ bool js::GlobalHelperThreadState::checkTaskThreadLimit<js::SourceCompressionTask*>(unsigned long, bool) const]
01:07:00 INFO - Operating system: Mac OS X 10.10.5 14F27
01:07:00 INFO - Crash reason: EXC_BAD_ACCESS / KERN_INVALID_ADDRESS
01:07:00 INFO - Crash address: 0x0

Thread 35 (crashed)
 0 XUL!bool js::GlobalHelperThreadState::checkTaskThreadLimit<js::SourceCompressionTask*>(unsigned long, bool) const [HelperThreads.cpp:806a941c8580 : 997]
 1 XUL!js::GlobalHelperThreadState::startHandlingCompressionTasks(js::AutoLockHelperThreadState const&) [HelperThreads.cpp:806a941c8580 : 1249]
 2 XUL!js::gc::GCRuntime::beginMarkPhase(JS::gcreason::Reason, js::AutoLockForExclusiveAccess&) [jsgc.cpp:806a941c8580 : 4050]
 3 XUL!js::gc::GCRuntime::incrementalCollectSlice(js::SliceBudget&, JS::gcreason::Reason, js::AutoLockForExclusiveAccess&) [jsgc.cpp:806a941c8580 : 6577]
 4 XUL!js::gc::GCRuntime::gcCycle(bool, js::SliceBudget&, JS::gcreason::Reason) [jsgc.cpp:806a941c8580 : 6911]
 5 XUL!js::gc::GCRuntime::collect(bool, js::SliceBudget, JS::gcreason::Reason) [jsgc.cpp:806a941c8580 : 7060]
 6 XUL!js::gc::GCRuntime::gc(JSGCInvocationKind, JS::gcreason::Reason) [jsgc.cpp:806a941c8580 : 7127]
 7 XUL!(anonymous namespace)::WorkerThreadPrimaryRunnable::Run() [RuntimeService.cpp:806a941c8580 : 2941]
 8 XUL!nsThread::ProcessNextEvent(bool, bool*) [nsThread.cpp:806a941c8580 : 1446]
 9 XUL!NS_ProcessNextEvent(nsIThread*, bool) [nsThreadUtils.cpp:806a941c8580 : 480]
10 XUL!mozilla::ipc::MessagePumpForNonMainThreads::Run(base::MessagePump::Delegate*) [MessagePump.cpp:806a941c8580 : 339]
11 XUL!MessageLoop::Run() [message_loop.cc:806a941c8580 : 319]
12 XUL!nsThread::ThreadFunc(void*) [nsThread.cpp:806a941c8580 : 506]
13 libnss3.dylib!_pt_root [ptthread.c:806a941c8580 : 216]
14 libsystem_pthread.dylib!_pthread_body
15 libsystem_pthread.dylib!_pthread_start
16 libsystem_pthread.dylib!thread_start
17 libnss3.dylib + 0x140c50
Flags: needinfo?(lhansen)
The assertion is incorrect - idle threads can be zero, because the predicate can be called from a thread other than a helper thread. Notably, the code that handles compression tasks has its own little mini-scheduler that runs on a separate thread.
Flags: needinfo?(lhansen)
Pushed by lhansen@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/983eaa990fe2
Clean up task selection logic, implement master task concept (take 2). r=luke
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla57