Closed
Bug 1450644
Opened 7 years ago
Closed 7 years ago
WorkerRef should make APIs able to work until the worker is completely terminated
Categories
(Core :: DOM: Workers, defect, P2)
Tracking
Status: RESOLVED FIXED
Target Milestone: mozilla61
firefox61: fixed
People
(Reporter: baku, Assigned: baku)
Attachments
(6 files)
6.13 KB, patch | asuth: review-
5.56 KB, patch | asuth: review+
1.99 KB, patch | asuth: review+
3.72 KB, patch | asuth: review+
8.53 KB, patch | asuth: review+
8.34 KB, patch | asuth: review+
Some APIs (MessagePort, BroadcastChannel) behave incorrectly after self.close().
MessagePort/BroadcastChannel do not dispatch events after a self.close(), and this is against the spec. Note that this is not a regression: the problem existed before the introduction of WorkerRef.
But now we can fix it simply by changing how WorkerRef is created and how its callback is executed.
The new approach is: WorkerRef should follow the JS execution. As long as JS code is running, all APIs should behave correctly. This is done with 2 changes:
1. WorkerRef can be created until the Web Worker enters the Canceling state. This means a WorkerRef can be created as long as JS code is running (including after a self.close()). Once the Web Worker enters the Canceling state, no JS code runs anymore.
2. The WorkerRef callback is executed only when the Web Worker's state advances to Canceling. Only at that point should components start their shutdown procedure.
I'm going to submit several patches with tests.
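For illustration, a minimal sketch of the behavior at stake (hypothetical files, not one of the attached tests): after self.close(), the current script keeps running and outbound messages must still be delivered.

// main.js (hypothetical)
const worker = new Worker("worker.js");
worker.onmessage = (e) => console.log(e.data); // should log "sent after close()"

// worker.js (hypothetical)
self.close();                      // sets the closing flag; the running script continues
postMessage("sent after close()"); // targets the owner's event loop, so it is still delivered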
Assignee
Comment 1•7 years ago
Andrew, how familiar are you with worker shutdown?
Usually smaug helps me with these patches, but he is busy with event-handling work.
Attachment #8964247 - Flags: review?(bugmail)
Assignee
Comment 2•7 years ago
Attachment #8964248 - Flags: review?(bugmail)
Assignee
Comment 3•7 years ago
Attachment #8964249 - Flags: review?(bugmail)
Updated•7 years ago
Priority: -- → P2
Comment 4•7 years ago
(In reply to Andrea Marchesini [:baku] from comment #0)
> Some APIs (MessagePort, BroadcastChannel) have a wrong behavior after
> self.close().
> MessagePort/BroadcastChannel do not dispatch events after a self.close(),
> and this is against the spec. Note that this is not a regression, because
> this problem was there also before the introduction of WorkerRef.
Affirming, my understanding of the spec coverage is: the operating model for workers is that self.close() sets the closing flag, but the event loop isn't terminated until it has been drained. Specifically:
- The event processing model's definition of the "event loop" at https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model is in charge.
- Step 8 is key; it says: "If this is a worker event loop (i.e. one running for a WorkerGlobalScope), but there are no tasks in the event loop's task queues and the WorkerGlobalScope object's closing flag is true, then destroy the event loop, aborting these steps, resuming the run a worker steps described in the Web workers section below."
- "Terminate a worker" is a different thing. Defined at the bottom of https://html.spec.whatwg.org/multipage/workers.html#worker-processing-model it calls for discarding the contents of the event loop without processing. (This happens when its owner stops being "fully active" or Worker.terminate() is called by its owner.)
Comment 5•7 years ago
Comment on attachment 8964247 [details] [diff] [review]
part 1 - MessagePort
Review of attachment 8964247 [details] [diff] [review]:
-----------------------------------------------------------------
You seem to have left the worker-close.html file out of the patch. This is r=asuth modulo that, but as a good WPT citizen I should review before potential uplift, so minusing for now.
::: dom/messagechannel/MessagePort.cpp
@@ +297,5 @@
> StrongWorkerRef::Create(workerPrivate, "MessagePort",
> [self]() { self->CloseForced(); });
> if (NS_WARN_IF(!strongWorkerRef)) {
> + // The worker is shutting down.
> + aRv.Throw(NS_ERROR_FAILURE);
Restating: Because of the change in StrongWorkerRef::Create so the WorkerPrivate::AddHolder(,aFailStatus) aFailStatus value goes from Closing to Canceling, we will now successfully get a StrongWorkerRef in cases where self.close() has been called where previously we would not. This allows us to more aggressively fail and throw here rather than silently creating an already-closed port.
Note that the change from Closing to Canceling also means the Terminating case will now also succeed where it previously would not return a ref. As covered in comment 4, however, the event loop will be cleared in these cases, so it nets out to be the same.
::: dom/workers/WorkerRef.cpp
@@ +69,5 @@
> Notify(WorkerStatus aStatus) override
> {
> MOZ_ASSERT(mWorkerRef);
>
> + if (aStatus < Canceling) {
Restating: The idiom for WorkerHolder::Notify is to remove yourself as a holder when it's appropriate, ignoring lower levels of WorkerStatus that you don't care about. As :baku describes in comment 0, the point here is to keep worker refs around as long as JS can be executing, hence the change to aFailStatus=Canceling from aFailStatus=Closing. Accordingly, we ignore anything less than Canceling. (And we always return true because returning false gets you a spammy NS_WARNING.)
::: testing/web-platform/meta/MANIFEST.json
@@ +360235,5 @@
> "/webmessaging/message-channels/close.html",
> {}
> ]
> ],
> + "webmessaging/message-channels/worker-close.html": [
It looks like you forgot to attach this file to the patch.
::: testing/web-platform/tests/webmessaging/message-channels/worker-post-after-close.html
@@ +4,5 @@
> +<script src="/resources/testharness.js"></script>
> +<script src="/resources/testharnessreport.js"></script>
> +<script>
> +
> +async_test(t => {
I suggest adding an explanatory block comment like the following. It's a little overkill, but I don't see it hurting:
The Worker Event Loop should not be shut down until its task queue is empty, as checked after running a task (and microtask checkpoint). And postMessage does not check the closing state of the recipient globals. This means that a Worker should be able to invoke postMessage() targeted at itself after calling self.close() and still receive the resulting event. This test creates a Worker, postMessage()ing it a MessagePort to use to indicate test success. The worker closes itself when it receives the results port, then postMessage()s itself, sending a success result if the message event is received.
@@ +10,5 @@
> + onmessage = function(e) {
> + close();
> + var mc = new MessageChannel();
> + mc.port1.onmessage = function() {
> + e.ports[0].postMessage(true);
Given the magic nature of MessageEvent.ports and the choice to not include the port as part of the serialized data payload, a comment like the following might be helpful for those trying to understand the test:
// MessageEvent.ports[0] is the first transferred MessagePort, which need not be part of the deserialized MessageEvent.data payload.
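For instance (hypothetical snippet):

const mc = new MessageChannel();
// port2 travels in the transfer list, not in the serialized data payload;
// the receiver finds it on MessageEvent.ports[0].
worker.postMessage("here is a port", [mc.port2]);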
Attachment #8964247 - Flags: review?(bugmail) → review-
Assignee
Comment 6•7 years ago
> You seem to have left the worker-close.html file out of the patch. This is
No, I haven't. I just renamed it. The patch says:
diff --git a/testing/web-platform/tests/webmessaging/message-channels/worker.html b/testing/web-platform/tests/webmessaging/message-channels/worker-close.html
rename from testing/web-platform/tests/webmessaging/message-channels/worker.html
rename to testing/web-platform/tests/webmessaging/message-channels/worker-close.html
Flags: needinfo?(bugmail)
Comment 7•7 years ago
Comment on attachment 8964248 [details] [diff] [review]
part 2 - BroadcastChannel
Review of attachment 8964248 [details] [diff] [review]:
-----------------------------------------------------------------
Restating: BCPostMessageRunnable existed because PBackground actor creation was async. Interestingly, not for the obvious reason of actor availability, but instead for the scenario :baku described in https://bugzilla.mozilla.org/show_bug.cgi?id=966439#c41:
var x = new BroadcastChannel('a');
var y = new BroadcastChannel('a');
x.postMessage('foobar');
Because the actor is now synchronously constructed inside the BroadcastChannel constructor, this is no longer necessary.
In any event, that's all a bit of a distraction from the bug-relevant part of this patch which is the same set of changes to MessagePort.cpp from part 1:
- Because we will now get a WorkerRef during Closing, we can now more aggressively throw an error if we don't get a worker ref.
- Correspondingly, we have to update the InitializeRunnable's WorkerMainThreadRunnable::Dispatch()'s aFailStatus to be Canceling for the syncloop it runs.
::: testing/web-platform/tests/webmessaging/broadcastchannel/workers.html
@@ +134,5 @@
> + t.done();
> + }
> +
> + var workerBlob = new Blob([workerCode.toString() + ";workerCode();"], {type:"application/javascript"});
> + new Worker(URL.createObjectURL(workerBlob));
nit: Elsewhere in the file we add onmessage handlers to the Worker, which should keep their ports in this global alive and thereby keep the Worker a "protected worker". Given the ordering of https://html.spec.whatwg.org/multipage/workers.html#worker-processing-model it's theoretically possible for the closing flag of the worker to be set before we start executing it. It doesn't really matter for the test because we call close() first thing and we don't start the event loop until after we run the worker's script, but it seems appropriate to either add a comment about why people shouldn't be concerned or just add an empty onmessage handler that says "keep worker protected until it closes itself" or something like that.
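i.e., something like (sketch):

worker.onmessage = function() {
  // Keep the worker protected until it closes itself.
};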
Attachment #8964248 - Flags: review?(bugmail) → review+
Comment 8•7 years ago
Comment on attachment 8964249 [details] [diff] [review]
part 3 - WebSocket
Review of attachment 8964249 [details] [diff] [review]:
-----------------------------------------------------------------
Restating: Overall, the same rationale as part 1 and part 2 for consumer changes; switch to Canceling from Closing.
::: dom/websocket/WebSocket.cpp
@@ +618,5 @@
> // web socket count here.
> if (mWebSocket->GetOwner()) {
> mWebSocket->GetOwner()->UpdateWebSocketCount(-1);
> }
> + } else {
Restating: This guard was added in bug 1446366 because it was possible for RegisterWorkerRef to fail to create a StrongWorkerRef, which would then result in Disconnect() being called without the ConnectRunnable ever being created or dispatched so there would be no need to disconnect. Since the StrongWorkerRef should always be created now (per comment 0 rationale) AND the RegisterWorkerRef caller in the next hunk will now throw and return a nullptr instead of the websocket, the guard is no longer needed.
Attachment #8964249 - Flags: review?(bugmail) → review+
Comment 9•7 years ago
Comment on attachment 8964247 [details] [diff] [review]
part 1 - MessagePort
Review of attachment 8964247 [details] [diff] [review]:
-----------------------------------------------------------------
(In reply to Andrea Marchesini [:baku] from comment #6)
> > You seem to have left the worker-close.html file out of the patch. This is
>
> No, I haven't. I just rename it. The patch says:
Whoops! Sorry. Very much looking forward to our adoption of phabricator. (Although if I paid more attention to the manifest, I would have noticed the rename too. Luckily splinter fails to show the incriminating evidence in the bugzilla comments...)
Attachment #8964247 - Flags: review- → review+
Updated•7 years ago
Flags: needinfo?(bugmail)
Assignee
Comment 10•7 years ago
The 3 patches introduce an interesting bug.
SharedWorker has its own MessagePort, and this port releases its WorkerRef in the Closing state. Now we want to release it in the Canceling state. But here is the bug:
When a Worker calls self.close(), it enters the Closing state: the WorkerHolders/WorkerRefs are notified and timers are canceled. Then the worker waits for control runnables. If no ControlRunnables are dispatched, the worker waits forever.
I'm sure I can reproduce the same bug using other APIs.
The fix is:
1. When the worker enters the Closing state, it dispatches a normal runnable, CancelingRunnable (dispatching it is a bit tricky, because WorkerRunnables are not dispatched at that stage of the shutdown).
2. When that runnable is executed, the current chunk of JS code has completed and we are back in the worker thread's event loop. The CancelingRunnable triggers Cancel() from the parent thread, and the shutdown procedure continues.
3. But if the code is stuck in an infinite loop, the CancelingRunnable never runs. For this scenario we have an nsITimer (mCancelingTimer), managed by the parent, that calls Cancel() after 30 seconds. This third point is done in the following patch; see the sketch below.
This is part of a set of patches to clean up web worker shutdown, but I need it now because otherwise these 3 patches cannot land.
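A minimal sketch of the point-3 scenario (hypothetical worker script):

// worker.js (hypothetical): the CancelingRunnable queued at Closing never
// runs because we never return to the event loop; only the parent-side
// mCancelingTimer can end this worker, by calling Cancel() after the timeout.
self.close();
while (true) {}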
Attachment #8964998 - Flags: review?(bugmail)
Assignee
Comment 11•7 years ago
Attachment #8965001 - Flags: review?(bugmail)
Comment 12•7 years ago
(In reply to Andrea Marchesini [:baku] from comment #0)
> Some APIs (MessagePort, BroadcastChannel) have a wrong behavior after
> self.close().
> MessagePort/BroadcastChannel do not dispatch events after a self.close(),
> and this is against the spec. Note that this is not a regression, because
> this problem was there also before the introduction of WorkerRef.
Can you elaborate on this / provide spec support or link to the issue where a change is planned?
I believe the following to be true based on the spec:
1. We can create a MessageChannel on a post-close() worker.
2. We can postMessage() on the ports of that channel, resulting in events that will be dispatched in other globals. The part 1 fix corrects our behavior here.
3. BUT messages received on those ports will never result in dispatched events following the call to close(), because when we set the closing flag the spec prohibits any future tasks from being enqueued and requires us to clear out any existing tasks. (My excerpts in comment 4 were misleading; indeed the event loop won't terminate until there are no more events, but the spec does call for clearing the event loop for close() too. Please see citations below.)
This may be what you meant in your description above, but the part 1 test currently relies on us dispatching an event after close() is invoked. To make the test correct, I think the worker needs to do something like:
onmessage = function(e) {
  close();
  const nmc = new MessageChannel();
  // Ship the port to the page, where it will assert that it received the
  // port and then `42`. (Note the transfer list: the port must be passed
  // as [nmc.port2], not bare.)
  e.ports[0].postMessage('', [nmc.port2]);
  nmc.port1.postMessage(42);
}
## The closing flag is set synchronously and clears the task queues.
https://html.spec.whatwg.org/multipage/workers.html#close-a-worker
"""
To close a worker, given a workerGlobal, run these steps:
1. Discard any tasks that have been added to workerGlobal's event loop's task queues.
2. Set workerGlobal's closing flag to true. (This prevents any further tasks from being queued.)
The close() method, when invoked, must close a worker with this DedicatedWorkerGlobalScope object.
"""
Same deal for SharedWorkers:
https://html.spec.whatwg.org/multipage/workers.html#dom-sharedworkerglobalscope-close
"""
The close() method, when invoked, must close a worker with this SharedWorkerGlobalScope object.
"""
## Additional task queueing is forbidden on the worker's event loop:
https://html.spec.whatwg.org/multipage/workers.html#worker-event-loop
"""
Once the WorkerGlobalScope's closing flag is set to true, the event loop's task queues must discard any further tasks that would be added to them (tasks already on the queue are unaffected except where otherwise specified). Effectively, once the closing flag is true, timers stop firing, notifications for all pending background operations are dropped, etc.
"""
## The MessagePorts do not get to bypass this
Their port message queues as defined at https://html.spec.whatwg.org/multipage/web-messaging.html#port-message-queue are task sources: "When a port's port message queue is enabled, the event loop must use it as one of its task sources."
Task sources map to task queues, and because of the closing flag, those are all closed:
https://html.spec.whatwg.org/multipage/webappapis.html#task-source
"""
Each task is defined as coming from a specific task source. All the tasks from one particular task source and destined to a particular event loop (e.g. the callbacks generated by timers of a Document, the events fired for mouse movements over that Document, the tasks queued for the parser of that Document) must always be added to the same task queue, but tasks from different task sources may be placed in different task queues.
"""
And that's what MessagePort's postMessage is doing at step 7 of https://html.spec.whatwg.org/multipage/web-messaging.html#dom-messageport-postmessage: "Add a task that runs the following steps to the port message queue of targetPort". That task will never run so we won't get to 7.6 wherein we "fire an event". (And it's "fire an event" that triggers "dispatching" in step 5.)
For BroadcastChannel's postMessage at step 7 of https://html.spec.whatwg.org/multipage/web-messaging.html#dom-broadcastchannel-postmessage the closing worker is excluded by "a global object that is a WorkerGlobalScope object whose closing flag is false and whose worker is not a suspendable worker." Now, there was some discussion on https://github.com/whatwg/html/issues/1371 about the impact of a BroadcastChannel's closing flag (which resulted in changes made on bug 1278340), but that's a separate issue.
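Concretely, my reading is (hypothetical sketch):

// In a worker that has already called self.close():
const bc = new BroadcastChannel("test");
bc.postMessage("sent after close"); // other globals still receive this
bc.onmessage = function() {
  // Never fires here: this global's closing flag is true, so step 7
  // excludes it as a destination.
};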
Flags: needinfo?(amarchesini)
Comment 13•7 years ago
Comment on attachment 8964247 [details] [diff] [review]
part 1 - MessagePort
Review of attachment 8964247 [details] [diff] [review]:
-----------------------------------------------------------------
::: testing/web-platform/tests/webmessaging/message-channels/worker-post-after-close.html
@@ +9,5 @@
> + function workerCode() {
> + onmessage = function(e) {
> + close();
> + var mc = new MessageChannel();
> + mc.port1.onmessage = function() {
Setting to r- for now since I don't think the spec should allow this onmessage event to fire. Based on my needinfo, maybe I have this wrong though and I'll r+ again if that's the case.
Attachment #8964247 - Flags: review+ → review-
Assignee
Comment 14•7 years ago
> Setting to r- for now since I don't think the spec should allow this
> onmessage event to fire. Based on my needinfo, maybe I have this wrong
> though and I'll r+ again if that's the case.
Right. I've corrected the test and I'm going to upload a new version of it.
Flags: needinfo?(amarchesini)
Comment 15•7 years ago
Comment on attachment 8964998 [details] [diff] [review]
part 4 - Shutdown when closing
Review of attachment 8964998 [details] [diff] [review]:
-----------------------------------------------------------------
::: dom/workers/WorkerPrivate.cpp
@@ +963,5 @@
>
> virtual bool Notify(WorkerStatus aStatus) override { return true; }
> };
>
> +// A runnable to cancel the worker from the parent process.
nit: s/parent process/parent thread/.
Also, can you expand on this comment?
@@ +4525,2 @@
> if (aStatus == Closing) {
> + if (!mSyncLoopStack.IsEmpty()) {
(Patch meta: the "!" gets removed in the next patch.)
Attachment #8964998 - Flags: review?(bugmail) → review+
Comment 16•7 years ago
Comment on attachment 8965001 [details] [diff] [review]
part 2 - timeout
Review of attachment 8965001 [details] [diff] [review]:
-----------------------------------------------------------------
::: dom/workers/WorkerPrivate.cpp
@@ +72,5 @@
>
> // A shrinking GC will run five seconds after the last event is processed.
> #define IDLE_GC_TIMER_DELAY_SEC 5
>
> +#define CANCELING_TIMEOUT 30000 // 30 seconds
nit: suffix with units to be CANCELING_TIMEOUT_MS (especially since the preceding constants use a different unit)
q: Given that you're using this in a test, wouldn't it be better to have this controlled by an unlisted pref so that tests don't have to wait 30 seconds? Note that it's helpful to document, near the pref string (which searches will find), whether dynamically changing the preference has an effect. (It can be useful for tests to only trigger timeout termination of something after they've established that the expected steady state has been reached.)
@@ +4577,5 @@
> +
> + // At the same time, we want to be sure that we interrupt infinite loops.
> + // The following runnable starts a timer that cancels the worker, from the
> + // parent thread, after CANCELING_TIMEOUT milliseconds.
> + RefPtr<CancelingWithTimeoutOnParentRunnable> rr =
It'd be great to expand this comment (or above at the mSyncLoopStack.IsEmpty() conditional) to explain why it's not necessary to start the timer in the sync loop case. Based on my cursory investigation, it seems like it's possible for the worker to be in a sync XHR, have that complete, and then end up in an infinite loop that we don't cancel.
@@ +4962,5 @@
> + // This is not needed if we are already in an advanced shutdown state.
> + {
> + MutexAutoLock lock(mMutex);
> + if (ParentStatus() >= Terminating) {
> + return;
Should you be nulling out mCancelingTimer on the way out here to make the invariant more clear? The failure case below also will leave mCancelingTimer around. A ScopeExit could work, or if you don't care, maybe a comment above mCancelingTimer that it's okay to leave around uninitialized in the failure case.
::: dom/workers/test/test_WorkerDebugger.xul
@@ +109,5 @@
> + promise = waitForRegister(SHARED_WORKER_URL);
> + worker = new SharedWorker(SHARED_WORKER_URL);
> + sharedDbg = await promise;
> +
> + info("Send a message to the shared worker to tell it to close " +
I think it would be preferable to make the termination timeout pref controlled, as covered elsewhere in the review. But if not, I think it's very appropriate to put a big comment here noting that this step is expected to take 30 seconds. Also, it seems very likely you'll need to do requestLongerTimeout(2) or more. I think the file doesn't currently use it, which means the timeout is 45 seconds, so this change is very likely to push the test right up against intermittent failure.
Attachment #8965001 - Flags: review?(bugmail) → review+
Assignee
Comment 17•7 years ago
Here I make the timeout value configurable via pref.
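A test can then lower the timeout before spawning the worker; a minimal sketch, assuming the pref name and millisecond unit from this patch:

await SpecialPowers.pushPrefEnv({
  set: [["dom.worker.canceling.timeout", 5000]], // 5 seconds instead of the 30-second default
});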
Attachment #8968410 - Flags: review?(bugmail)
Comment 18•7 years ago
Comment on attachment 8968410 [details] [diff] [review]
shutdown 3
Review of attachment 8968410 [details] [diff] [review]:
-----------------------------------------------------------------
::: dom/base/DOMPrefs.cpp
@@ +55,5 @@
> +#define DOM_UINT32_PREF(name, pref, defaultValue) \
> + /* static */ uint32_t \
> + DOMPrefs::name() \
> + { \
> + static bool initialized = false; \
nit: This upper section seems accidentally over-indented.
::: dom/base/DOMPrefsInternal.h
@@ +56,5 @@
> DOM_WEBIDL_PREF(NetworkInformationEnabled)
> DOM_WEBIDL_PREF(FetchObserverEnabled)
> DOM_WEBIDL_PREF(PerformanceObserverEnabled)
> +
> +DOM_UINT32_PREF(WorkerCancelingTimeout, "dom.worker.canceling.timeout",
Please rename to include units. We have actually had issues related to this in ServiceWorkers before, so this isn't just me wishing I'd been a physicist or anything. Based on the existing prior art in "about:config" when I search for "milli", I propose:
DOM_UINT32_PREF(WorkerCancelingTimeoutMillis, "dom.worker.canceling.timeoutMilliseconds"
::: dom/workers/WorkerPrivate.cpp
@@ +4966,5 @@
> WorkerPrivate::StartCancelingTimer()
> {
> AssertIsOnParentThread();
>
> + auto raii = MakeScopeExit([&] {
nit: No need to actually fix, but "cleanup" or "errorCleanup" seem more useful than name-checking RAII.
@@ +4984,5 @@
> return;
> }
> }
>
> + uint32_t cancelingTimeout = DOMPrefs::WorkerCancelingTimeout();
Units:
uint32_t cancelingTimeoutMillis = DOMPrefs::WorkerCancelingTimeoutMillis();
::: dom/workers/test/test_WorkerDebugger.xul
@@ +23,5 @@
> function test() {
> (async function() {
> SimpleTest.waitForExplicitFinish();
>
> + // Only 5 seconds of canceling timeout.
Please justify the choice of 5 seconds. Perhaps something like:
"""
When the closing process begins, we schedule a timer to terminate the worker in case it's in an infinite loop, which is exactly what we do in this test. We want a duration long enough that we can be confident that the infinite loop was entered as measured by performance.now() and that we terminated it, but not as long as our 30 second default we currently ship.
"""
Going along with that, I think it's appropriate below to use performance.now() to ensure that at least 5000 milliseconds (perhaps less some epsilon depending on how your fuzzy-fox mitigations mess with things) have elapsed. This gives us greater confidence that we were actually testing the termination timer here, and not that some other mechanism came into play.
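i.e., something along these lines (sketch; EPSILON is a hypothetical fuzz allowance for timer clamping):

const start = performance.now();
// ... wait here for the canceling timer to terminate the worker ...
const elapsed = performance.now() - start;
ok(elapsed >= 5000 - EPSILON,
   "the canceling timer, not some other mechanism, terminated the worker");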
::: modules/libpref/Preferences.cpp
@@ +4958,5 @@
> uint32_t,
> bool);
>
> +template nsresult
> +Preferences::AddAtomicUintVarCache(Atomic<uint32_t, SequentiallyConsistent>*,
Restating: This instantiation is necessary because we use Atomic<uint32_t> in DOMPrefs, which gets the default ordering of SequentiallyConsistent, and there was no instantiation of this yet.
Attachment #8968410 - Flags: review?(bugmail) → review+
Comment 19•7 years ago
Pushed by amarchesini@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/ed93e3547096
WorkerRef should make APIs able to work until the worker is completely terminated - part 1 - MessagePort, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/629e499c0f75
WorkerRef should make APIs able to work until the worker is completely terminated - part 2 - BroadcastChannel, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/246fb3ee14cd
WorkerRef should make APIs able to work until the worker is completely terminated - part 3 - WebSocket, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/0d2088370d0c
Better shutdown approach for Workers - part 1 - CancelingRunnable, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/cbe3ad4833b6
Better shutdown approach for Workers - part 2 - Timeout + ControlRunnable, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/08239799d43e
Better shutdown approach for Workers - part 3 - Preference for time worker timeout, r=asuth
Comment 20•7 years ago
Backed out 7 changesets (bug 1450644, bug 1454633) for failing browser_storage_permission.js
https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=f4989e0da2216f6cf3fbe3d3a616d31447f068ec&filter-searchStr=browser-chrome&filter-job_type_symbol=bc3&filter-job_type_symbol=bc6
https://hg.mozilla.org/integration/mozilla-inbound/rev/73615fe67ab6424c9334f95c39a74fc224977993
https://treeherder.mozilla.org/logviewer.html#?job_id=174187061&repo=mozilla-inbound
20:11:19 INFO - WARNING | IO Completion Port failed to signal process shutdown
20:11:19 INFO - Parent process 5084 exited with children alive:
20:11:19 INFO - PIDS: 7296
20:11:19 INFO - Attempting to kill them, but no guarantee of success
20:11:19 INFO - TEST-INFO | Main app process: exit 0
20:11:19 ERROR - 948 ERROR TEST-UNEXPECTED-FAIL | dom/serviceworkers/test/browser_storage_permission.js | leaked 2 window(s) until shutdown [url = http://mochi.test:8888/browser/dom/serviceworkers/test/empty.html?storage_permission]
20:11:19 INFO - TEST-INFO | dom/serviceworkers/test/browser_storage_permission.js | windows(s) leaked: [pid = 6728] [serial = 40], [pid = 6728] [serial = 42]
20:11:19 INFO - runtests.py | Application ran for: 0:03:30.185000
20:11:19 INFO - zombiecheck | Reading PID log: c:\users\task_1523994445\appdata\local\temp\tmpa_ftpcpidlog
20:11:19 INFO - ==> process 5084 launched child process 8040 ("Z:\task_1523994445\build\application\firefox\firefox.exe" -contentproc --channel="5084.0.1628532536\581954630" -childID 1 -isForBrowser -prefsHandle 2040 -prefsLen 16433 -schedulerPrefs 0001,2 -greomni "Z:\task_1523994445\build\application\firefox\omni.ja" -appomni "Z:\task_1523994445\build\application\firefox\browser\omni.ja" -appdir "Z:\task_1523994445\build\application\firefox\browser" - 5084 "\\.\pipe\gecko-crash-server-pipe.5084" 2084 tab)
20:11:19 INFO - ==> process 5084 launched child process 6728 ("Z:\task_1523994445\build\application\firefox\firefox.exe" -contentproc --channel="5084.6.2006398579\1445751292" -childID 2 -isForBrowser -prefsHandle 2432 -prefsLen 16433 -schedulerPrefs 0001,2 -greomni "Z:\task_1523994445\build\application\firefox\omni.ja" -appomni "Z:\task_1523994445\build\application\firefox\browser\omni.ja" -appdir "Z:\task_1523994445\build\application\firefox\browser" - 5084 "\\.\pipe\gecko-crash-server-pipe.5084" 2444 tab)
20:11:19 INFO - ==> process 5084 launched child process 4984 ("Z:\task_1523994445\build\application\firefox\firefox.exe" -contentproc --channel="5084.12.1142979272\2112065260" -childID 3 -isForBrowser -prefsHandle 3464 -prefsLen 22004 -schedulerPrefs 0001,2 -greomni "Z:\task_1523994445\build\application\firefox\omni.ja" -appomni "Z:\task_1523994445\build\application\firefox\browser\omni.ja" -appdir "Z:\task_1523994445\build\application\firefox\browser" - 5084 "\\.\pipe\gecko-crash-server-pipe.5084" 3336 tab)
20:11:19 INFO - ==> process 5084 launched child process 3088 ("Z:\task_1523994445\build\application\firefox\firefox.exe" -contentproc --channel="5084.18.301647013\1202071366" -childID 4 -isForBrowser -prefsHandle 3332 -prefsLen 22099 -schedulerPrefs 0001,2 -greomni "Z:\task_1523994445\build\application\firefox\omni.ja" -appomni "Z:\task_1523994445\build\application\firefox\browser\omni.ja" -appdir "Z:\task_1523994445\build\application\firefox\browser" - 5084 "\\.\pipe\gecko-crash-server-pipe.5084" 3628 tab)
20:11:19 INFO - zombiecheck | Checking for orphan process with PID: 8040
20:11:19 INFO - zombiecheck | Checking for orphan process with PID: 6728
20:11:19 INFO - zombiecheck | Checking for orphan process with PID: 4984
20:11:19 INFO - zombiecheck | Checking for orphan process with PID: 3088
20:11:19 INFO - Stopping web server
20:11:19 INFO - Stopping web socket server
20:11:19 INFO - Stopping ssltunnel
20:11:19 INFO - TEST-INFO | leakcheck | default process: leak threshold set at 0 bytes
20:11:19 INFO - TEST-INFO | leakcheck | plugin process: leak threshold set at 0 bytes
20:11:19 INFO - TEST-INFO | leakcheck | tab process: leak threshold set at 1000 bytes
20:11:19 INFO - TEST-INFO | leakcheck | geckomediaplugin process: leak threshold set at 20000 bytes
20:11:19 INFO - TEST-INFO | leakcheck | gpu process: leak threshold set at 0 bytes
https://treeherder.mozilla.org/logviewer.html#?job_id=174181622&repo=mozilla-inbound&lineNumber=2211
Flags: needinfo?(amarchesini)
Assignee
Updated•7 years ago
Flags: needinfo?(amarchesini)
Comment 21•7 years ago
Pushed by amarchesini@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/c8c11bd92218
WorkerRef should make APIs able to work until the worker is completely terminated - part 1 - MessagePort, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/f2341e59a4b5
WorkerRef should make APIs able to work until the worker is completely terminated - part 2 - BroadcastChannel, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/0d3ab01ff51b
WorkerRef should make APIs able to work until the worker is completely terminated - part 3 - WebSocket, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/52a992c65070
Better shutdown approach for Workers - part 1 - CancelingRunnable, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/05cb4efbcd69
Better shutdown approach for Workers - part 2 - Timeout + ControlRunnable, r=asuth
https://hg.mozilla.org/integration/mozilla-inbound/rev/61b3c0c87bb5
Better shutdown approach for Workers - part 3 - Preference for time worker timeout, r=asuth
Created web-platform-tests PR https://github.com/w3c/web-platform-tests/pull/10516 for changes under testing/web-platform/tests
Upstream web-platform-tests status checks passed, PR will merge once commit reaches central.
Upstream PR merged
Comment 25•7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/c8c11bd92218
https://hg.mozilla.org/mozilla-central/rev/f2341e59a4b5
https://hg.mozilla.org/mozilla-central/rev/0d3ab01ff51b
https://hg.mozilla.org/mozilla-central/rev/52a992c65070
https://hg.mozilla.org/mozilla-central/rev/05cb4efbcd69
https://hg.mozilla.org/mozilla-central/rev/61b3c0c87bb5
Status: NEW → RESOLVED
Closed: 7 years ago
status-firefox61:
--- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla61