Worker is slower than Main Thread

VERIFIED FIXED in mozilla28

Status

Core
JavaScript Engine: JIT
VERIFIED FIXED
Opened: 5 years ago
Last modified: 4 years ago

People

(Reporter: bdahl, Assigned: jandem)

Tracking

Trunk
mozilla28
Points:
---
Dependency tree / graph

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [Shumway:P1])

Attachments

(2 attachments)

(Reporter)

Description

5 years ago
Created attachment 817970 [details]
main-vs-worker-perf.html

This was noticed in pdf.js, where decoding some JPEG 2000 images in a worker takes nearly 37s, as opposed to just 4s on the main thread. However, the performance can be made equal by applying this patch: https://github.com/mozilla/pdf.js/pull/3554/files.

As bhackett mentioned in IRC "just a guess, but workers don't have parallel compilation enabled, and its presence affects the length of scripts we decide to ion compile"

More relevant conversation at http://logbot.glob.com.au/?c=mozilla%23jsapi#c272673

Attached demo also illustrates the issue.

Comment 1

5 years ago
Specifically, from IRC, it seems like we could compile much bigger scripts in workers (regardless of whether we had worker threads) since we are more able to tolerate the compilation janks.
This is very relevant for Shumway, too. We already do all SWF parsing in a worker, and will move the VM into a worker as well, soon.
Blocks: 885526

Updated

5 years ago
Whiteboard: [Shumway:P1]
OS: Mac OS X → All
Hardware: x86 → All
Brendan: are you sure you linked the correct IRC conversation? That IRC log has no comments from you or bhackett or discussion of parallel compilation. :)
Flags: needinfo?(bdahl)
(Reporter)

Comment 4

5 years ago
Hmm...seems the link has gone bad. Try:
http://logbot.glob.com.au/?c=mozilla%23jsapi&s=16+Oct+2013&e=16+Oct+2013&h=parallel+compilation#c272673
Flags: needinfo?(bdahl) → needinfo?(cpeterson)
Flags: needinfo?(cpeterson)
(Assignee)

Comment 5

5 years ago
I think we want to pass a flag/enum to JS_NewRuntime to indicate it's a worker runtime. We can then use this information in CheckScriptSize, which is where we determine whether a script is too large for Ion.

I will get to this later this week, but anybody feel free to steal :)
Flags: needinfo?(jdemooij)
(Assignee)

Comment 6

5 years ago
Created attachment 832148 [details] [diff] [review]
Patch

This patch adds a friend API (JS_SetIsWorkerRuntime) to mark a JSRuntime as being a worker runtime. For such runtimes, it increases the size limits to 8 times the current limit.

I really think we should consider using helper threads for workers though...
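The mechanism this patch describes can be sketched as a small model. This is not the actual SpiderMonkey code: `Runtime`, `SetIsWorkerRuntime`, and `ScriptSmallEnoughForIon` are stand-ins for the real JSRuntime, JS_SetIsWorkerRuntime, and CheckScriptSize. Only the 2000-unit main-thread limit (quoted in comment 10) and the 8x worker multiplier (from this comment) come from the bug itself.

```cpp
#include <cassert>
#include <cstddef>

// Values modeled on this bug's discussion; the real SpiderMonkey
// identifiers and values may differ.
static const size_t MAX_MAIN_THREAD_SCRIPT_SIZE = 2000;  // quoted in comment 10
static const size_t WORKER_SCRIPT_SIZE_MULTIPLIER = 8;   // "8 times the current limit"

// Minimal stand-in for a JSRuntime carrying the new flag.
struct Runtime {
    bool isWorkerRuntime_ = false;
};

// Analogue of the JS_SetIsWorkerRuntime friend API: mark a runtime
// as backing a DOM worker.
void SetIsWorkerRuntime(Runtime& rt) {
    rt.isWorkerRuntime_ = true;
}

// Analogue of the CheckScriptSize decision: a script qualifies for Ion
// compilation if it is under the limit, and worker runtimes get a larger
// budget because they can tolerate longer compilation pauses.
bool ScriptSmallEnoughForIon(const Runtime& rt, size_t scriptSize) {
    size_t limit = MAX_MAIN_THREAD_SCRIPT_SIZE;
    if (rt.isWorkerRuntime_)
        limit *= WORKER_SCRIPT_SIZE_MULTIPLIER;
    return scriptSize <= limit;
}
```

With these numbers, a script of size 5000 would be rejected on the main-thread runtime but accepted on a worker runtime, which matches the intent of letting workers Ion-compile much bigger scripts.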
Assignee: nobody → jdemooij
Status: NEW → ASSIGNED
Attachment #832148 - Flags: review?(bhackett1024)
Flags: needinfo?(jdemooij)
Attachment #832148 - Flags: review?(bhackett1024) → review+
(Assignee)

Comment 7

5 years ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/597712fa283f

@bdahl, till, others: let me know if/how this works for Shumway and pdf.js.
Comment 8

5 years ago
I don't believe we should change the limits between workers and the main thread...

It could mean that the same script could get Ion-compiled on workers but not the main thread, and that the worker version could have very different memory usage, right? And on B2G, where we're already so memory-constrained that we're removing workers about as fast as we can, we have no way of controlling these numbers...

I'm also not sure that having an isWorkerRuntime_ flag is all that helpful. As it exists in this patch, all it does is set much larger Ion compilation limits. On B2G we may not even want that, so we'd have to set isWorkerRuntime_ to false, even though it's a worker...

Comment 9

5 years ago
It looks like the main thread threshold is still higher than the worker threshold (compare MAX_OFF_THREAD_SCRIPT_SIZE and MAX_DOM_WORKER_SCRIPT_SIZE) so this change is still rather conservative (I'm assuming b2g has not disabled all JS worker threads in the main-thread runtime because that'd be Bad).
Comment 10

5 years ago
Wait, MAX_OFF_THREAD_SCRIPT_SIZE is the same for workers and the main thread. The difference is between MAX_MAIN_THREAD_SCRIPT_SIZE (2000) and MAX_DOM_WORKER_SCRIPT_SIZE (16000), right? Or am I misunderstanding?

Comment 11

5 years ago
(In reply to ben turner [:bent] (needinfo? encouraged) from comment #10)
If you have JS worker threads in your JSRuntime (which the main-thread JSRuntime does, but worker JSRuntimes do not, iirc), then your limit is MAX_OFF_THREAD_SCRIPT_SIZE.  If you are a worker JSRuntime, your limit is MAX_DOM_WORKER_SCRIPT_SIZE.  Thus, MAX_MAIN_THREAD_SCRIPT_SIZE is mostly irrelevant to this discussion.  (Maybe I'm misreading, though.  This logic could be simplified a bit...)
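The limit-selection logic described in this comment can be sketched as follows. This is a model of the description, not the real CheckScriptSize code: the 2000 and 16000 values are quoted in comment 10, while the off-thread value here is purely illustrative (it only needs to exceed MAX_DOM_WORKER_SCRIPT_SIZE for the point above to hold).

```cpp
#include <cassert>
#include <cstddef>

static const size_t MAX_MAIN_THREAD_SCRIPT_SIZE = 2000;   // quoted in comment 10
static const size_t MAX_DOM_WORKER_SCRIPT_SIZE  = 16000;  // quoted in comment 10
static const size_t MAX_OFF_THREAD_SCRIPT_SIZE  = 100000; // illustrative value only

// Pick the Ion script-size limit the way comment 11 describes:
// if the runtime has JS helper threads for off-thread compilation
// (as the main-thread JSRuntime does), the largest limit applies;
// a worker JSRuntime gets the worker limit; otherwise the strict
// main-thread limit applies.
size_t IonScriptSizeLimit(bool canCompileOffThread, bool isWorkerRuntime) {
    if (canCompileOffThread)
        return MAX_OFF_THREAD_SCRIPT_SIZE;
    if (isWorkerRuntime)
        return MAX_DOM_WORKER_SCRIPT_SIZE;
    return MAX_MAIN_THREAD_SCRIPT_SIZE;
}
```

Under this model, MAX_MAIN_THREAD_SCRIPT_SIZE only applies when off-thread compilation is unavailable, which is why it is mostly irrelevant to the discussion above.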
https://hg.mozilla.org/mozilla-central/rev/597712fa283f
Status: ASSIGNED → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla28
Comment 13

(In reply to ben turner [:bent] (needinfo? encouraged) from comment #8)
> I don't believe we should change the limits between workers and the main
> thread...

Workers have different use cases than the main thread.  It totally makes sense to have settings which are optimal for what workers are made for.

> It could mean that the same script could get ion-compiled on workers but not
> the main thread, and that the worker version could have very different
> memory usage, right? And on B2G where we're already so memory-constrained
> that we're removing workers about as fast as we can we have no way of
> controlling these numbers...

This is irrelevant to this discussion; the removal of workers was mainly caused by the recent interest in the memory occupied by the workers.  We already have multiple bugs filed to address these issues, and Ion compilations are hardly showing up in profiles.

(In reply to Luke Wagner [:luke] from comment #9)
> (I'm
> assuming b2g has not disabled all JS worker threads in the main-thread
> runtime because that'd be Bad).

No, they have been removing DOM Workers, as the JSRuntime is taking 1 MB, and we never cared/noticed before B2G.
Comment 14

(In reply to ben turner [:bent] (needinfo? encouraged) from comment #8)
> It could mean that the same script could get ion-compiled on workers but not
> the main thread, and that the worker version could have very different
> memory usage, right?

In addition to what luke and nbp have said, note that it's actually the other way around, both before and after this patch. All this patch does is decrease the difference somewhat. Comment 1 gives a very good example of why we need this, and comment 2 makes clear why this solution, while not ideal, isn't all that bad.

Comment 15

5 years ago
(In reply to Nicolas B. Pierron [:nbp] from comment #13)
> (In reply to Luke Wagner [:luke] from comment #9)
> > (I'm assuming b2g has not disabled all JS worker threads in the main-thread
> > runtime because that'd be Bad).
> 
> No, they have been removing DOM Workers, as the JSRuntime is taking 1 MB,
> and we never cared/noticed before B2G.

I'm referring to the JS-internal worker threads of the main thread's JSRuntime.
Comment 16

(In reply to Till Schneidereit [:till] from comment #14)
> In addition to what luke and nbp have said, note that it's actually the other way around.

Ah, I see. I misread the way these numbers are used; comment 11 seems right. Since we're making the main runtime and worker runtimes more similar, I'm happy.

I still think these values should be controlled via prefs though. Desktop and B2G may need totally different values.

(In reply to Nicolas B. Pierron [:nbp] from comment #13)
> Ion compilations are hardly showing up in profiles.

You mean memory profiles, right? How much memory does ion compilation take, and how much does it have to hold onto afterwards?
Comment 17

(In reply to ben turner [:bent] (needinfo? encouraged) from comment #16)
> (In reply to Nicolas B. Pierron [:nbp] from comment #13)
> > Ion compilations are hardly showing up in profiles.
> 
> You mean memory profiles, right? How much memory does ion compilation take,
> and how much does it have to hold onto afterwards?

Yes, Ion compilations are not showing up in memory profiles, and they are garbage collected.  And even if you use some tricks, such as requestAnimationFrame keeping some IonCode alive a bit longer, it is still garbage collected every 5 minutes and on shrinking GCs.
Blocks: 941783

Comment 18

4 years ago
(In reply to Jan de Mooij [:jandem] from comment #7)
> @bdahl, till, others: let me know if/how this works for Shumway and pdf.js.

Yes, it works very well for pdf.js. Using the original test case, the workers are now as fast as the main thread in the current Nightly (as opposed to Firefox 26), and faster overall as well.

pdf.js 0.8.408 / Firefox 26
---------------------------
birdbookillustra00reedrich.pdf#pdfBug=all
Page: 1 Rendering 29284ms 
Page: 2 Rendering 29587ms 
Page: 3 Rendering 29604ms 

birdbookillustra00reedrich.pdf#pdfBug=all&disableWorker=true
Page: 1 Rendering 6953ms
Page: 2 Rendering 5083ms
Page: 3 Rendering 5038ms

pdf.js 0.8.408 / Firefox Nightly 29a1
---------------------------
birdbookillustra00reedrich.pdf#pdfBug=all
Page: 1 Rendering 4831ms 
Page: 2 Rendering 4795ms 
Page: 3 Rendering 4737ms 

birdbookillustra00reedrich.pdf#pdfBug=all&disableWorker=true
Page: 1 Rendering 4781ms
Page: 2 Rendering 4754ms
Page: 3 Rendering 4638ms
> I'm referring to the JS-internal worker threads of a the main thread's
> JSRuntime.

This use of the term "workers" for these threads is horribly confusing;  it always makes me think of DOM workers, and I've seen several other conversations where the same mistake has been made.  Since DOM workers have been around for ages, can we come up with a new term for these JS-internal threads?  Even just "threads" (with no mention of workers) is an improvement.
How about "helpers"?
> How about "helpers"?

SGTM.  Thanks.

Updated

4 years ago
Keywords: verifyme

Comment 22

4 years ago
Verified as fixed with Firefox 28.0b3 on Windows 8 64-bit, Mac OS X 10.8.5, and Ubuntu 13.04 32-bit. The values I get for both the worker and the main thread are around the same: 300-400 ms (sometimes one is higher, other times the other).

Before the fix I got values above 2500ms for the worker.
Status: RESOLVED → VERIFIED
Keywords: verifyme