Closed Bug 799122 Opened 12 years ago Closed 11 years ago

Consider allocating dense arrays more eagerly

Categories: Core :: JavaScript Engine, defect
Status: RESOLVED FIXED
Target Milestone: mozilla25
People: (Reporter: jandem, Assigned: jandem)
References: Blocks 2 open bugs
Attachments: 1 file, 1 obsolete file (patch, 1.16 KB, luke: review+)
Kraken's audio tests use this pattern:

    while (i--) {
      spectrum[i] = 2 * Math.sqrt(real[i] * real[i] + imag[i] * imag[i]) / bufferSize;
    }

Since spectrum is filled backwards, it's not a dense array and the SETELEM is very slow. The array is created using "new Array(x)", and V8 eagerly allocates the elements vector if x <= ~10k. The attached patch hacks js_Array to do this for x <= 1000, and this helps Kraken's audio tests a lot:

    =============================================================================
    ** TOTAL **:        1.081x as fast    1736.0ms +/- 0.4%    1606.3ms +/-
    =============================================================================
    audio:              1.26x as fast      636.9ms +/- 0.5%     506.9ms +/- 0.6%
      beat-detection:   1.63x as fast      208.9ms +/- 0.8%     127.9ms +/- 1.0%
      dft:              ??                 145.9ms +/- 0.4%     146.7ms +/- 1.6%
      fft:              1.64x as fast      127.9ms +/- 0.9%      78.2ms +/- 0.8%
      oscillator:       -                  154.2ms +/- 0.3%     154.1ms +/- 0.3%

Memory usage is the main concern here and we should be careful, but since V8 does this already we should at least consider doing something similar. Any thoughts?
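The pattern above can be reduced to a standalone sketch. The input values in the example are illustrative, not from the benchmark; the point is only that `spectrum` is written back-to-front after `new Array(n)`:

```javascript
// Reduced form of the Kraken audio pattern: `spectrum` gets its length
// from `new Array(n)` but no elements vector yet, and is then written
// from the highest index down. With lazy element allocation the first
// write at index n-1 makes the array sparse; with eager allocation it
// stays dense and the stores stay fast.
function computeSpectrum(real, imag, bufferSize) {
  const n = real.length;
  const spectrum = new Array(n); // length n, no elements written yet
  let i = n;
  while (i--) {
    spectrum[i] = 2 * Math.sqrt(real[i] * real[i] + imag[i] * imag[i]) / bufferSize;
  }
  return spectrum;
}
```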
Comment 1 • 12 years ago

Back-filling seems to be a common pattern, writing to high elements without later filling in low ones seems weird, and those are nice speedups, so I think the change sounds good.

This change could also be a memory *savings* when we keep arrays dense that would have been sparse (sparse arrays get own shapes, IIRC, which incur a per-element overhead of about 6 words). Perhaps you could look at about:memory before/after on membuster just to be safe?
Comment 2 • 12 years ago

(In reply to Luke Wagner [:luke] from comment #1)
> Back-filling seems to be a common pattern

Yes. I consider this a long-standing performance fault in SpiderMonkey; we have had previous examples of this in web pages, and I listed it as a perf fault in my "Know Your Engines" talk. I would love for this to get fixed. Copying V8's approach seems pretty safe. And nice find.
Comment 3 • 12 years ago
Should this be prioritized? Sounds like a potentially significant win.
Comment 4 • 12 years ago
It'll happen as part of bug 586842, which *is* prioritized. It's just a matter of other stuff to work through along the way (currently bug 792108, then others after it), plus the normal stream of reviews to do.
Blocks: 586842
Comment 5 • 12 years ago
Thanks for the insight. It was not obvious to the untrained eye that those two bugs were linked. It's much easier to follow now.
Comment 6 • 11 years ago
I believe Brian did some work recently that is very similar to this bug. I could be wrong, though.
Comment 7 • 11 years ago
Bug 835102 should greatly improve our handling of arrays that start out sparse and are then filled back in. It would still be better, though, to allocate large (but not gigantic) arrays eagerly, which would both avoid the initial sparse phase and improve long-term performance (no need for a hole check when setting elements).
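A reduced example of the case described here (an assumed shape, not code from the bug): under lazy element allocation, an array whose first write lands at the top index starts out sparse, and every later store into the gap pays for it.

```javascript
// Hypothetical reduction of the "starts sparse, filled back in" case
// from comment 7. With lazy element allocation the store at index n-1
// creates a sparse array and later stores need hole checks; with
// elements allocated eagerly at `new Array(n)` the array stays dense.
function backfill(n) {
  const a = new Array(n);
  a[n - 1] = n - 1;          // first write at the highest index
  for (let i = n - 2; i >= 0; i--) {
    a[i] = i;                // fill the remaining indices downwards
  }
  return a;
}
```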
Comment 8 • 11 years ago
So, sounds like some relatively low-hanging perf fruit here?
Comment 9 • 11 years ago

(In reply to Paul Wright from comment #8)
> So, sounds like some relatively low-hanging perf fruit here?

Yeah.
Comment 10 • 11 years ago

Do you have any reservations about this, Jan? Also, you picked 1000 as the cutoff; do you know what V8 chose, or whether they used some other heuristic?
Comment 11 • 11 years ago (Assignee)

(In reply to Luke Wagner [:luke] from comment #10)
> Do you have any reservations about this Jan? Also, you picked 1000 as the
> cutoff; do you know what v8 chose or whether they used some other heuristic?

They use 10k as far as I can see (kInitialMaxFastElementArray). I went with 1000 because our values are twice as large, and it seemed good to be conservative. I will check how much this helps Kraken nowadays...
Comment 12 • 11 years ago (Assignee)

(In reply to Jan de Mooij [:jandem] from comment #11)
> I will check how much this helps Kraken nowadays...

No effect on Kraken/SS/Octane; the Kraken problem probably got fixed by bug 835102. Still, this should help bug 874174 and similar code. I will try to fix this later this week.
Assignee: general → jdemooij
Status: NEW → ASSIGNED
Comment 13 • 11 years ago
I think we should consider doing this. I think somebody in #jsapi recently had a test case that this should speed up.
Comment 14 • 11 years ago (Assignee)

When I wrote comment 12 ("No effect on Kraken/SS/Octane"), I tested eagerly allocating elements for arrays with length <= 1000. Guess what: Kraken audio-fft and audio-beat-detection allocate arrays of length 1024. After upping the limit to 2048, this wins at least 12% on Kraken audio-fft, 10% on audio-beat-detection, and ~0.4 ms on SS crypto-aes.
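The cutoff settled on here can be sketched as follows. The constant mirrors the 2048 limit from this comment, but the function name is hypothetical; the real change lives in SpiderMonkey's js_Array allocation path, not in JS.

```javascript
// Sketch of the eager-allocation heuristic from this bug: allocate
// the elements vector up front for `new Array(n)` when n is at or
// below the limit. 2048 covers Kraken's 1024-element arrays; the name
// `shouldAllocateElementsEagerly` is illustrative, not SpiderMonkey's.
const EAGER_ALLOCATION_LIMIT = 2048;

function shouldAllocateElementsEagerly(length) {
  return length <= EAGER_ALLOCATION_LIMIT;
}
```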
Attachment #669154 - Attachment is obsolete: true
Attachment #770193 - Flags: review?(luke)
Comment 15 • 11 years ago

Comment on attachment 770193 [details] [diff] [review]
Patch

\o/
Attachment #770193 - Flags: review?(luke) → review+
Comment 16 • 11 years ago (Assignee)
https://hg.mozilla.org/integration/mozilla-inbound/rev/370e660265f3
Comment 17 • 11 years ago
https://hg.mozilla.org/mozilla-central/rev/370e660265f3
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla25