Closed Bug 776741 Opened 13 years ago Closed 13 years ago

Talos Tp5, Ts, and Trace Malloc regressions from bug 761723 (implement function toString by saving source)

Categories

Product: Core :: JavaScript Engine
Type: defect
Version: 17 Branch
Platform: x86_64 Linux
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
Tracking Status
firefox17 - ---

People

(Reporter: mbrubeck, Assigned: Benjamin)

References

Details

(Keywords: perf, regression, Whiteboard: [MemShrink:P1][js:p1:fx18])

It looks like bug 761723 caused several performance regressions, including:

Regression: Tp5 No Network Row Major MozAfterPaint increase 8.83% on Linux Mozilla-Inbound-Non-PGO
----------------------------------------------------------------------
    Previous: avg 309.754  stddev 3.220  of 30 runs up to revision a10834675f4d
    New     : avg 337.101  stddev 2.014  of 5 runs since revision 8f93bcc5bc56
    Change  : +27.348 (8.83% / z=8.493)
    Graph   : http://mzl.la/MzVHrH

Changeset range:
http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=a10834675f4d&tochange=8f93bcc5bc56

For other possibly related regressions, see
https://groups.google.com/d/topic/mozilla.dev.tree-management/ViqVMquqxbM/discussion
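(For reference, the z score in the report above is consistent with the usual definition: the change in means divided by the baseline standard deviation. A throwaway C++ check with the report's numbers hard-coded; this is not Talos code, just arithmetic:)

#include <cstdio>

int main()
{
    // Figures copied from the Talos report above.
    double oldAvg = 309.754, oldStddev = 3.220;
    double newAvg = 337.101;

    double change = newAvg - oldAvg;          // ~ +27.35 (report: +27.348)
    double pct    = 100.0 * change / oldAvg;  // ~ 8.83%
    double z      = change / oldStddev;       // ~ 8.49  (report: z=8.493)

    printf("Change: %+.3f (%.2f%% / z=%.3f)\n", change, pct, z);
    return 0;
}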
It looks like bug 776200 mostly fixed the Tp5 regression, but did not fix the Trace Malloc MaxHeap or Ts Paint regressions.
Depends on: 776200
The Trace Malloc MaxHeap regression is probably expected.
Depends on: 777190
Bug 776700 (disabling source compression) fixed some of the speed regressions, but worsened the MaxHeap regression dramatically. Will compression be re-enabled after the remaining SunSpider regressions in bug 776233 are fixed, and will that undo the additional MaxHeap regression?
Blocks: 776700
We can't ship with these memory regressions.
Whiteboard: [MemShrink:P1]
(In reply to Matt Brubeck (:mbrubeck) from comment #3)
> Bug 776700 (disabling source compression) fixed some of the speed
> regressions, but worsened the MaxHeap regression dramatically. Will
> compression be re-enabled after the remaining SunSpider regressions in bug
> 776233 are fixed, and will that undo the additional MaxHeap regression?

Compression will be reenabled as soon as I get a review on bug 777190.
Whiteboard: [MemShrink:P1] → [MemShrink:P1][js:p1:fx1]
This has been fixed by turning off the new toSource implementation. Of course, this does need to be addressed before it can be turned back on.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
David, only compression has been preffed off, which is why memory usage is so high.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Compression has been reenabled! My guess is the startup time regression is from the need to load the XDR of inline XUL/XBL scripts, which contain the source code. If we could show that no addons (or at least not very many) depended on these sources, we could just drop them.
Since compression was re-enabled in bug 777190:

* Trace Malloc MaxHeap has decreased back to pre-bug 776700 levels, but is still regressed compared to pre-bug 761723 levels.
* Similarly, Ts is partially improved but is still regressed compared to its original levels.
* Tp5 No Network Row Major MozAfterPaint, which had been improved by bug 776700, is now regressed by about 10% again.

Of course, other changes may have also affected these benchmarks in the meantime and thrown off the comparisons, but I believe that we are essentially back to the same regressions we saw when bug 761723 first landed.

If there is any work going on to fix these regressions, can you add it as a dependency of this bug? If there is not, can we discuss whether bug 761723 should be backed out?
I have some ideas like the one in comment #8. I can file bugs blocking this one for them. We do have to keep web JS source in memory one way or another. As you can see with "Tp5 No Network Row Major MozAfterPaint", it's basically a classic performance-memory tradeoff. Compression settings can be tweaked; see the sketch after the dependency notes below.
Depends on: 779724
Depends on: 779975
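(On "compression settings can be tweaked": the usual knob in zlib is the compression level, which trades CPU time against compressed size. A self-contained illustration in C++; this is a hypothetical example, not SpiderMonkey code:)

#include <zlib.h>
#include <cstdio>
#include <cstring>

int main()
{
    // Stand-in for real script source.
    const char* src = "function f() { return 42; } function g() { return f() + f(); }";
    uLong srcLen = (uLong) strlen(src);

    // Z_BEST_SPEED (1) is fastest, Z_BEST_COMPRESSION (9) is smallest,
    // Z_DEFAULT_COMPRESSION (-1) picks a middle ground (level 6).
    const int levels[] = { Z_BEST_SPEED, Z_DEFAULT_COMPRESSION, Z_BEST_COMPRESSION };

    for (int level : levels) {
        Bytef dest[256];            // more than compressBound(srcLen) for this input
        uLongf destLen = sizeof(dest);
        if (compress2(dest, &destLen, (const Bytef*) src, srcLen, level) == Z_OK)
            printf("level %2d: %lu -> %lu bytes\n",
                   level, (unsigned long) srcLen, (unsigned long) destLen);
    }
    return 0;
}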
Whiteboard: [MemShrink:P1][js:p1:fx1] → [MemShrink:P1][js:p1:fx17]
Memshrinkers: I don't know what exactly Trace Malloc MaxHeap is doing, but I think it makes the problem look worse than it is. Script sources never take up more than 2% of explicit memory usage for me. For example, if I run the MemBuster test [1], about:memory consistently shows script-sources as 1-2% of total explicit allocation.

[1] http://random.pavlov.net/membuster/index.html
Assignee: general → benjamin
Whiteboard: [MemShrink:P1][js:p1:fx17] → [MemShrink:P1][js:p1:fx18]
Naveed and I discussed the MaxHeap memory usage. We guess that MaxHeap is probably measuring the time we have both compressed and uncompressed sources in memory, and about:memory provides a better measure of the impact of this change.
Time for a summary!

AIUI source compression is now on, but not for chrome code, and the main issue is that while compression is occurring we have both compressed and uncompressed code live (and possibly some intermediate zlib data structures?), so that creates a short-lived memory spike.

The MaxHeap regression seems to be about 5%, which is not small -- I judge that from this:
http://graphs.mozilla.org/graph.html#tests=[[29,1,17],[29,1,7],[29,1,22],[29,1,18],[29,1,6],[29,1,8]]&sel=1342725187000,1342897987000&displayrange=7&datatype=running

Peak memory consumption is a reasonably important figure -- that's what causes OOMs, for example.
(In reply to Nicholas Nethercote [:njn] from comment #14)
> Time for a summary!
>
> AIUI source compression is now on, but not for chrome code

It's not on for "most" chrome code. It is still used for XBL and XUL inline scripts.
I talked to Benjamin about this. He has an idea to reduce the peak usage somewhat.

Currently, when compressing, we allocate a second buffer the same size as the original and compress into that. Once compression is finished, we realloc that second buffer down to the final compressed size. (In a pathological case where compression increases the size of the code, we end up just copying the source code verbatim into the second buffer.)

Benjamin's idea is to be more optimistic about the compression. In practice, the compressed size is typically 1/4 to 1/3 of the original. So we could make the second buffer half the size of the original and compress into that. If the compressed source fits, we realloc as usual and have reduced the peak by 25%. If it doesn't fit, we realloc the second buffer up to the same size as the original and try again. As a result, in the 99% (or whatever the number is) of cases where the compressed size is 1/2 or less, we save space; in the remaining cases we take a little longer but probably aren't worse off space-wise (well, it depends on exactly what happens when realloc'ing the second buffer). A sketch of this scheme is below.
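(Here is a minimal sketch of that optimistic scheme, written against zlib for illustration. The function name, buffer handling, and error paths are assumptions, not SpiderMonkey's actual implementation:)

// Hypothetical sketch, not SpiderMonkey code: optimistically compress
// `src` into a half-size buffer, falling back to a full-size buffer
// (and then to a verbatim copy) if the source doesn't compress enough.
#include <zlib.h>
#include <cstdlib>
#include <cstring>

unsigned char* compressSourceOptimistic(const unsigned char* src,
                                        size_t srcLen, size_t* outLen)
{
    // Optimistic attempt: assume the result fits in half the original
    // size (typical ratios above are 1/4 to 1/3). Peak usage is then
    // 1.5x the source instead of 2x, i.e. a 25% reduction in the spike.
    size_t cap = srcLen / 2 ? srcLen / 2 : 1;
    unsigned char* buf = (unsigned char*) malloc(cap);
    if (!buf)
        return nullptr;

    uLongf destLen = cap;
    int rv = compress2(buf, &destLen, src, srcLen, Z_DEFAULT_COMPRESSION);

    if (rv == Z_BUF_ERROR) {
        // Didn't fit: grow to the full original size and retry. Peak
        // usage in this (rare) path matches the current scheme.
        unsigned char* grown = (unsigned char*) realloc(buf, srcLen);
        if (!grown) { free(buf); return nullptr; }
        buf = grown;
        destLen = srcLen;
        rv = compress2(buf, &destLen, src, srcLen, Z_DEFAULT_COMPRESSION);
    }

    if (rv != Z_OK) {
        // Incompressible even at full size: store the source verbatim,
        // as in the pathological case described above. A real version
        // would also record that the buffer holds raw, not compressed,
        // bytes.
        unsigned char* full = (unsigned char*) realloc(buf, srcLen);
        if (!full) { free(buf); return nullptr; }
        memcpy(full, src, srcLen);
        *outLen = srcLen;
        return full;
    }

    // Shrink to the final compressed size, as the current code does.
    unsigned char* shrunk = (unsigned char*) realloc(buf, destLen);
    *outLen = destLen;
    return shrunk ? shrunk : buf;
}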
Depends on: 804857
We expect minimal user impact from this memory increase, and the benefits to JS development are very large. Untracking for release, although we'd consider a low-risk uplift to Aurora if nominated.
Bug 804857 improved the situation somewhat. I'm going to close this bug because we don't foresee any other improvements.
Status: REOPENED → RESOLVED
Closed: 13 years ago13 years ago
Resolution: --- → FIXED