Open Bug 1504926 Opened 3 years ago Updated 2 years ago

Asking for memory report can cause a virtual iloop comparing strings in JS CollectReports()


(Core :: JavaScript Engine, defect, P3)

58 Branch




(Reporter: jesup, Unassigned, NeedInfo)




(2 files)

+++ This bug was initially created as a clone of Bug #1441876 +++

If you do Measure in about:memory, sometimes a Content process will virtually iloop - taking minutes or even hours in the MemoryReporter. [snip]

A profile shows it's spending 100% of its time in StatsCellCallback(), and within that, 67% in js::HashStringChars<> and 33% in js::EqualStringsPure().


Erahm landed a patch in bug 1441876 to make hashing ropes more efficient.  Unfortunately, that's not enough:

It still spends *way* too much time trying to hash strings, to the point where the content process effectively locks up for minutes at a time.

Now in this profile we get ~60% of the hits directly in JSRope::hash(), with another ~10% in realloc/free called from hash(); the remaining ~28% of the time is spent in EqualStringsPure, all of it in JSRope::copyCharsInternal().
Duplicate of this bug: 1515181
Don't we collect memory reports in the wild as part of telemetry? If so, this seems like a pretty serious bug, no?
(In reply to Randell Jesup [:jesup] from comment #2)
> See also profile 

This profile looks more like pathological performance on an unbalanced rope: we're spending a ton of time reallocating the node stack on our way through the rope.
(In reply to Bobby Holley (:bholley) from comment #3)
> Don't we collect memory reports in the wild as part of telemetry? If so,
> this seems like a pretty serious bug, no?

Those aren't full reports, so they shouldn't trigger this issue.
From experience, these sorts of strings occur in the wild; one example I found was multiple strings in the 10-60+K range, each built by appending single chars (or small sets of chars) one at a time to a string, leading to a huge number of rope nodes.  It would be interesting to investigate why the WebRTC stats strings trigger this, as that should be easily repeatable.

(I also suspect that a large number of these strings were created (perhaps 1/s?) but not released before the memory report was triggered.)

This profile shows we can still spend enormous amounts of time collecting reports; we need some way to limit the time spent de-duplicating strings for memory reports. (I've measured it taking 5-30+ minutes doing this.... effectively the process is locked)

Flags: needinfo?(sdetar)

Jan, do you have any thoughts on what the next steps with this bug are? Is this a GC related bug?

Flags: needinfo?(sdetar) → needinfo?(jdemooij)

Hmm, I'm wondering whether we could hash only a partial prefix here instead of hashing the complete string. In addition, it should be possible to optimise the string comparison for certain cases, such as comparing a JSRope against a JSLinearString. See the attached WIP patches for both ideas.

Do we have any reproducible test cases where collecting the memory report takes too long?

It's unrelated to GC. Memory reporting looks for duplicated strings so it can mention them, because that can be indicative of interesting problems, but I don't know how useful it is in practice. A reasonable option would be to remove it entirely, which would solve the performance problem completely.

Flags: needinfo?(jdemooij)

Well, I have used this feature to identify a number of site-specific memory leaks/problems: adsafeprotected generating hundreds of thousands of copies of 100-200-char strings; Facebook generating 500K copies of "<DIV"; etc. So I'd suggest that limiting the length we hash/compare is a good compromise. (One example that was tripping this up a lot: something on Facebook, or an ad on Facebook, was appending 1 char at a time to strings, always the same char ('A' IIRC), doing this 10-60K times, and doing it in a number of variables. This really slowed down the comparisons.)

Note that bug 1568923 introduces deduplication for nursery strings during tenuring, which will hopefully dramatically reduce the number of duplicates we see. On the other hand, the initial implementation that Krystal will be landing in that bug doesn't deduplicate ropes, to avoid exactly the sort of blowup cases that this bug is about. (And currently there is no intention to handle that case.)

What about keeping memory reporting duplicate detection but having it ignore ropes?

It looks like that would still catch some of the problems jesup mentioned in comment 13, though not all. But perhaps the others would be better uncovered via some sort of rope height reporting rather than duplicate detection?

Simply using the first N chars for dups (or first N if it's a rope) would be good enough I think. That shouldn't be that expensive.

Flags: needinfo?(sphink)