Closed Bug 407454 Opened 17 years ago Closed 13 years ago

cycle collector could remember previous size(s) of graph for mPtrToNodeMap

Categories: Core :: XPCOM, enhancement, P5
Status: RESOLVED WONTFIX
People: Reporter: dbaron, Unassigned
Keywords: perf

I was just discussing the allocations of mPtrToNodeMap with stuart on IRC.  We currently initialize the table with a size of 32768 entries; in many cases it probably grows a few binary orders of magnitude beyond that.  If the cycle collector remembered the size from the previous cycle collection (or the previous few cycle collections), we could avoid the repeated grows each time around, which cost performance and may cause fragmentation.
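The proposal could be sketched roughly as follows. This is a minimal hypothetical illustration, not actual XPCOM code; `GraphSizeHint` and its method names are made up for the example:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical sketch of the proposal: remember how large the
// mPtrToNodeMap table ended up after the previous cycle collection and
// use that as the initial capacity next time, so a large graph does not
// pay for repeated doublings from the 32768-entry default.
class GraphSizeHint {
public:
    static constexpr std::size_t kDefaultSize = 32768;  // current fixed initial size

    // Initial capacity for the next collection: never below the default,
    // and at least as large as the previous collection needed.
    std::size_t InitialSize() const {
        return std::max(kDefaultSize, mLastSize);
    }

    // Called at the end of a collection with the table's final size.
    void RecordFinalSize(std::size_t aEntries) {
        mLastSize = aEntries;
    }

private:
    std::size_t mLastSize = 0;
};
```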
Do we want to take another look at this?
Flags: blocking1.9?
Assignee: nobody → moz
Moving to wanted list - would take it for sure
Flags: wanted1.9+
Flags: blocking1.9?
Flags: blocking1.9-
Severity: normal → minor
Status: NEW → ASSIGNED
Priority: -- → P3
Documenting relationship to Bug 410678.
Increasing priority to P2 to better sync with wanted1.9+ flag.
Priority: P3 → P2
dbaron: can you (or anyone) suggest a workload which stresses the mPtrToNodeMap hashtable growth?  (Alternately, you could suggest a workload which is representative of typical growths and sizes for it.)
Opening more (or fewer) windows, or bigger (or smaller) documents.  We'll run cycle collection a bit after closing each window / leaving each document, if you wait for it, and some other times.
Opened bug 422125 for the (slightly related) tuning of pldhash users' hash table sizes.
I have devised my own "typical use" test based on dbaron's comment #6, and collected some stats.

Firstly, note that fragmentation is not at all an issue with the mPtrToNodeMap hash table.  The table begins at 256KB (= entrySize × 32K entries).  An allocation of that size does not cause fragmentation in the process's address space.

Secondly, note that the hash table only ever grows once or twice, to a maximum of 1MB (in my test).  It does not shrink.  It always grows by a factor of 2, so the other allocation sizes are 512KB and 1024KB.  These are ideal allocation sizes for avoiding fragmentation.
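The sizes in the two points above work out as follows. Note that the 8-byte entry size is inferred here from 256KB / 32768 entries; the comment does not state it explicitly:

```cpp
#include <cstddef>

// Allocation sizes implied by the comment above. The per-entry size of
// 8 bytes is an inference (256KB / 32768 entries), not from the source.
constexpr std::size_t kEntryBytes     = 8;
constexpr std::size_t kInitialEntries = 32768;
constexpr std::size_t kInitialBytes   = kEntryBytes * kInitialEntries;  // 256KB

// Growth is always by a factor of 2, so the only other sizes observed in
// the test were one and two doublings of the initial allocation:
constexpr std::size_t kAfterOneGrow  = kInitialBytes * 2;  // 512KB
constexpr std::size_t kAfterTwoGrows = kInitialBytes * 4;  // 1MB

static_assert(kInitialBytes == 256 * 1024, "initial table is 256KB");
static_assert(kAfterTwoGrows == 1024 * 1024, "maximum observed is 1MB");
```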

Thirdly, in my test, using the prior size of the hash table as the guess for the size required on the next use would actually be a bad guess: the table would end up unnecessarily large.  This is not necessarily true in the general case, though.
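One way to keep a size hint while hedging against the overshoot described above is to let the hint decay whenever it goes unused. This is a hypothetical sketch, not one of the ideas actually proposed in the bug:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical variant: grow the hint immediately when a collection
// needs more, but halve it whenever a collection needs less, so one
// unusually large graph does not pin the table at its peak size forever.
class DecayingSizeHint {
public:
    static constexpr std::size_t kDefaultSize = 32768;

    std::size_t InitialSize() const {
        return std::max(kDefaultSize, mHint);
    }

    void RecordFinalSize(std::size_t aEntries) {
        // Track growth exactly; decay by half when the hint was too big.
        mHint = (aEntries >= mHint) ? aEntries : mHint / 2;
    }

private:
    std::size_t mHint = 0;
};
```

Compared with remembering the last size verbatim, this converges back down to the default after a few small collections instead of staying oversized.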

So fragmentation is not a worry for this particular instance of pldhash tables, and I am therefore reducing the priority of this bug: without fragmentation, the opportunity for improvement here is slim (though I have 2 ideas for that).

I suggest that the "wanted-1.9" flag also be removed.
Severity: minor → enhancement
Priority: P2 → P5
Taking it off the wanted list then.
Flags: wanted1.9+ → wanted1.9-
WONTFIX per comment #8?
With the new CC telemetry code I put in (that should be in Fx7) we can take a look at some real numbers for the size of the graph.  But yeah, if anything the CC graphs are going to be smaller than they were in 2008, due to the marked JS GC optimization.
Assignee: moz → nobody
My original analysis would have merited a WONTFIX. Giving up ownership.
Thanks for the analysis, Robin.
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → WONTFIX