Bug 1153865 Opened 9 years ago Updated 2 years ago

Use a segmented hash table in the CCGraph

Categories: Core :: XPCOM, defect
Status: NEW
People: Reporter: away, Unassigned
Whiteboard: [MemShrink:P2]
Description•9 years ago

(In reply to David Major [:dmajor] from bug 1144649 comment #14)
> Allocations over 1 meg really should be expected to fail, and the larger
> the size, the more we should care. Large contiguous regions are hard to
> find on Windows. The users who fail at the 4M and 8M sizes likely could
> have kept going for a while before reaching hopeless OOM|small territory.
>
> So I think this is worth doing something about. Either by gracefully
> handling failure or by playing tricks to keep the individual allocations
> smaller.

(In reply to Nathan Froyd [:froydnj] [:nfroyd] from bug 1144649 comment #16)
> Maybe we could make (template?) a variant of PLDHashTable that uses
> SegmentedVector or similar underneath, so growing the hashtable would make
> lots of smaller allocations rather than one large one?
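[Editor's note: to make the suggestion concrete, here is a minimal sketch of the segmented-entry-store idea, assuming a made-up SegmentedHashSet of word-sized keys. This is not PLDHashTable's real API, and a production variant would also need tombstones, arbitrary entry sizes, and hash mixing; the point here is the allocation pattern, where growth only ever requests segment-sized blocks.]

#include <cstddef>
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

class SegmentedHashSet {
  static constexpr size_t kSegmentSlots = 1 << 12;  // 4096 slots, 32 KB per segment
  static constexpr uint64_t kEmpty = 0;             // key 0 reserved as "empty" in this sketch

  // One modest heap allocation per segment; total slot count stays a power
  // of two so probing can use masking instead of division.
  std::vector<std::unique_ptr<uint64_t[]>> mSegments;
  size_t mCapacity = 0;
  size_t mCount = 0;

  uint64_t& SlotAt(size_t aIndex) {
    // Two-level indexing: pick the segment, then the slot within it.
    return mSegments[aIndex / kSegmentSlots][aIndex % kSegmentSlots];
  }

  void AddSegment() {
    // make_unique value-initializes the array, so new slots start kEmpty.
    mSegments.push_back(std::make_unique<uint64_t[]>(kSegmentSlots));
    mCapacity += kSegmentSlots;
  }

  void Grow() {
    // Double capacity and rehash, exactly as a flat table would, but with
    // only segment-sized allocations along the way.
    auto old = std::move(mSegments);
    size_t oldCapacity = mCapacity;
    mSegments.clear();
    mCapacity = 0;
    mCount = 0;
    while (mCapacity < 2 * oldCapacity) {
      AddSegment();
    }
    for (auto& seg : old) {
      for (size_t i = 0; i < kSegmentSlots; ++i) {
        if (seg[i] != kEmpty) {
          Insert(seg[i]);
        }
      }
    }
  }

 public:
  SegmentedHashSet() { AddSegment(); }

  void Insert(uint64_t aKey) {
    if (4 * mCount >= 3 * mCapacity) {  // grow at 75% load
      Grow();
    }
    size_t mask = mCapacity - 1;
    size_t idx = std::hash<uint64_t>{}(aKey) & mask;
    while (SlotAt(idx) != kEmpty) {
      if (SlotAt(idx) == aKey) {
        return;  // already present
      }
      idx = (idx + 1) & mask;  // linear probing
    }
    SlotAt(idx) = aKey;
    ++mCount;
  }

  bool Contains(uint64_t aKey) {
    size_t mask = mCapacity - 1;
    size_t idx = std::hash<uint64_t>{}(aKey) & mask;
    while (SlotAt(idx) != kEmpty) {
      if (SlotAt(idx) == aKey) {
        return true;
      }
      idx = (idx + 1) & mask;
    }
    return false;
  }
};

With 4096-slot segments, even a table holding millions of entries never asks the allocator for more than 32 KB at a time, which sidesteps the contiguous-region problem described above.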
Updated•9 years ago
Summary: Use a segmented data structure in the CCGraph → Use a segmented hash table in the CCGraph
Comment 1•9 years ago
It would be cool if all of our hash tables could switch to the segmented strategy if they get too big...
Comment 2•9 years ago
Well, if we can come up with a way to do it without any additional performance cost in the non-segmented case.
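[Editor's note: one way that constraint might be met, sketched with hypothetical names (HybridStore, mSegmented; none of this is existing PLDHashTable code): keep today's single flat allocation below a size threshold, so the common case pays only one well-predicted branch, and switch to segments past it.]

#include <cstddef>
#include <cstdint>
#include <vector>

class HybridStore {
  static constexpr size_t kSegmentSlots = 1 << 12;

  std::vector<uint64_t> mFlat;                   // used while the table is small
  std::vector<std::vector<uint64_t>> mSegments;  // used once it grows large
  bool mSegmented = false;                       // flipped during a large Grow()

 public:
  uint64_t& SlotAt(size_t aIndex) {
    if (!mSegmented) {
      // Non-segmented case: the same flat entry store as today; the only
      // added cost is this branch, which predicts perfectly for small tables.
      return mFlat[aIndex];
    }
    // Segmented case: one extra dependent load to locate the segment.
    return mSegments[aIndex / kSegmentSlots][aIndex % kSegmentSlots];
  }
};

A real implementation would flip mSegmented inside its grow path once the next capacity exceeds one segment, leaving tables that never get big entirely unaffected.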
Updated•9 years ago
Whiteboard: [MemShrink] → [MemShrink:P2]
Comment 3•9 years ago
Of course with segmented storage you pay a cache-locality price, so you will probably want to measure CC times if you end up doing anything here.
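[Editor's note: that cost is cheap to estimate before touching the CC itself. A throwaway micro-benchmark along these lines (illustrative only; sizes and results are machine-dependent) times random probes through flat versus two-level storage, roughly the access pattern a hash lookup sees.]

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <memory>
#include <random>
#include <vector>

int main() {
  constexpr size_t kSlots = size_t(1) << 22;     // ~4M slots, 32 MB of uint64_t
  constexpr size_t kSegSlots = size_t(1) << 12;  // 4096 slots per segment
  constexpr size_t kProbes = size_t(1) << 22;

  std::vector<uint64_t> flat(kSlots, 1);
  std::vector<std::unique_ptr<uint64_t[]>> segs;
  for (size_t i = 0; i < kSlots / kSegSlots; ++i) {
    segs.push_back(std::make_unique<uint64_t[]>(kSegSlots));
  }

  // Random probe pattern: the worst case for both layouts.
  std::mt19937_64 rng(42);
  std::vector<size_t> idx(kProbes);
  for (auto& i : idx) {
    i = rng() % kSlots;
  }

  uint64_t sum = 0;  // checksum keeps the loops from being optimized away
  auto t0 = std::chrono::steady_clock::now();
  for (size_t i : idx) sum += flat[i];
  auto t1 = std::chrono::steady_clock::now();
  for (size_t i : idx) sum += segs[i / kSegSlots][i % kSegSlots];
  auto t2 = std::chrono::steady_clock::now();

  using ms = std::chrono::milliseconds;
  printf("flat: %lld ms, segmented: %lld ms (checksum %llu)\n",
         (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
         (long long)std::chrono::duration_cast<ms>(t2 - t1).count(),
         (unsigned long long)sum);
  return 0;
}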
Comment 4•9 years ago
Yeah. When I last measured it, hash table lookups were like 20% of the CC time. I think it is less of an issue now that CC is incremental, but it is something we'd want to check, for sure.
Updated•2 years ago
Severity: normal → S3