Closed Bug 1194544 Opened 5 years ago Closed 4 years ago

[meta] Convert all rekeyed hash tables to stable hashing

Categories

(Core :: JavaScript: GC, defect)

defect
Not set

Tracking

()

RESOLVED FIXED
Tracking Status
firefox43 --- affected

People

(Reporter: terrence, Assigned: terrence)

References

(Blocks 2 open bugs)

Details

Attachments

(2 files)

After two days of intense hacking, I've got a patch that more or less passes all jit-tests and that I think finally takes approximately the correct basic approach.

Some observations:
------------------
  * We cannot eagerly store a unique hashcode for each object because we don't have the space for it. This means that we need to allocate that space dynamically (see the sketch after this list).

  * The space we allocate for hash codes cannot (easily) live in GC memory because doing so would (a) force all hashing operations to become GC points and, arguably worse, (b) require a way to get the GCRuntime down into the table innards. These are probably doable, but would be very hard to retrofit quickly or elegantly.

  * The only other option is to attach manually managed hashcode storage space directly to GC structures.

  * Once we go this route, most of the hard API problems go away and we are left with the simple problem of applying our CS degree, for once.
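
A minimal sketch of the lazy, manually managed storage idea described above, assuming hypothetical names (Cell, CellHashCodes, getOrCreateHashCode) and a std::unordered_map as a stand-in backing store; the approach below moves this storage into the chunk trailer instead:

#include <stdint.h>
#include <unordered_map>

using HashNumber = uint32_t;

struct Cell;  // stand-in for a GC thing

class CellHashCodes {
    // Manually managed (non-GC) storage, so hashing never becomes a GC point.
    std::unordered_map<const Cell*, HashNumber> codes_;
    HashNumber next_ = 1;

  public:
    // Allocate a code only when something actually hashes the cell, so cells
    // that never enter a hash table pay nothing.
    HashNumber getOrCreateHashCode(const Cell* cell) {
        auto p = codes_.find(cell);
        if (p != codes_.end())
            return p->second;
        HashNumber code = next_++;
        codes_.emplace(cell, code);
        return code;
    }

    // Called at finalization so a new cell allocated at the same address does
    // not inherit the previous cell's code.
    void releaseHashCode(const Cell* cell) { codes_.erase(cell); }
};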

The approach:
-------------
  * Add an assertion to PointerHasher that we never hash a JSObject*, or any derivative. Since this class is external, we need to supply a list of forbidden classes. My attempts to add a template with internal linkage failed utterly, so I eventually just copied a list of all object subclasses into the HashTable header. This is supremely ugly, but on a scale of 0 to rekeying, it's still approximately 0. It's at least DEBUG only.

  * My initial plan was to allocate eagerly in the nursery for simplicity, but it occurred to me that if we do generate a hashcode, we have to keep the thing live, so this space would, by definition, be a huge waste. It then occurred to me that if I put the hash codes somewhere in the chunk trailer, we can share most of the code anyway. So that's where I put it.

  * The second step is to add a new Hasher that is usable with JSObject and subclasses. Since we need the keys to automatically update when moved by the nursery, this embeds a RelocatablePtr as the Lookup. This nicely forces us to update all the users at the same time. And not so nicely causes all sorts of fiddly conversion errors. I'll comment more on the patch.

  * The new Hasher dispatches to chunk()->info.trailer.hashCodes, which is currently just a large malloced array with ChunkSize / CellSize entries (see the sketch after this list). I still need to throw CS at this until it's not an atrocity: that should be the easiest part, so I'm leaving it until everything else is ready.

  * Background finalization: still an absolute nightmare. I don't /think/ it's possible to race on the current word-map implementation because nothing can use the backing arena until it comes off the background thread, and that operation is gated. Still, I think we'll eventually want a threadsafe queue of bitmaps or something equally ugly. More CS needed still.
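
A rough sketch of the shape of the stable hasher described above. The real types (RelocatablePtr, Chunk, ChunkTrailer) and the chunk layout differ; ChunkSize, CellSize and the index math here are assumptions for illustration only:

#include <stddef.h>
#include <stdint.h>

using HashNumber = uint32_t;

static const size_t ChunkSize = size_t(1) << 20;  // assumed value, illustration only
static const size_t CellSize  = size_t(1) << 4;   // assumed value, illustration only

struct ChunkTrailer {
    // One slot per cell-sized unit in the chunk; the real array is malloced
    // and this sketch assumes it has already been allocated.
    HashNumber* hashCodes;
};

struct Chunk {
    ChunkTrailer trailer;
    static Chunk* fromCell(const void* cell) {
        return reinterpret_cast<Chunk*>(
            reinterpret_cast<uintptr_t>(cell) & ~uintptr_t(ChunkSize - 1));
    }
};

// The hash comes from per-chunk side storage rather than the object's address,
// so it stays stable when the GC moves the object and no rekeying is needed.
// The real patch uses a RelocatablePtr for the Lookup so keys are updated when
// the nursery moves them; a raw pointer stands in here.
template <typename T>
struct StableCellHasher {
    using Lookup = T*;
    static HashNumber hash(const Lookup& l) {
        Chunk* chunk = Chunk::fromCell(l);
        size_t index = (reinterpret_cast<uintptr_t>(l) & (ChunkSize - 1)) / CellSize;
        return chunk->trailer.hashCodes[index];
    }
    static bool match(T* const& key, const Lookup& l) { return key == l; }
};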
Still failing in 2 jit-tests on the WeakGlobalMap that the Debugger holds, but otherwise passing all tests. Still needs a ton of polish and performance work.
Could jimb and I use these unique hash codes to diff heap snapshots? We eventually want to do things like report which objects from the first snapshot were reclaimed sometime before the second snapshot was taken, or which objects are present in the second snapshot that were not present in the first, etc.
Depends on: 1194832
(In reply to Nick Fitzgerald [:fitzgen][:nf] from comment #2)
> Could jimb and I use these unique hash codes to diff heap snapshots? We
> eventually want to do things like report which objects from the first
> snapshot were reclaimed sometime before the second snapshot was taken, or
> which objects are present in the second snapshot that were not present in
> the first, etc.

Yes. You can request and get a unique id (uint64_t) or unique hash code (HashCode == uint32_t) for any GC thing. Please note, however, that these will add a substantial amount of overhead that we have no mechanism to drop later, so this is probably just a single part of a more robust solution. In particular, I think we could just lock out compaction while you have a heap snapshot active and use the address for everything that does not already have a uid attached to it, in order to save space. We can get into more detail via IRC.
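
A hypothetical usage sketch of the id-based snapshot diff described above; GetUniqueId, its backing table, and the helper names are stand-ins, not a confirmed API:

#include <stdint.h>
#include <map>
#include <set>
#include <vector>

struct Cell;  // stand-in for a GC thing

// Stand-in for "request a unique id": allocate one on first use.
static std::map<const Cell*, uint64_t> idTable;
static uint64_t GetUniqueId(const Cell* cell) {
    auto p = idTable.emplace(cell, uint64_t(idTable.size() + 1));
    return p.first->second;
}

// Record the ids of everything reachable in a snapshot.
static std::set<uint64_t> SnapshotIds(const std::vector<const Cell*>& liveCells) {
    std::set<uint64_t> ids;
    for (const Cell* cell : liveCells)
        ids.insert(GetUniqueId(cell));
    return ids;
}

// Ids present in the first snapshot but absent from the second correspond to
// objects that were reclaimed (or became unreachable) between the snapshots.
static std::vector<uint64_t> ReclaimedBetween(const std::set<uint64_t>& first,
                                              const std::set<uint64_t>& second) {
    std::vector<uint64_t> gone;
    for (uint64_t id : first)
        if (!second.count(id))
            gone.push_back(id);
    return gone;
}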
If we want to use any sort of intelligent data structure for the uid storage, this gets tremendously complicated by background sweeping. Specifically, we need to be able to release the associated unique id when a cell dies so that when we allocate a new cell in the same slot it does not inherit the unique id of the previous cell. When sweeping an arena off the main thread, even though that arena cannot be allocated from, adjacent cells from both the arena and the chunk can be in active use by the mutator. This includes insertion of new hash codes on the chunk, and even within the arena we are busily sweeping. The mutator cannot, however, remove or replace the uids associated with anything in the background-swept arenas, so there is that at least.
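One possible shape for the threadsafe hand-off hinted at here, purely as an illustration: the background sweeper only records which cells died, and the main thread releases their ids at a safe point, so the id table itself is only ever touched from one thread. The names and the std::map backing are assumptions:

#include <stdint.h>
#include <map>
#include <mutex>
#include <vector>

struct Cell;  // stand-in for a GC thing

// Illustrative main-thread-only id table (not the real data structure).
static std::map<const Cell*, uint64_t> idTable;

class PendingIdReleases {
    std::mutex lock_;
    std::vector<const Cell*> dead_;

  public:
    // Called from the background sweep thread for each finalized cell; the id
    // table itself is never touched off the main thread.
    void noteDeadCell(const Cell* cell) {
        std::lock_guard<std::mutex> guard(lock_);
        dead_.push_back(cell);
    }

    // Called on the main thread at a safe point (e.g. the start of the next GC),
    // so a new cell allocated in the same slot cannot inherit the old id.
    void drain() {
        std::vector<const Cell*> dead;
        {
            std::lock_guard<std::mutex> guard(lock_);
            dead.swap(dead_);
        }
        for (const Cell* cell : dead)
            idTable.erase(cell);
    }
};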
After considerable thought, I think we can actually do the transition away from rekeying incrementally. If an object only ever lands in rekeyed hashtables, it will never get a uid and everything continues working as at present. Once an object lands in a uid-based table, it will get a uid, but its inclusion in rekeyed tables should continue to work fine. No cooperation is needed at the table level once the hash policy is swapped out.

I'll drop a patch in here, once I've got it split out, that will fail the build if an object passes through PointerHasher. This should let us easily detect tables that need to be fixed. It's pretty ugly, but maybe we can still land it in some form once everything is converted.
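
A simplified stand-in for what such a build-failure detector could look like; the class names in the forbidden list and the hash function itself are illustrative only, not the real patch in the HashTable header:

#include <stdint.h>
#include <type_traits>

class JSObject;
class JSFunction;   // example entries; the real patch lists every object subclass
class ArrayObject;

template <typename T>
struct IsForbiddenPointerHasherKey { static const bool value = false; };
template <> struct IsForbiddenPointerHasherKey<JSObject>    { static const bool value = true; };
template <> struct IsForbiddenPointerHasherKey<JSFunction>  { static const bool value = true; };
template <> struct IsForbiddenPointerHasherKey<ArrayObject> { static const bool value = true; };

// Simplified stand-in for PointerHasher: hashing a listed GC type by address
// fails the build, because the address changes when the GC moves the object.
template <typename Key>
struct PointerHasher {
    static_assert(!IsForbiddenPointerHasherKey<
                      typename std::remove_pointer<Key>::type>::value,
                  "GC object pointers need a stable hasher, not PointerHasher");
    using Lookup = Key;
    static uint32_t hash(const Lookup& l) {
        return uint32_t(reinterpret_cast<uintptr_t>(l) >> 3);
    }
    static bool match(const Key& k, const Lookup& l) { return k == l; }
};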
Depends on: 1196847
Depends on: 1196848
Depends on: 1199822
Depends on: 1200732
Depends on: 1200734
Summary: Give objects unique hash codes → [meta] Convert all rekeyed hash tables to stable hashing
Depends on: 1223490
Depends on: 1223519
Depends on: 1223853
Depends on: 1223863
Depends on: 1223918
Depends on: 1224038
Depends on: 1224041
Depends on: 1224044
Depends on: 1224048
Depends on: 1224050
Depends on: 1224347
Blocks: 1224352
This is the patch I've been using to detect incorrect hasher usage by causing build failures. This isn't suitable for checkin, obviously, but useful nonetheless.
Depends on: 1224404
Depends on: 1225233
Depends on: 1225237
Depends on: 1225577
Depends on: 1225650
Depends on: 1226687
Depends on: 1226732
Depends on: 1226801
Depends on: 1239494
Depends on: 1239515
Depends on: 1239751
Depends on: 1239754
Depends on: 1239869
All blocking bugs are fixed. We should be able to move rekeying out of the HashMap as soon as the CCW nursery bug is finished.
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED