When implementing compacting GC, I ran into a problem when rekeying a hashtable after having already moved a bunch of gcthings, including some used in the keys. Rekeying basically does:

    if (new_lookup.match(old_key))
        return;
    remove(current_entry);
    add(new_lookup);

This may crash on the first line if Lookup::match() dereferences any of its fields, but that could be easily fixed by either removing the check or adding a RekeyFrontNoReally() that omits it. The remove() is fine, since it uses the entry from front(). The add() may crash when you get a hash collision and do a comparison between the found entry and the new lookup, because Lookup::match() will dereference the found key's fields, which may have been moved while that entry has not yet been rekeyed.

Options:

1. Since rekey's add() is guaranteed not to find any matching entry (it just removed the only one), we could implement an addNew() or something that skips the match check.
2. We could effectively do the above externally to the hashtable by adding a boolean 'neverMatch' field to Lookup. Lookup::match() then checks the boolean first, before attempting to dereference anything.
3. Rekeying could be done as a single sweep through the table.

For now, I have implemented #2 in a couple of places that run into this problem, but it's a subtle requirement: if your Lookup::match() dereferences relocatable pointers, then it needs to implement the neverMatch thing. #1 seems like it might duplicate a bunch of code (it would probably be templatized, but more code would get generated). #3 is theoretically slower when moving only a small fraction of the table, which is somewhat likely with a compacting GC that compacts only underutilized or fragmented arenas.

Luke, what do you think?
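For illustration, here is a minimal sketch of the #2 approach. The Key and Lookup types are hypothetical stand-ins (not SpiderMonkey's actual policy classes); the point is just that match() tests the boolean before touching any possibly-moved pointer:

```cpp
#include <cstdint>
#include <cassert>

// Hypothetical key holding a relocatable pointer; after a compacting move
// the pointer may be stale until the entry is rekeyed.
struct Key {
    int* ptr;             // relocatable; do not dereference blindly
    uint32_t cachedHash;  // hash cached so hashing never dereferences
};

struct Lookup {
    Key key;
    bool neverMatch;  // option #2: set during rekeying

    uint32_t hash() const { return key.cachedHash; }

    // Check the boolean first, so a colliding entry whose key has been
    // moved (but not yet rekeyed) is never compared through its pointer.
    bool match(const Key& entry) const {
        if (neverMatch)
            return false;
        return *entry.ptr == *key.ptr;
    }
};
```

With neverMatch set, add()'s collision probe degenerates to "find a free slot" and no stale key is ever dereferenced.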
#1 was implemented at some point as putNewInfallible.