Currently, JS::HashTable::checkOverloaded() calls changeTableSize(0) if the table is overloaded and more than a quarter of its entries are removed (tombstones). Would it make more sense to call rehashTable() instead? AIUI that does the same thing, without the extra allocation.
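For anyone skimming, here is a rough sketch of the decision I mean; the names, load factor, and thresholds are made up for illustration and are not the actual HashTable code:

#include <cstdint>

enum RebuildChoice { NoRebuild, SameSizeRebuild, Grow };

// Hypothetical sketch of checkOverloaded()'s choice; illustrative only.
RebuildChoice checkOverloadedSketch(uint32_t liveCount, uint32_t removedCount,
                                    uint32_t capacity)
{
    uint32_t maxFill = capacity / 4 * 3;      // assumed maximum load factor
    if (liveCount + removedCount <= maxFill)
        return NoRebuild;                     // not overloaded

    // If a quarter or more of the slots are tombstones, rebuilding at the
    // same size (changeTableSize(0)) reclaims them without doubling.
    if (removedCount >= capacity / 4)
        return SameSizeRebuild;               // allocates a fresh array, copies live entries

    return Grow;                              // changeTableSize(1): double the table
}

The question is whether the SameSizeRebuild branch could use rehashTable() and skip the allocation.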
Created attachment 698482 [details] [diff] [review]
Patch, v1

I'll push this to try before asking for a review here.
This caused some mysterious crashes in Linux64 and Windows XP debug m1. These look real, but I'll push to try with a different base rev just to check. :shrug:
Try run for 6b7239513f70 is complete.
Detailed breakdown of the results available here:
    https://tbpl.mozilla.org/?tree=Try&rev=6b7239513f70
Results (out of 183 total builds):
    exception: 1
    success: 167
    warnings: 11
    failure: 4
Builds (or logs if builds failed) available at:
    http://firstname.lastname@example.org
Try run for 84e114fb5a42 is complete.
Detailed breakdown of the results available here:
    https://tbpl.mozilla.org/?tree=Try&rev=84e114fb5a42
Results (out of 181 total builds):
    exception: 4
    success: 161
    warnings: 11
    failure: 5
Builds (or logs if builds failed) available at:
    http://email@example.com
You really don't want to do this. As noted in the inline comments, |rehashTable| has some serious caveats and should only be used in contexts where it is completely impossible to malloc, like during a LastDitch GC (which is what it was created for).

If you /really/ want to do this, then you will at the very least need to implement a second pass to clear tombstones. You'll also want to extend testHashTable to pound on this routine a bit harder: something we need to do anyway before GGC lands.
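To make the trade-off concrete, here is a toy open-addressing sketch; it is not the real HashTable and everything in it is made up for illustration:

#include <cstdint>
#include <vector>

enum class SlotState : uint8_t { Free, Live, Removed /* tombstone */ };

struct Slot {
    SlotState state = SlotState::Free;
    uint32_t key = 0;
};

// Roughly what an allocating same-size rebuild (changeTableSize(0)) does:
// copy only the live entries into a fresh array.  Tombstones vanish as a
// side effect, at the cost of a malloc and a full copy.
std::vector<Slot> rebuildSameSize(const std::vector<Slot>& old)
{
    std::vector<Slot> fresh(old.size());            // the extra allocation
    for (const Slot& s : old) {
        if (s.state != SlotState::Live)
            continue;                               // Free/Removed slots are dropped
        size_t i = s.key % fresh.size();            // toy hash: identity
        while (fresh[i].state == SlotState::Live)   // linear probing
            i = (i + 1) % fresh.size();
        fresh[i] = s;
    }
    return fresh;
}

An in-place rehash has to shuffle entries within the same array while keeping every probe chain intact, which is why it leaves Removed markers behind and would need that second pass (plus extra per-entry state) to clear them.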
> You really don't want to do this.

Sounds good; thanks for enlightening me!
You are welcome! Sorry if that sounded overly discouraging; I didn't want to keep you from working on it if you think you can improve the inline re-keying situation. Just be warned that it is going to be tough going.

Steve came up with the existing algorithm after I spent a week or so proving that several other approaches we had come up with were theoretically impossible. What we have right now should basically work, but it does so by spamming tombstones. We're fairly certain that it is impossible to find a non-maximal one-pass tombstoning scheme with less than 2 bits of extra information per entry. As it is, the two-pass scheme we have in mind is non-minimal.
> I didn't want to keep you from working on it if you think you can improve the
> inline re-keying situation. Just be warned that it is going to be tough going.

Yeah; I was mostly looking for incidental easy wins (e.g. bug 826951). If you guys have thought about this and decided that it's hard, that's more than good enough for me.