Currently, we check compartment()->needsBarrier() to decide whether to move values with HeapSlot::set, and we use memmove in most situations. That means post write barriers are not triggered in most cases. Bill, I thought I had already fixed up this method when I was running my generational verifier: is there some benchmark where not doing the memmove is slow? If so, I think it would be okay to replace the memmove with .init calls so that we trigger only the post barriers, since we've already ensured that pre barriers are not needed.
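To illustrate the distinction, here is a deliberately simplified model (the types and the store-buffer representation are hypothetical, not SpiderMonkey's real ones): the barrier hooks live inside HeapSlot::set/init, so a raw memmove over the slots bypasses them entirely, while an .init loop records every edge for the post barrier without running a pre barrier.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

struct Value { int payload; };

// Stand-in for the generational GC's store buffer (hypothetical).
static std::vector<Value*> postBarrierBuffer;

struct HeapSlot {
    Value value;
    // set() would run the pre barrier and the post barrier; here we
    // model only the post barrier: record the edge for the minor GC.
    void set(const Value& v) {
        value = v;
        postBarrierBuffer.push_back(&value);
    }
    // init() assumes no pre barrier is needed (e.g. the old contents
    // are dead), but still records the edge for the post barrier.
    void init(const Value& v) {
        value = v;
        postBarrierBuffer.push_back(&value);
    }
};

// Copy with memmove: fast, but no barrier ever fires.
void moveWithMemmove(HeapSlot* dst, const HeapSlot* src, size_t count) {
    memmove(dst, src, count * sizeof(HeapSlot));
}

// Copy with init(): slower per element, but every edge is recorded.
void moveWithInit(HeapSlot* dst, const HeapSlot* src, size_t count) {
    for (size_t i = 0; i < count; i++)
        dst[i].init(src[i].value);
}
```

This is only a sketch of the trade-off being discussed: the memmove path leaves the store buffer untouched, so a nursery collection has no way to find pointers written by the move.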
Sorry. I changed this because just doing the set calls was causing a big regression on Dromaeo. Anything is fine with me as long as it doesn't regress the benchmarks.
Ah, thanks! I'll try not to make it too ugly.
A for loop that copies elements one at a time runs at about half the speed of gcc's memmove. That is not acceptable. What we need here instead is a write buffer entry for whole-object invalidation.
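A minimal sketch of what whole-object invalidation could look like (all names here are hypothetical, chosen for illustration): instead of recording every slot written, the store buffer records the object once, and the minor GC re-scans all of its elements. Bulk moves can then keep using memmove and pay only a single buffer insertion.

```cpp
#include <cassert>
#include <cstring>
#include <set>

// Toy dense array holding plain ints in place of Values.
struct DenseArray {
    static const size_t kCapacity = 8;
    int elements[kCapacity];
};

// Whole-object store buffer: a set deduplicates repeated entries,
// so an object moved many times is still scanned only once.
struct StoreBuffer {
    std::set<DenseArray*> wholeObjects;
    void putWholeObject(DenseArray* obj) { wholeObjects.insert(obj); }
};

static StoreBuffer storeBuffer;

// Bulk move: one memmove plus one whole-object entry, regardless of
// how many elements moved.
void moveDenseArrayElements(DenseArray* obj, size_t dstStart,
                            size_t srcStart, size_t count) {
    memmove(obj->elements + dstStart, obj->elements + srcStart,
            count * sizeof(int));
    storeBuffer.putWholeObject(obj);  // minor GC re-scans obj's elements
}
```

The design point is that the barrier cost becomes O(1) per bulk operation instead of O(n) per element, at the price of the collector re-scanning the whole object at the next minor GC.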
Created attachment 612749 [details] [diff] [review] v0 This is a bit silly, but it will serve to isolate the write buffer from the object internals and to make sure it doesn't get lost in rebasing if Waldo's refactoring touches moveDenseArrayElements.
Comment on attachment 612749 [details] [diff] [review] v0 Please add to the comment saying that for now it's just a placeholder. Otherwise someone might delete it.
Good idea. Although it's now occurred to me that this is true of the entire post-barrier infrastructure at the moment. :-) https://hg.mozilla.org/integration/mozilla-inbound/rev/e8fb716946e0