Closed Bug 985694 Opened 10 years ago Closed 9 years ago

SharedArrayBuffers should eliminate the extra header

Categories

(Core :: JavaScript Engine, defect)

Priority: Not set
Severity: normal

Tracking

Status: RESOLVED WONTFIX

People

(Reporter: sfink, Unassigned)

References

(Blocks 1 open bug)

Details

With the landing of bug 979480, SharedArrayBuffer should no longer need the SharedArrayRawBuffer. I think. I don't know if having an atomic refcount Value is possible or not. But if possible, it seems like it could drastically simplify things.
Blocks: 1045841
Blocks: shared-array-buffer
No longer blocks: 1045841
Currently the SharedArrayRawBuffer header has three fields:

- the reference count
- the length of the buffer
- the list of futexes waiting on a location within the buffer

Those fields are shared among the agents that access the shared memory, and must therefore live in shared memory themselves.  If they're not stored in the SharedArrayRawBuffer, they will have to be stored in some other shared object.  That object probably can't be a SharedArrayBuffer, since many SharedArrayBuffer objects can point to the same shared memory (even within the same worker).
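For illustration only, here is a minimal sketch of a header like the one described above, kept at the front of the shared mapping; the type and field names are assumptions made for this sketch, not the actual definitions in vm/SharedArrayObject.h:

    // Hypothetical layout sketch: a shared header that precedes the buffer data
    // in the same mapping.  Not SpiderMonkey's actual SharedArrayRawBuffer.
    #include <atomic>
    #include <cstdint>

    struct FutexWaiter;  // per-waiter record, assumed to live in shared memory

    struct SharedHeaderSketch {
        std::atomic<uint32_t> refcount;  // agents currently referencing the mapping
        uint32_t length;                 // byte length of the user-visible buffer
        FutexWaiter* waiters;            // futexes waiting on a location in the buffer
        // The buffer's data pages would follow this header in the same mapping.
    };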

I'll close this for now; the current structure is serviceable and pretty flexible.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX
Might it be possible to move this header to the end of the buffer, to allow the buffer to be placed at absolute zero in the address space?  If only for the purpose of exploring the potential of placing the buffer at zero.
Flags: needinfo?(lhansen)
(In reply to Douglas Crosher [:dougc] from comment #2)
> Might it be possible to move this header to the end of the buffer, to allow
> the buffer to be placed at absolute zero in the address space?  If only for
> the purpose of exploring the potential of placing the buffer at zero.

Yes, I think so.  I don't know how much good that'll do, since the header needs to remain writable (the list of futex waiters will be hot), but I've no objections to it.

If we really want to do some address space magic with this buffer for asm.js then that might favor splitting the header out as a separate object.  But I think that'll still take us further away from where this bug suggested we should be heading.  (That's not an argument about anything much, just an observation.)
Flags: needinfo?(lhansen)
Also, array-access optimizations are being explored that place protected guard zones at the end, and potentially also at the start, of the heap; having a header at either end of the buffer would block these.
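As a rough illustration of the guard-zone idea (not SpiderMonkey code; the flags and sizes are assumptions for this sketch), one could reserve inaccessible pages on either side of the data so that small out-of-bounds accesses fault instead of silently touching a header:

    // Sketch: allocate a data region with PROT_NONE guard pages at both ends.
    // Purely illustrative; error handling and unmapping are omitted.
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstddef>

    void* AllocateWithGuards(size_t dataBytes) {
        size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
        size_t dataSize = ((dataBytes + page - 1) / page) * page;  // round up
        size_t total = page + dataSize + page;                     // guard + data + guard

        // Reserve the whole range with no access rights...
        void* base = mmap(nullptr, total, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return nullptr;

        // ...then enable access only for the data pages in the middle.
        char* data = static_cast<char*>(base) + page;
        mprotect(data, dataSize, PROT_READ | PROT_WRITE);
        return data;  // the leading and trailing pages remain inaccessible
    }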
Storing the header elsewhere, so that an array of 2^n elements of 2^m bytes each takes up exactly 2^(n+m) bytes, does generally help reduce wasted memory. Our malloc allocator, jemalloc, allocates in size classes that are mostly powers of 2 (and then in integer multiples of the chunk size, 1MB on desktop), so allocating 2^n + k bytes is generally a bad idea.
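A rough worked example, assuming the 1MB chunk size and power-of-2/chunk-multiple size classes described above (the 32-byte header size is just an illustrative figure):

    request 2^20 bytes (exactly 1MB)      -> fits one 1MB chunk, essentially no waste
    request 2^20 + 32 bytes (data+header) -> rounds up to the next multiple (2MB),
                                             wasting almost a full chunk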
The SharedArrayRawBuffer is mmapped directly, and we always allocate an integral number of pages, so jemalloc's behavior might not play much of a role here (maybe apart from making use of the holes when those arrays are freed).  Also, the general expectation is that there will be "few" SharedArrayBuffer objects in use at any one time.  The larger concern is definitely optimizations for asm.js.

(Dougc alludes to bug 1056027, a note about which appears in the comments on vm/SharedArrayObject.h.)
Yeah, you won't be affected by jemalloc then, although by 'spilling over' into another chunk, you might make it harder for jemalloc to get aligned chunks.

jemalloc currently does the following:
(1) mmap a chunk-sized region. If it happens to be chunk-aligned, return it. Otherwise munmap it and
(2) mmap a region of |chunksize + alignment - pagesize| bytes, munmap the non-aligned portions of it, and return the aligned region.

This works well until you get an unaligned hole of fewer than |chunksize + alignment - pagesize| bytes. Then (1) will always fail, and it'll start leaving aligned, chunk-sized holes behind. The GC uses similar logic, but I've added some fallbacks to make it more robust; the same approach is attached as a patch to bug 1005844, but I don't know when it will land.

We currently use a fork of jemalloc1 to which I added a chunk cache, which should mitigate the hole creation, but that will (at least initially) go away when we finally switch to jemalloc3.
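A minimal sketch of that two-step strategy, assuming POSIX mmap/munmap and a power-of-2 alignment that is a multiple of the page size; this only illustrates the approach, it is not jemalloc's actual code:

    // Sketch: try a plain mapping first, then over-allocate and trim to alignment.
    // Illustrative only; real allocators add retries, caching, and error checks.
    #include <sys/mman.h>
    #include <cstdint>
    #include <cstddef>

    static void* MapPages(size_t bytes) {
        void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? nullptr : p;
    }

    void* AllocAlignedChunk(size_t chunksize, size_t alignment, size_t pagesize) {
        // (1) Map a chunk-sized region and hope it comes back aligned.
        char* p = static_cast<char*>(MapPages(chunksize));
        if (!p)
            return nullptr;
        if ((reinterpret_cast<uintptr_t>(p) & (alignment - 1)) == 0)
            return p;
        munmap(p, chunksize);

        // (2) Over-allocate, then unmap the unaligned leading and trailing parts.
        size_t mapSize = chunksize + alignment - pagesize;
        p = static_cast<char*>(MapPages(mapSize));
        if (!p)
            return nullptr;
        uintptr_t addr = reinterpret_cast<uintptr_t>(p);
        uintptr_t aligned = (addr + alignment - 1) & ~(uintptr_t(alignment) - 1);
        size_t leading = aligned - addr;
        size_t trailing = mapSize - leading - chunksize;
        if (leading)
            munmap(p, leading);
        if (trailing)
            munmap(reinterpret_cast<char*>(aligned) + chunksize, trailing);
        return reinterpret_cast<void*>(aligned);
    }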
Also see bug 986981, which looks like it is intended to land soon. Extensions of that to handle small negative offsets by using a protected guard zone would be blocked by a header immediately before the buffer, but there is still a lot to explore.