Closed Bug 1442544 Opened 6 years ago Closed 2 years ago

[exploration] Wasm's OffsetGuardLimit and HugeMappedSize can potentially be much smaller


(Core :: JavaScript: WebAssembly, task, P3)
103 Branch
Tracking Status
firefox103 --- fixed


(Reporter: lth, Assigned: rhunt)


(Blocks 1 open bug)


(Whiteboard: [arm64:m4])


(1 file)

ARM64 is configured with WASM_HUGE_MEMORY but probably does not need a 6GB+64KB reservation.  This requires some in-depth investigation of the memory access instructions on the platform.  LDR, which is likely typical, essentially has two reg+imm forms: one with a signed nine-bit byte offset (used for pre-index/post-index, and unscaled), and one with an unsigned twelve-bit offset that is scaled by the access size (4 or 8 for 32- and 64-bit accesses), so the latter effectively gives a fourteen- or fifteen-bit unsigned byte offset.
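As a hedged illustration (this is not engine code), the reachable immediate ranges of the two LDR forms described above can be sketched like this; the field widths and scaling are from the ARMv8 A64 encoding of LDR (unsigned immediate) and LDUR:

```python
def fits_ldr_unsigned(offset: int, access_size: int) -> bool:
    """ARM64 LDR (unsigned immediate): a 12-bit field scaled by the
    access size, so the byte offset must be a multiple of that size."""
    return offset % access_size == 0 and 0 <= offset // access_size < 4096

def fits_ldur_signed(offset: int) -> bool:
    """ARM64 LDUR and pre/post-index forms: 9-bit signed, unscaled."""
    return -256 <= offset <= 255

# Largest byte offset reachable with an 8-byte access: 4095 * 8 = 32760.
```

Anything beyond these ranges needs the offset materialized in a register, which is why the reservation does not have to cover multi-gigabyte immediates on this platform.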

The main effect here would be to reduce pressure on the memory mappings; that might be beneficial as some operating systems have historically had fairly low per-process limits on the combined size of the mappings.

Other than that, a smaller OffsetGuardLimit would tend to move the code that folds in the offset out of the MacroAssembler / Assembler (where BaseIndex is resolved) and into the compiler (where we perform an add with an overflow check).  Large offsets will be handled in the MacroAssembler by moving a constant into a register and then performing a register+register load; there is no overflow check per se, but the arithmetic is performed in a 64-bit space, so this is OK.  Performance-wise it is probably more or less a wash - we could investigate.
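A minimal sketch of the trade-off above (names and the limit value are hypothetical, not SpiderMonkey's actual API): with a small guard limit, the compiler decides per access whether it can rely on the guard pages or must add the offset explicitly in 64-bit arithmetic and bounds-check the result:

```python
OFFSET_GUARD_LIMIT = 32 * 1024 * 1024  # hypothetical limit, for illustration

def lower_access(index: int, offset: int):
    """Return a pseudo-instruction for a wasm load at index + offset."""
    if offset < OFFSET_GUARD_LIMIT:
        # Small offset: fold it into the access and let the guard
        # pages after the 4GiB region catch any out-of-bounds hit.
        return ("guarded_load", index, offset)
    # Large offset: perform the add in a 64-bit space (cannot wrap for
    # 32-bit indices and offsets) and bounds-check the effective address.
    return ("bounds_checked_load", index + offset, 0)
```

The large-offset path costs an extra add and check, but as noted below, offsets that big are rare in real wasm content.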
Priority: -- → P3
Component: JavaScript Engine: JIT → JavaScript: WebAssembly
Whiteboard: [arm64:m4]
Depends on: 1590305
Blocks: 1590305
No longer depends on: 1590305

Discussed this with Luke a bit. Really, since we can handle any size of offset with a fallback path, we can reduce the offsetGuardLimit on all platforms if we want to reduce the VM footprint. There is nothing magic about 2^31, though it may be the largest offset representable in an x86 instruction, and so it may be "optimal" for that architecture if we don't consider VM footprint as a factor. In practice, and especially for wasm content, it is very likely that almost all offsets are quite small.

Hardware: ARM64 → All
Summary: Wasm's OffsetGuardLimit and HugeMappedSize can potentially be much smaller on ARM64 → Wasm's OffsetGuardLimit and HugeMappedSize can potentially be much smaller

Two more thoughts from a discussion with Lars today.

  1. We've seen offsets as large as 1MiB in hot paths. We need to thoroughly investigate a reasonable guard page size, not just assume one.
  2. If we lower the guard limit, our load/store instructions will only have to deal with smaller offsets (offsets over the guard limit will be folded into the base with an explicit bounds check), and we may be able to improve codegen on some platforms by assuming this.
See Also: → 1710012
See Also: → 1615988
Type: enhancement → task
Summary: Wasm's OffsetGuardLimit and HugeMappedSize can potentially be much smaller → [exploration] Wasm's OffsetGuardLimit and HugeMappedSize can potentially be much smaller

We can use large v-mem reservations to omit bounds checks for 32-bit wasm
code. This requires at least 4GiB to catch any possible dynamic index
given to a wasm memory access instruction. Memory accesses can also contain
a static 'offset' immediate, which is added to the dynamic index
to get the effective address. To handle those, we reserve 'offset' guard
pages after the 4GiB main region. Any offset larger than the
offset guard region forces the access to have a bounds check.

Our current guard reservation is 2GiB, which is unreasonably large. After
profiling our internal wasm corpus, the largest offset found was
~20MiB, with most being <1MiB. 32MiB should therefore be more than
enough, and reduces our vmem reservations by 33%.

Assignee: nobody → rhunt
Pushed by
wasm: Drop offset guard reservation from 2GiB to 32MiB. r=jseward
Closed: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → 103 Branch