There is some ambiguity (bug 1447577) as to whether we actually hit the unaligned access trap on modern devices, but, assuming we do, this case currently redirects to a rather special stub generated by GenerateGenericMemoryAccessTrap() that doesn't provide stack traces. Now that all memory accesses have proper metadata (thanks to Lars in bug 1481171), I think we could instead use the normal trap stub (that does provide stack traces), keeping wasm::Trap::UnalignedAccess so that we properly report the failure. I think this should be a pure win: we get to remove the unaligned-access stub and get better diagnostics.
Pure code-removal. This means unaligned traps end up reported as "out of bounds", which I considered fixing (by piping various bits into HandleOutOfBounds()), but given that there is a real possibility that exactly 0 humans will ever hit this, I thought perhaps not.
Assignee: nobody → luke
Attachment #9000067 - Flags: review?(lhansen)
Comment on attachment 9000067 [details] [diff] [review] rm-unaligned-trap

Review of attachment 9000067 [details] [diff] [review]:
-----------------------------------------------------------------

On all platforms we currently must emit metadata for all loads and stores that we can't prove are in bounds, because we rely on the trap handler to deal with short-range heap overruns even on 32-bit systems. So the question is whether we emit metadata for loads and stores that are provably within bounds, i.e., proved for the index expression along a dominating path, or proved by the compiler because it's a constant address below the heap minimum. We haven't had to emit metadata in these cases for range checking, but now we have to do it for alignment checking, except for byte accesses.

At the moment we do emit the metadata, because all loads and stores go through wasmLoad and wasmStore and sibling functions, and they emit metadata. So we should be OK; we just need to be mindful of this (and Cranelift will need to be careful too).

The patch seems fine; it's good to be rid of this path. I'm not ecstatic about signaling a (hypothetical) implementation problem with unaligned accesses as OOB, since OOB can be swallowed by an indiscriminate exception handler, but I see the tradeoff and I'm not really objecting.
Attachment #9000067 - Flags: review?(lhansen) → review+
That's a good point w.r.t. metadata-elision opportunities in the future. Thinking about it some more, of the reasons we can remove bounds checks:
- constant indices: in this case we can also statically know the access is aligned
- constant offset from an already-checked access: we could also remove the metadata if the dominating access has an equal or greater access size

So that could let us keep most of the savings on ARM32 if we end up caring.
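To make the two elision conditions above concrete, here is a minimal sketch of the decision logic. All names (Access, canElideTrapMetadata, heapMinimum) are hypothetical and illustrative only, not actual SpiderMonkey APIs; this just encodes the reasoning from the comment:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical description of a wasm memory access; not a real
// SpiderMonkey type.
struct Access {
    bool hasConstantIndex;   // index is a compile-time constant
    uint64_t constantAddr;   // effective address, if constant
    uint32_t size;           // access size in bytes (1, 2, 4, 8)
    bool dominatedByCheck;   // a dominating access already checked this index
    uint32_t dominatorSize;  // access size of that dominating access
};

// Sketch: can both the bounds-check metadata and the alignment-check
// metadata be elided for this access?
bool canElideTrapMetadata(const Access& a, uint64_t heapMinimum) {
    // Constant index below the heap minimum: bounds are known statically,
    // and alignment can also be checked statically.
    if (a.hasConstantIndex && a.constantAddr + a.size <= heapMinimum) {
        // Byte accesses are trivially aligned.
        return a.size == 1 || a.constantAddr % a.size == 0;
    }
    // Dominated by a checked access of equal or greater size at the same
    // index: both the bounds check and the alignment check are implied.
    if (a.dominatedByCheck && a.dominatorSize >= a.size) {
        return true;
    }
    return false;
}
```

For example, a 4-byte load at constant address 8 (below the heap minimum) qualifies under the first rule, while the same load at address 6 does not, since it is statically misaligned.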
Pushed by firstname.lastname@example.org: https://hg.mozilla.org/integration/mozilla-inbound/rev/6129cb421151 Baldr: replace unaligned access trap with out of bounds trap (r=lth)