WebGL 2 does not work with TypedArrays with a view length >= 2GB
Categories: Core :: Graphics: CanvasWebGL, defect, P2
People: Reporter: jujjyl, Unassigned
Attachments: 2 files
WebAssembly has in recent years lifted its signed int32 2GB linear Memory restriction, and can now address up to the unsigned int32 maximum of 4GB of Memory.
We are in the process of updating Unity to support up to 4GB of memory. While testing this change, it looks like the WebGL 2 implementation in Firefox is not ready for it, and instead throws an exception:
TypeError: WebGL2RenderingContext.uniform4fv: Float32Array branch of
(Float32Array or sequence<unrestricted float>) can't be an ArrayBuffer
or an ArrayBufferView larger than 2 GB
This prevents Firefox from being able to run newer Unity content that requires more than 2GB of RAM.
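For context, the 4GB ceiling follows directly from WebAssembly.Memory sizing: a page is 64 KiB and the spec's maximum page count is 65536, so 65536 × 64 KiB = 4 GiB. A minimal sketch (runnable in Node or a browser console; only one page is actually committed, so it is cheap to run):

```javascript
// WebAssembly linear memory is sized in 64 KiB pages.
// 65536 pages * 64 KiB = 4 GiB, the unsigned int32 ceiling.
const mem = new WebAssembly.Memory({ initial: 1, maximum: 65536 });

// Only one page (65536 bytes) is committed initially.
console.log(mem.buffer.byteLength); // 65536

// Views over the buffer are plain TypedArrays; once the memory grows
// past 2 GiB, views like this are what the WebGL bindings reject.
const heapF32 = new Float32Array(mem.buffer);
console.log(heapF32.length); // 16384 (65536 bytes / 4 bytes per float)
```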
Comment 1 (Reporter) • 1 year ago
A test case can be downloaded from http://clb.confined.space/bugs/webgl2_more_than_2gb_of_ram.zip
Comment 2 • 1 year ago
The severity field is not set for this bug.
:jgilbert, could you have a look please?
For more information, please visit BugBot documentation.
Comment 3 (Reporter) • 1 year ago
Attaching a possibly smaller repro that may be easier to work with.
This issue is still important: all Unity content will soon be running in this mode, which would break running on Firefox.
Updated • 1 year ago
Comment 4 • 1 year ago
This exception is thrown from Codegen.py-generated bindings code, and is unconditional when the length of the view is too big.
However, the workaround is to make a subview of just the range you need, and that seems to work.
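The subview workaround can be sketched generically (the helper name `subviewF32` is ours, not from the bug): construct a fresh Float32Array covering only the byte range you need, so the view handed to the binding stays small regardless of how large the backing buffer is.

```javascript
// Hypothetical helper: build a small Float32Array view covering only
// [byteOffset, byteOffset + count * 4) of a potentially huge ArrayBuffer.
function subviewF32(buffer, byteOffset, count) {
  return new Float32Array(buffer, byteOffset, count);
}

// Demo with a tiny buffer; the same call shape works for multi-GB heaps.
const buf = new ArrayBuffer(64);
const full = new Float32Array(buf);
full[4] = 1.5;                        // element at byte offset 16

const view = subviewF32(buf, 16, 4);  // 4 floats starting at byte 16
console.log(view.length);             // 4
console.log(view[0]);                 // 1.5
```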
Comment 5 • 1 year ago
I don't know what our migration path to allowing >2GB arrays through the bindings code looks like. I imagine we'll need to craft an opt-in of some sort, and piecemeal update each binding's backing implementation. (e.g. some custom [AllowLarge])
@jandem probably knows more, I hope? :)
Comment 6 • 1 year ago
An example of the workarounds that I did to make the demo work locally:
  function _glUniformMatrix4fv(location, count, transpose, value) {
    value >>>= 0;
+   if (false) {
      count && GLctx.uniformMatrix4fv(webglGetUniformLocation(location), !!transpose, HEAPF32, (value >>> 2), count * 16);
+   } else {
+     const view = new Float32Array(HEAPF32.buffer, value, count * 16);
+     count && GLctx.uniformMatrix4fv(webglGetUniformLocation(location), !!transpose, view);
+   }
  }
Comment 7 • 1 year ago
Actionable next steps in my mind are to add [AllowLarge] to Codegen.py and start using it for WebGL, but I want to know if there's an actual plan for this migration already.
Comment 9 • 1 year ago
(In reply to Kelsey Gilbert [:jgilbert] from comment #5)
> I don't know what our migration path to allowing >2GB arrays through the bindings code looks like. I imagine we'll need to craft an opt-in of some sort, and piecemeal update each binding's backing implementation. (e.g. some custom [AllowLarge])
> @jandem probably knows more, I hope? :)
The problem is that dom::TypedArray_base uses uint32_t instead of size_t for mLength, so this needs a careful audit/tests of consumers to check they can handle large array buffers without truncation (or worse).
Ideally we'd support large array buffers everywhere, but the opt-in you mentioned makes sense to me and is a good incremental step I think.
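The truncation hazard described above is easy to illustrate in JavaScript terms (a sketch of the failure mode, not Gecko's actual code path): forcing a 64-bit length through a 32-bit field wraps modulo 2^32, and reading it as a signed int32 can even yield a negative "length".

```javascript
// A length just past 4 GiB, held as a plain number.
const realLength = 2 ** 32 + 8;

// Emulate stuffing it into a uint32_t field (like mLength):
// >>> 0 keeps only the low 32 bits, i.e. wraps modulo 2**32.
const truncated = realLength >>> 0;
console.log(truncated); // 8 -- a 4 GiB + 8 byte buffer "becomes" 8 bytes

// A >2 GiB length read back through signed int32 goes negative,
// which is the kind of consumer breakage the audit would look for.
const asInt32 = (2 ** 31 + 4) | 0;
console.log(asInt32); // -2147483644
```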
Comment 10 • 1 year ago
This needs changes to Codegen.py that I don't want to dive into for now, given how many things I'm working on otherwise.