cache results of shader compilation
Categories: Core :: Graphics: CanvasWebGL, enhancement, P3
People: Reporter: vlad, Unassigned
References: Depends on 1 open bug, Blocks 3 open bugs
Whiteboard: [games:p1] webgl-perf [platform-rel-Games]
Attachments (3 files)
- patch, 2.35 KB
- text/x-python, 1.70 KB
- image/png, 3.62 MB
Comment 21•6 years ago
Hello! I would like to present a new use case that has popped up: the machine learning framework Tensorflow.js. As best I can tell, it compiles many shaders (one shader per graph op?) when a model first loads and runs its first inference, which makes page loads quite heavy.
While Tensorflow.js can be used directly on a web page (which perhaps represents the bulk of its usage?), I use it in a Firefox plugin for inference on images.
In my case this causes a roughly 10-second stall when the plugin first loads. For a sense of scale, that is with a model based on MobilenetV2, which is not considered huge. I'm tracking my corresponding issue here.
Hype aside, I do anticipate significant growth in machine learning usage, even on the web. Whether or not Tensorflow.js retains its popularity, caching shaders will remain important for this use case, because the CPU is simply not cut out for the linear algebra involved. (I also see active work on a WebGPU backend, but that's a future story.) For the curious, Tensorflow.js has several demos online.
Thank you for developing Firefox; I hope you find this new use case report helpful!
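For reference, here is a minimal sketch of how this cost shows up with the WebGL backend. The model URL is hypothetical and tf.zeros stands in for a real image; the point is only that the first predict() pays the shader compilation cost and the second does not.

import * as tf from '@tensorflow/tfjs';

// Hypothetical URL; substitute a real hosted MobilenetV2-style graph model.
const MODEL_URL = 'https://example.com/mobilenet_v2/model.json';

async function measureFirstInference() {
  await tf.setBackend('webgl');                  // use the WebGL backend
  const model = await tf.loadGraphModel(MODEL_URL);
  const input = tf.zeros([1, 224, 224, 3]);      // dummy image-sized input

  // First predict(): each kernel's shader is compiled and linked here,
  // which is where the multi-second stall appears.
  let t0 = performance.now();
  await model.predict(input).data();
  console.log('first inference (includes shader compiles):',
              (performance.now() - t0).toFixed(0), 'ms');

  // Second predict(): the shaders already exist in this context, so this
  // reflects the steady-state cost.
  t0 = performance.now();
  await model.predict(input).data();
  console.log('second inference:', (performance.now() - t0).toFixed(0), 'ms');
}

measureFirstInference();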
Comment 22•3 months ago
Hello,
I'm checking in to see whether there have been any recent updates or progress on this issue. Our team has recently encountered significant slowness in the shader linking process within our product on the Firefox browser, and we suspect it may be related to this bug. The performance degradation is particularly pronounced on Mac.
This is a sample webpage that consistently reproduces this behavior:
Webpage: https://prideout.net/slow_compile/repro.html
Source Code: https://github.com/prideout/slow_compile
This sample measures the time taken for shader linking. In our tests on Mac, Firefox takes approximately 640ms, whereas Chrome completes the same operation in about 30ms.
Within this sample, the bottleneck appears to be this line (https://github.com/prideout/slow_compile/blob/893994d876482ff6b67ce17fea2d744bcc44d6ec/vshader.js#L15). We observed that reducing the array size in this specific line (e.g., down to 8 or 1) dramatically decreases the link time from ~640ms to around 6ms.
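For illustration only (this is not the repro code from the linked project), a standalone harness along these lines performs the same kind of measurement; bigArray and ARRAY_SIZE are stand-ins for the uniform array on that line, and shrinking ARRAY_SIZE should shrink the link time in the way described above.

// Minimal sketch: time how long it takes to compile and fully link a
// program whose vertex shader declares a large uniform array.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl2');

// Stay within the driver's uniform limit; the exact size is illustrative.
const maxVectors = gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS);
const ARRAY_SIZE = Math.min(512, maxVectors - 16);

const vsSource = `#version 300 es
uniform vec4 bigArray[${ARRAY_SIZE}];
void main() {
  gl_Position = bigArray[0];
}`;
const fsSource = `#version 300 es
precision mediump float;
out vec4 color;
void main() { color = vec4(1.0); }`;

function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}

const t0 = performance.now();
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
// Querying LINK_STATUS forces the driver to finish the link, so the elapsed
// time includes the full backend shader build.
const linked = gl.getProgramParameter(program, gl.LINK_STATUS);
console.log('linked:', linked, 'compile+link:',
            (performance.now() - t0).toFixed(1), 'ms');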
We hope this information is helpful.