Open
Bug 806207
Opened 12 years ago
Updated 2 years ago
Long WebGL rendering updates slow down page responsiveness
Categories
(Core :: Graphics: CanvasWebGL, defect, P3)
Tracking
REOPENED
| | Tracking | Status |
| --- | --- | --- |
| firefox32 | --- | affected |
People
(Reporter: steve, Unassigned)
Details
(Keywords: perf, Whiteboard: webgl-perf)
User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0
Build ID: 20121023124120
Steps to reproduce:
Load a page with a complex WebGL scene/model with a long render time.
Actual results:
The entire page slowed down.
Expected results:
The page should remain responsive even if the WebGL frame rate is low.
Reporter
Comment 1•12 years ago
WebGL rendering should be decoupled from the main page paint refresh (tested: Win7/64) so that when a WebGL render is slow, the entire page doesn't slow down.
The fix is to decouple and double buffer, updating the image in the main page thread as new frames become available.
To achieve this, create a separate D3D10 or 11 device in another thread for WebGL rendering, then share the texture with the primary renderer using a synchronized shared surface (Windows Vista+). Transfer new frames into the main renderer's image surface as they become available, in a similar way to how this is done by the video renderer (a rough sketch is below).
You could also convert ready frames into compressed textures on the fly using the GPU, to save video memory.
Use one render buffer for all the WebGL content being processed on a page, traversing each canvas in turn and updating when available, to keep video memory overhead to a minimum.
Also decouple WebGL rendering from the processing loop so you don't waste cycles drawing WebGL content when it's offscreen, while still keeping it simulating correctly.
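A rough sketch of the shared-surface setup described above, assuming D3D11 with a keyed mutex; function names are illustrative, not actual Gecko code:

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Producer thread (WebGL): create a shareable texture once and hand its
// share handle to the compositor. Error handling omitted for brevity.
HANDLE CreateSharedFrame(ID3D11Device* producerDevice, UINT w, UINT h,
                         ID3D11Texture2D** outTex) {
  D3D11_TEXTURE2D_DESC desc = {};
  desc.Width = w;
  desc.Height = h;
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_DEFAULT;
  desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
  desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;
  producerDevice->CreateTexture2D(&desc, nullptr, outTex);

  IDXGIResource* dxgiRes = nullptr;
  (*outTex)->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
  HANDLE shareHandle = nullptr;
  dxgiRes->GetSharedHandle(&shareHandle);
  dxgiRes->Release();
  return shareHandle;
}

// Compositor thread: open the shared texture on its own device and only
// composite a new frame when the keyed mutex can be acquired without waiting.
void CompositeIfFrameReady(ID3D11Device* compositorDevice, HANDLE shareHandle) {
  ID3D11Texture2D* sharedTex = nullptr;
  compositorDevice->OpenSharedResource(shareHandle, __uuidof(ID3D11Texture2D),
                                       (void**)&sharedTex);
  IDXGIKeyedMutex* mutex = nullptr;
  sharedTex->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&mutex);

  if (mutex->AcquireSync(/*key*/ 0, /*timeout ms*/ 0) == S_OK) {
    // ... draw the page using sharedTex as the canvas contents ...
    mutex->ReleaseSync(0);
  }
  // If the WebGL thread still holds the mutex, keep showing the previous
  // frame and carry on; nothing here blocks the main/compositor loop.

  mutex->Release();
  sharedTex->Release();
}
```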
Keywords: perf
Whiteboard: webgl decouple double-buffer response performance
Updated•12 years ago
Component: Untriaged → Canvas: WebGL
Product: Firefox → Core
Reporter
Comment 3•12 years ago
Any WebGL scene with a low frame rate demonstrates the adverse effect on page performance.
Example: http://martin.cyor.eu/benchmark/test.html
Click on High, then Start. The result obviously depends on the performance of the GPU in your system.
If you have a fast GPU, you'll get high WebGL performance from that test, so no degraded page performance. In that case, use a more demanding test, an older GPU, or (what I do) drop a ::Sleep at the end of the render loop of a minimal test case while developing, to simulate a slow render.
Comment 4•12 years ago
I was able to reproduce.
Firefox Version: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0
Build ID: 20121106195758
Steps to Reproduce:
1. Load the URL http://martin.cyor.eu/benchmark/test.html
2. Click on the "High" option
3. Click "Start"
What happens:
The page renders the test; however, the performance of the rest of the page suffers (actions such as scrolling).
Thanks for confirming, Jen. Can you please try some previous Firefox releases to see if this ever regressed? You can find all our previous releases here:
ftp://ftp.mozilla.org/pub/firefox/releases/
Status: UNCONFIRMED → NEW
Ever confirmed: true
Steve, we use the whiteboard field to tag and bucket bugs that don't have keywords or components for tracking. It's not really intended to add any other information. Please don't update this field unless requested.
Whiteboard: webgl decouple double-buffer response performance
Reporter
Comment 7•12 years ago
No point testing previous versions, decoupling has not been implemented yet so it's not a bug as such, just limited functionality.
(In reply to Steve Williams from comment #7)
> No point testing previous versions, decoupling has not been implemented yet
> so it's not a bug as such, just limited functionality.
I see, thank you Steve.
Updated•12 years ago
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → DUPLICATE
Comment 10•12 years ago
Jeff -- I duped this bug against bug 716859, as its original statement was about fixing exactly this issue. Please correct me / undupe if the scope of 716859 has changed and fixing it will no longer be sufficient to fix this.
Comment 11•12 years ago
Also, a remark here:
- What we can do something about (bug 716859) is the typical case where a WebGL page does many WebGL draw operations, so the total frame rendering time is high, but each individual draw operation is not especially expensive.
- There also exist WebGL pages doing single draw operations that are very expensive. These can currently constitute denial-of-service attacks, and we are entirely dependent on graphics drivers to prevent that. We do let the driver know that we want to recover from a reset, by using ARB_robustness where it's available (a sketch of that check follows below), but in the end it's up to the driver, and many drivers are not robust in this respect.
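A small sketch of that ARB_robustness check (illustrative, not the actual Gecko code; it assumes the context was created asking for reset notification and that the entry point is resolved at runtime):

```cpp
#include <GL/gl.h>

// Extension entry point, resolved via wglGetProcAddress / glXGetProcAddress
// when the driver advertises GL_ARB_robustness.
typedef GLenum (*PFNGLGETGRAPHICSRESETSTATUSARBPROC)(void);
static PFNGLGETGRAPHICSRESETSTATUSARBPROC pGetGraphicsResetStatusARB = nullptr;

// Call after a frame's draw calls: if the driver reports a reset, treat the
// context as lost rather than trusting it any further.
bool ContextSurvivedFrame() {
  if (!pGetGraphicsResetStatusARB) {
    return true;  // extension unavailable; we can't detect resets here
  }
  GLenum status = pGetGraphicsResetStatusARB();
  if (status == GL_NO_ERROR) {
    return true;  // no reset occurred
  }
  // GL_GUILTY_CONTEXT_RESET_ARB / GL_INNOCENT_CONTEXT_RESET_ARB /
  // GL_UNKNOWN_CONTEXT_RESET_ARB: mark the context lost, fire
  // webglcontextlost, and rebuild resources later.
  return false;
}
```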
Comment 12•12 years ago
I believe the Fence/Sync stuff coming in the Streaming work will help somewhat, but doesn't really solve this. It puts down a better base to solve this on top of, however.
By default we currently block on everything finishing rendering, on the compositor thread, which is unfortunately where the UI and friends live. (We do this because it allows better perf for well-behaved WebGL apps.) What we should likely do instead is poll whether the work is completed, keep running UI updates until the new work is completed and rendered, and only then mark ourselves 'clean'.
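One possible shape of that polling, sketched with GL sync objects (it assumes the WebGL and compositor contexts share objects; names are illustrative, not the actual compositor code):

```cpp
#include <GL/glcorearb.h>  // or whichever GL loader the build already uses

// WebGL side: after submitting a frame's draw calls, drop a fence into the
// command stream instead of waiting for the frame to finish.
GLsync MarkFrameSubmitted() {
  return glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

// Compositor side, once per tick: a zero timeout makes this a non-blocking
// poll, so UI painting never stalls behind a slow WebGL frame.
bool WebGLFrameReady(GLsync fence) {
  GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 0);
  if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
    glDeleteSync(fence);  // frame is done; the layer can be marked 'clean'
    return true;
  }
  return false;  // still rendering; composite the previous frame this tick
}
```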
For one of Steve's other points: afaik there are no full-color lossless compressed texture formats, so there's nothing we can use here in the general case. (Some special cases could use low-precision or compressed textures for things like solid colors, or anything else that wouldn't show artifacts.)
Compressed textures don't generally offer very high compression ratios, and given the limited selection of things we could use them for, I doubt we'd see very much benefit to GPU memory usage at the browser/Layers level.
Reporter
Comment 13•12 years ago
> Compressed textures don't generally offer very high compression ratios
6:1, Jeff (BC1 packs a 4×4 block of RGB texels into 8 bytes, versus 48 bytes uncompressed):
http://msdn.microsoft.com/en-gb/library/windows/desktop/bb694531%28v=vs.85%29.aspx#BC1
Sure it's lossy but so are the jpgs (for example) that you'd use this for.
Video memory is a scarce resource, especially on low-end systems. At the dynamic/Layers level the argument is weaker (it depends how fast the compression implementation is), but 6:1 is a significant win.
Obviously on a high-end system with plenty of video memory there's no point doing this; on a more constrained system there's every reason to. It could be turned on/off via a preference.
Comment 14•12 years ago
Compressing frames is not an option because we need pixel exactness; people use WebGL for all sorts of things that rely on it. Even without going as far as pixel exactness, using compressed texture formats here would already create all sorts of undesirable effects along the edges of triangles. In addition, the compression would be expensive: to do it reasonably fast (no readback) we would have to implement it ourselves in a fragment shader, and then we would face all sorts of IP issues around compressed texture formats, which we currently avoid by having WebGL extensions only expose features of the underlying system.
Reporter
Comment 15•12 years ago
BC1 is block colour compression, not wavelet or DCT, so you wouldn't get edge artifacts, but whatever. It's possible; whether or not it's worth it is open.
Reporter
Comment 16•12 years ago
Also, texture compression is not specifically a WebGL issue; we're just discussing it here.
Updated•11 years ago
Whiteboard: webgl-perf
Updated•10 years ago
status-firefox32: --- → affected
Updated•6 years ago
Priority: -- → P3
Updated•2 years ago
Severity: normal → S3