On several WebGL demos, for example http://media.tojicode.com/md5Mesh/, it looks like the demo is GPU-bound. Frame rates in Chrome and Firefox are similar, but Chrome uses much less CPU: 35% across 3 processes, while Firefox uses 68% and XOrg uses another 34%. In both cases Compiz uses 32%, so that part is identical. I've seen this in demos where I am sure JS isn't a factor (the best testcase I have, I'm afraid I can't share). One theory I had is that these demos are GPU-bound, and Firefox does a busy wait for the GPU while Chrome does not. Presumably this is related to the heavy XOrg CPU usage, which Chrome manages to avoid entirely. Is there a simple way to verify that, i.e. to make a testcase in which almost all the time is spent on the GPU and nowhere else?
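One way to build such a testcase (a sketch, not code from the demo above; the helper name `makeHeavyFragmentShader` and the canvas id "c" are my own): keep the per-frame JS trivial (one draw call of a single fullscreen triangle) and put a fixed, very large amount of work in the fragment shader. Raising the loop count until the frame rate drops should put essentially all the time on the GPU, so any remaining Firefox/XOrg CPU usage would come from compositing or waiting, not from the page.

```javascript
// Sketch of a GPU-bound WebGL testcase: trivial JS per frame, heavy fragment shader.
// The loop bound is inlined as a literal because GLSL ES 1.00 requires constant
// loop bounds in fragment shaders.
function makeHeavyFragmentShader(iterations) {
  return [
    'precision mediump float;',
    'void main() {',
    '  float v = 0.0;',
    '  for (int i = 0; i < ' + iterations + '; i++) {',
    '    v += sin(float(i) + gl_FragCoord.x * 0.001);',
    '  }',
    '  gl_FragColor = vec4(fract(v), 0.0, 0.0, 1.0);',
    '}'
  ].join('\n');
}

var VERTEX_SHADER_SRC =
  'attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }';

// Browser-only part (assumes a page with <canvas id="c">): one fullscreen
// triangle per frame and no other JS work.
if (typeof document !== 'undefined') {
  var gl = document.getElementById('c').getContext('webgl');
  function compile(type, src) {
    var s = gl.createShader(type);
    gl.shaderSource(s, src);
    gl.compileShader(s);
    return s;
  }
  var prog = gl.createProgram();
  gl.attachShader(prog, compile(gl.VERTEX_SHADER, VERTEX_SHADER_SRC));
  gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, makeHeavyFragmentShader(2000)));
  gl.linkProgram(prog);
  gl.useProgram(prog);
  var buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  // A single triangle that covers the whole clip-space square.
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 3, -1, -1, 3]), gl.STATIC_DRAW);
  var loc = gl.getAttribLocation(prog, 'pos');
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  (function frame() {
    gl.drawArrays(gl.TRIANGLES, 0, 3);
    requestAnimationFrame(frame);
  })();
}
```

With a testcase like this, if Firefox still burns CPU in its own process or in XOrg while the page's JS is doing almost nothing, that would support the busy-wait/readback theory.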
It seems that profiling this, to see where we're spending this 68% CPU, would be very useful! However, since you're on Linux and you're saying that XOrg uses 34% CPU, it's very likely that what we'll find is that this time is spent compositing. On Linux we're using BasicLayers, which does a readPixels (and therefore waits for the GPU-side WebGL work to complete), then uses Cairo to do the actual compositing; Cairo uses XRender, which would perfectly explain why the XOrg process is using that 34% CPU.

One way to confirm that theory would be to enable layers.acceleration.force-enabled, run this again in a new browser window, and confirm that CPU usage is much lower. Chrome is better there because they use Skia instead of Cairo for rendering, so in particular they don't use XRender, and they have a good software-only renderer.

See bug 720523 for using Cairo's image backend instead of XRender on Linux, which is a first prerequisite step in the right direction; the end goal is: never use XRender, never use X pixmaps, enable GL layers on Linux. Generally speaking: if you're going to profile any WebGL demo on Linux, at least enable layers.acceleration.force-enabled, as we're no longer trying to make the current XRender-based Linux rendering path fast. That's a lost cause; we're instead trying to move away from it and finally enable GL layers on Linux.
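For reference, besides flipping it in about:config, the pref can be set persistently from a user.js file in the profile directory (a config sketch; only the pref name comes from the comment above):

```js
// user.js in the Firefox profile directory: force-enable accelerated layers
user_pref("layers.acceleration.force-enabled", true);
```

Remember to restart the browser (or at least open a new window, as suggested above) for the layers backend change to take effect.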
With layers.acceleration.force-enabled, XOrg usage decreases substantially, from 26% to 16% (this is on another machine, so the values are not comparable to the previous ones). Chrome still does much better though: 6% XOrg, and much less in the other processes too. Thanks for the info; I'll focus on layers.acceleration.force-enabled from now on.
This is still expected: even with layers acceleration, we currently still use a lot of pixmaps. There is ongoing work to fix that; see bug 720523. Once that work is done, we should no longer suck. The tracking bug for properly enabling GL layers on Linux, and not sucking, is bug 594876. Aside from the above-mentioned work to remove the dependency on pixmaps, the other area of work toward that goal is bug 722012, OMTC (off-main-thread compositing).
Thanks for the info! I guess everything is covered in those other bugs so I'll close this.
Status: NEW → RESOLVED
Last Resolved: 6 years ago
Resolution: --- → INVALID