I'd like to note up front that Firefox tends to do very well under NVIDIA and Intel on Linux - even on my fairly weak Intel card on my laptop, the HWACCEL demo yielded 73 FPS, dropping to 13 FPS only if I enabled layers.accelerate-all. Anyway, on to the bug. Ubuntu Maverick 10.10, fglrx 8.78.30, ATI Technologies Inc RV730XT [Radeon HD 4670]. Firefox 3.6 yielded 15 FPS in http://demos.hacks.mozilla.org/openweb/HWACCEL/; Firefox 4, latest nightly, yields 9 FPS. If I enable layers.accelerate-all it climbs to 13 FPS, still slower than Firefox 3.6 by a good margin. I didn't file this under Core since I have no idea what the problem might be.
Oh, and FWIW, WebGL works great - high frame rates. I also tried disabling Compiz to see if that made any difference; it didn't. I can't try radeon/radeonhd, since my last attempt with them simply crashed on X startup.
And in other browsers... Chromium 6: 6 FPS, and the images were also oddly blurry - a bug in image transforms? However, Chrome 7: 18 FPS and no blurriness.
Oh, one more thing I tried, to see if I could get XRender to suck less: http://www.rojtberg.net/102/fglrx-89-improved-2d-performance/ Of those options, the first two were ignored according to the Xorg log, and the last one, while clearly taking effect, made no difference whatsoever. FPS was the same across the board, still far lower than Intel and NVIDIA. Clearly the overall poor FPS is ATI's fault - although Firefox 4 did lose ground at this game. Perhaps it is trying to render more frequently or something? (I'm sure that guess emphasises cluelessness.)
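For reference, tweaks like the ones on that page go in the Device section of xorg.conf. A minimal sketch - the option names here (Textured2D, TexturedXrender) are fglrx-specific, vary between driver versions, and may be silently ignored, so check the Xorg log to see which ones actually took effect:

```
Section "Device"
    Identifier  "Radeon HD 4670"
    Driver      "fglrx"
    # Hypothetical fglrx acceleration options; names vary by
    # driver version - verify in /var/log/Xorg.0.log.
    Option      "Textured2D"      "on"
    Option      "TexturedXrender" "on"
EndSection
```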
Might be related to us now using EXTEND_PAD. In fact, is this a duplicate of bug 594319? nemo, what version of Xorg do you have?
Well, whatever comes with Ubuntu 10.10. Which, quoting the Xorg log, appears to be:
[532216.449] Build Date: 16 September 2010 06:18:41PM
[532216.449] xorg-server 2:1.9.0-0ubuntu7 (For technical support please see http://www.ubuntu.com/support)
What does "xdpyinfo | head" have to say for itself?
$ DISPLAY=:0 xdpyinfo | head
name of display:    :0.0
version number:    11.0
vendor string:    The X.Org Foundation
vendor release number:    10900000
X.Org version: 1.9.0
maximum request size:  16777212 bytes
motion buffer size:  256
bitmap unit, bit order, padding:    32, LSBFirst, 32
image byte order:    LSBFirst
number of supported pixmap formats:    7
Thanks. Not a dupe of bug 594319. With xf86-video-radeon, I get similar miserable single-digit fps with 3.6 and trunk. CPU time is spent in the server, which is consistent with us still using EXTEND_NONE for some operations.
Perhaps 40% of the difference could be the switch from nearest-neighbour to bilinear filtering. With radeon and nouveau, we should be using the GPU if we switch to EXTEND_PAD. I can't promise anything with fglrx, but the TexturedXRender option looks like it should provide hardware acceleration of some operations; I expect it will only help if Textured2D is already on.
I tried switching to EXTEND_PAD in the relevant places (PreparePatternForUntiledDrawing in gfxDrawable.cpp, and nsCanvasRenderingContext2D::DrawImage), but here I still get a fallback to pixman, because nsCanvasRenderingContext2D::DrawImage is copying images out of a 640x7760 surface and my hardware (r500) only supports compositing from source textures up to 4096x4096. 7760 would just fit within the 8192 limit on r600 cards. I'm sure there are improvements to make, but this benchmark seems targeted at particular hardware, so it's not the best way to measure improvements.
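To make the fallback condition concrete, here's a trivial sketch (not the actual Gecko code - the numbers are from this comment, with 4096/8192 being the r500/r600 maximum texture dimensions) of the check that decides between the GPU path and the pixman fallback:

```shell
#!/bin/sh
# Sketch: a source surface can only be composited by the GPU
# if BOTH dimensions fit within the card's max texture size.
check_surface() {
    w=$1; h=$2; max=$3
    if [ "$w" -le "$max" ] && [ "$h" -le "$max" ]; then
        echo "GPU path"
    else
        echo "pixman fallback"
    fi
}

check_surface 640 7760 4096   # r500 limit: falls back to pixman
check_surface 640 7760 8192   # r600 limit: just fits, GPU path
```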
Heh, 640x7760? Wow, that's a huge narrow texture. And to think we still support 512x512 cards in Hedgewars :D (well, in 0.9.14 - 0.9.13 requires 1024x1024). Well, thanks. Guess I can try to figure out why this Textured2D option got tossed out.
I'm not in a hurry to optimize for certain cards, although I guess we could turn off linear image scaling for them. I'm still leaning towards WONTFIXing this bug, however; I don't believe it blocks.
I don't know that this is a "certain cards" issue. I am using an NVIDIA GeForce GTX 285 with the NVIDIA blob 260.19.06 and X Server 1.9.2, and I get 72 FPS in Firefox 3.6.12 but 13 FPS in the nightly from Nov. 7 2010. Perhaps the OP and I have completely different issues causing our regressions, but from a quick search this bug seemed to be the best match. I think this should be considered a blocker if it is found to make the majority of cards much slower.
Hey Jeff, that does seem like a different issue to me. Can you check whether layers.accelerate-all is enabled in your build? Personally, I found behaviour like that (dropping from 70 to 15 or so) happened on my Intel card if layers.accelerate-all was set - the Intel driver did much better at XRender.
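For anyone following along: the pref can be flipped in about:config, or persistently via a user.js file in the profile directory. A sketch, assuming the pref name as used above:

```
// user.js - disable the experimental layers acceleration
user_pref("layers.accelerate-all", false);
```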
layers.accelerate-all was enabled when I received the above-quoted results. I disabled it and now I get 86 FPS with a current nightly. Thanks for that suggestion.
The landing of http://hg.mozilla.org/mozilla-central/rev/4b8d96e463fd increased numbers on my evergreen system from 13 to 60 fps. I expect this is also fixed on r600/r700 cards.
FWIW, I retested this bug on the initial reporting system; my FPS was 11. I also retested in Firefox 3.6, since after all the versions of fglrx and X had changed in the meantime; Firefox 3.6 managed 20. IMO closing this as FIXED was incorrect... maybe WONTFIX if it isn't worth doing, but in any case the issue still seems to be there. fglrx 8.90.5, X.Org 7.6 (Ubuntu 11.10)
Thanks for checking. Sounds like the driver is not accelerating the operations, in which case moving from nearest-neighbour to bilinear would cause an expected speed regression, so WONTFIX based on that. If bug 679257 is the reason why the driver is not accelerating some operations, we might be able to address that there.