Bug 1303054 (Closed) - Opened 3 years ago, Closed 3 years ago

Horrible graphics performance on http://www.vsynctester.com/

Categories

(Core :: Graphics, defect, P1)

All
macOS
defect

Tracking


VERIFIED FIXED
mozilla52
Tracking Status
firefox48 --- unaffected
firefox49 + wontfix
firefox50 --- wontfix
firefox51 --- wontfix
firefox52 --- fixed

People

(Reporter: mkem, Assigned: lsalzman)


Details

(Keywords: regression, Whiteboard: [gfx-noted])

Attachments

(1 file, 1 obsolete file)

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:51.0) Gecko/20100101 Firefox/51.0
Build ID: 20160915030417

Steps to reproduce:

Visit the website http://www.vsynctester.com/ and check the vsync performance.


Actual results:

Horrible graphics performance on OS X, ~13-16 fps, while Safari always runs at 60 fps. Firefox 48.0.2 on Windows 7 is also able to hold 60 fps most of the time.


Expected results:

I would expect performance similar to Firefox on Windows 7 or Safari on OS X.
I made an additional test with Firefox 48.0 on Ubuntu Linux 16.04.1 LTS in a VM and got nearly the same performance (~60 fps) as Firefox on Windows 7 / Safari on OS X.

So it looks like it is OS X specific.
I can reproduce this. We spend almost all of the time uploading textures in GrUploadBitmapToTexture.
Component: Untriaged → Graphics
Flags: needinfo?(lsalzman)
Product: Firefox → Core
Whiteboard: [gfx-noted]
Version: 51 Branch → Trunk
Status: UNCONFIRMED → NEW
Ever confirmed: true
OS: Unspecified → Mac OS X
Hardware: Unspecified → All
I can also reproduce it on Windows 7.
The page animation is in slow motion (it must be less than 1 fps) and the graph on the page does not show any data.

If it is of any help, the Failure Log in about:support displays the following errors.
first this one:
  (#0) "CP+[GFX1-]: [D2D1.1] Attempt to use unsupported surface size Size(15924,2898)"
then several thousands of:
  (#xxx) "CP+[GFX1-]: [D2D1.1] 4CreateBitmap failure Size(15924,2898) Code: 0x80070057 format 0"
(In reply to Franck (Wip) from comment #3)
> I can also reproduce it Windows 7.
> The page animation is in slow motion (must be less than 1 fps) and the graph
> on the page does not show any data.
> 
> If it is of any help, the Failure Log in about:support displays the
> following errors.
> first this one:
>   (#0) "CP+[GFX1-]: [D2D1.1] Attempt to use unsupported surface size
> Size(15924,2898)"
> then several thousands of:
>   (#xxx) "CP+[GFX1-]: [D2D1.1] 4CreateBitmap failure Size(15924,2898) Code:
> 0x80070057 format 0"

I checked my "Failure Log" and it says something different: 

(#0) Error Refresh driver waiting for the compositor for5.65798 seconds.
(#XXXX) Error	Refresh driver waiting for the compositor forX.XXXXX seconds.
(#3817) Error	Refresh driver waiting for the compositor for1.00098 seconds.
(#3818) 	CP+[GFX1-]: Failed to create a SkiaGL DrawTarget, falling back to software 
(#3819) Error	Refresh driver waiting for the compositor for3.52009 seconds.
(#3820) Error	Refresh driver waiting for the compositor for52.3305 seconds.

Maybe it is due to the OS X platform? Probably.
Tracking for 49. Though this issue will ship with 49 now, we can keep an eye on it in case there is an upliftable fix. Does this seem like a problem that may be more widespread, or just a few sites?
Summary: Horrible graphic's performance on http://www.vsynctester.com/ → Horrible graphics performance on http://www.vsynctester.com/
(In reply to Liz Henry (:lizzard) (needinfo? me) from comment #5)
> Tracking for 49, though this issue will ship with 49 now, we can keep an eye
> on the issue in case there is an upliftable fix.  Does this seem like a
> problem that may be more widespread? Or just a few sites?

I can't answer that, because I first noticed it on http://www.vsynctester.com/. I didn't know about the problem until I visited that site.
User Agent  Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:51.0) Gecko/20100101 Firefox/51.0
Build ID  20160919030422

I was able to reproduce the issue on Windows 7, Windows 10 and on Mac 10.11 (MacBook Pro early 2015). However, the good builds I found for Mac were not good on Windows.
On Mac, the change that caused the regression can be found in the following pushlog:
Last good revision: e5746165c704a82fc92d059ef9303562d6b3c886
First bad revision: 5ed72fdd632767e03d6862efd10fc15a34256845
Pushlog:
https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=e5746165c704a82fc92d059ef9303562d6b3c886&tochange=5ed72fdd632767e03d6862efd10fc15a34256845

I've also found that changing the gfx.canvas.azure.backends pref to "cg" makes the issue go away on Mac.
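For anyone who wants to try this workaround, the pref mentioned above can be set via about:config, or as a one-line config fragment in a profile's user.js (the pref name and value are exactly those quoted in this comment; this is a sketch of the workaround, not a permanent fix):

```js
// Workaround sketch: switch the canvas Azure backend to CoreGraphics ("cg")
// instead of the default SkiaGL backend on Mac. Place this line in user.js
// in the Firefox profile directory, or set the pref via about:config.
user_pref("gfx.canvas.azure.backends", "cg");
```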

I've found builds that work as expected on Windows 10 as far as vsync is concerned, but they show no background image. On builds that are able to load the background image, vsync runs at 2-3 FPS.
Last good revision: 688f821edcd4 (2014-11-12)
First bad revision: ab137ddd3746 (2014-11-13)
Pushlog:
https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=688f821edcd4&tochange=ab137ddd3746
Too late for 49. Milan, can you take a look here for a possible fix so we can consider uplift to 50?
Flags: needinfo?(milan)
Bulk move of gfx-noted bugs without priority to P3 for tracking.
Priority: -- → P3
This is a regression on the Mac, not a regression on other platforms. We leave the cache size fixed at 96MB (except on Firefox OS), and we should change that. Beyond that, there is nothing more we're going to do here - which means that the performance issue in this particular test case will go away for users with 4GB of memory or more.

We can land the enabling of the dynamic cache computation in this bug; this will mean higher memory usage, and perhaps more OOMs, so I would prefer to have it ride the trains.

Lee, can you get the patch for that?
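To make the idea above concrete, here is a minimal sketch of a dynamic cache-size computation: scale the SkiaGL cache with total system memory instead of fixing it at 96MB, keeping the old fixed size as a floor. The function name, scaling fraction, and ceiling are illustrative assumptions, not the values Gecko actually uses.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch: derive the SkiaGL cache size from total system memory.
// The old behavior was a fixed 96 MB cache; here that becomes the floor, and
// larger machines get a proportionally larger cache up to an arbitrary cap.
uint64_t ComputeCacheSizeBytes(uint64_t totalSystemMemoryBytes) {
    const uint64_t kMinCache = 96ULL << 20;   // 96 MB floor (the old fixed size)
    const uint64_t kMaxCache = 768ULL << 20;  // illustrative ceiling
    uint64_t dynamic = totalSystemMemoryBytes / 16;  // illustrative fraction
    return std::min(kMaxCache, std::max(kMinCache, dynamic));
}
```

With this shape, a 1GB machine keeps the old 96MB cache, while a 4GB-or-more machine gets a larger one, which is why the test case here would stop regressing on such machines.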
Assignee: nobody → lsalzman
Priority: P3 → P1
As it turns out, hal::GetTotalSystemMemory() is only implemented for Linux and is actually busted on anything with >= 4GB of memory, since it uses a uint32_t to return the result.

Luckily, we have a replacement that is cross-platform and not buggy: PR_GetPhysicalMemorySize. So I cleaned up the mechanism to use this instead, as well as changing the pref default for the dynamic cache.
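The uint32_t problem mentioned above can be demonstrated in isolation: 4GB is exactly 2^32 bytes, one past the largest value a uint32_t can hold, so truncating a 64-bit byte count wraps around. The function name here is hypothetical, purely to illustrate the failure mode.

```cpp
#include <cstdint>

// Hypothetical illustration of the reported bug: returning a byte count
// through a 32-bit type silently truncates on machines with >= 4 GiB.
// 4 GiB == 2^32, so the low 32 bits of 4 GiB (or 8 GiB) are all zero.
uint32_t ReportMemory32(uint64_t actualBytes) {
    return static_cast<uint32_t>(actualBytes);  // truncates to low 32 bits
}
```

A 3GiB machine reports correctly, but 4GiB and 8GiB machines both report 0 bytes, which is why a 64-bit API like PR_GetPhysicalMemorySize is needed.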
Flags: needinfo?(lsalzman)
Attachment #8794025 - Flags: review?(milan)
Comment on attachment 8794025 [details] [diff] [review]
use dynamic cache size with SkiaGL on desktop

Review of attachment 8794025 [details] [diff] [review]:
-----------------------------------------------------------------

There are a couple of issues. The default pref value should be resolved before landing; I'll let you decide if you want to change the pref type, given the "it's not really unsigned" issue with preferences.

::: gfx/thebes/gfxPlatform.cpp
@@ +1273,5 @@
>    if (UseAcceleratedCanvas()) {
>  #ifdef USE_SKIA_GPU
>      bool usingDynamicCache = gfxPrefs::CanvasSkiaGLDynamicCache();
>      int cacheItemLimit = gfxPrefs::CanvasSkiaGLCacheItems();
> +    uint64_t cacheSizeLimit = gfxPrefs::CanvasSkiaGLCacheSize();

If the pref stays int, we should check for a negative value. Not that we were checking before, but we can make it better.

::: gfx/thebes/gfxPlatform.h
@@ +769,5 @@
>  
>      // max number of entries in word cache
>      int32_t mWordCacheMaxEntries;
>  
> +    uint64_t mTotalSystemMemory;

Nice.

::: gfx/thebes/gfxPrefs.h
@@ +354,5 @@
>    DECL_GFX_PREF(Live, "gfx.canvas.azure.accelerated",          CanvasAzureAccelerated, bool, false);
>    // 0x7fff is the maximum supported xlib surface size and is more than enough for canvases.
>    DECL_GFX_PREF(Live, "gfx.canvas.max-size",                   MaxCanvasSize, int32_t, 0x7fff);
>    DECL_GFX_PREF(Once, "gfx.canvas.skiagl.cache-items",         CanvasSkiaGLCacheItems, int32_t, 256);
> +  DECL_GFX_PREF(Once, "gfx.canvas.skiagl.cache-size",          CanvasSkiaGLCacheSize, uint32_t, 96);

I would not change to uint32_t from int32_t.  Preferences natively only support signed, and just cast values and pointers (without checking for negative), so it's better to explicitly stick with signed.  At least we know to check.

@@ +355,5 @@
>    // 0x7fff is the maximum supported xlib surface size and is more than enough for canvases.
>    DECL_GFX_PREF(Live, "gfx.canvas.max-size",                   MaxCanvasSize, int32_t, 0x7fff);
>    DECL_GFX_PREF(Once, "gfx.canvas.skiagl.cache-items",         CanvasSkiaGLCacheItems, int32_t, 256);
> +  DECL_GFX_PREF(Once, "gfx.canvas.skiagl.cache-size",          CanvasSkiaGLCacheSize, uint32_t, 96);
> +  DECL_GFX_PREF(Once, "gfx.canvas.skiagl.dynamic-cache",       CanvasSkiaGLDynamicCache, bool, true);

Probably better to leave this one as false, and just make the change in all.js. If people switch between different versions of Firefox, the value gets lost: I run beta and explicitly set it to true, so the value gets saved since the default is false; I then run nightly, where true is the default, so the value doesn't get saved; I run beta again, and the value is now false, even though I set it to true. There are ways around this with sticky prefs, but it's probably not worth it.
An all.js change is probably easier, just like what we have in b2g.js today.
Attachment #8794025 - Flags: review?(milan) → review+
Made minor tweaks to address Milan's concerns:

Changed the pref back to int32_t and instead clamped it to be >= 0 when reading it.

Left the hard-coded default as false and changed it to true in libpref.
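The clamping described above can be sketched as follows. The function and parameter names are illustrative, not the identifiers from the patch; the point is that the pref stays int32_t (preferences natively support only signed values) and any negative setting is clamped before being used as a size.

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of the clamp: a signed pref value (in MB) is forced to be >= 0
// before conversion to a byte count, so a user-set negative pref can no
// longer produce a bogus cache size. Names here are hypothetical.
uint64_t CacheSizeLimitBytes(int32_t prefValueMB) {
    int32_t clamped = std::max(prefValueMB, 0);   // guard against negative prefs
    return static_cast<uint64_t>(clamped) << 20;  // MB -> bytes
}
```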
Attachment #8794025 - Attachment is obsolete: true
Attachment #8794242 - Flags: review+
Pushed by lsalzman@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/2d0944de7efb
use dynamic cache size with SkiaGL on desktop. r=milan
https://hg.mozilla.org/mozilla-central/rev/2d0944de7efb
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla52
I see that the bug is fixed. When will the fix be available to test in Nightly? Thank you.
Should be in today's nightly (2016-09-27)
(In reply to Milan Sreckovic [:milan] from comment #17)
> Should be in today's nightly (2016-09-27)

Thank you. It works very well now: ~60 fps almost all the time.
Status: RESOLVED → VERIFIED