Closed Bug 1127925 Opened 10 years ago Closed 10 years ago

Very high GPU memory on Firefox 36 with Intel graphics hardware when playing YouTube with HTML5 video

Categories

(Core :: Graphics, defect, P1)

defect

Tracking


RESOLVED FIXED
mozilla38
Tracking Status
firefox36 --- fixed
firefox37 + fixed
firefox38 --- fixed

People

(Reporter: mccr8, Assigned: mattwoodrow, NeedInfo)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

(Whiteboard: [MemShrink:P1][gfx-noted])

Attachments

(1 file)

When playing YouTube videos on Firefox 36 with certain Intel hardware, the GPU memory usage is very high. This problem does not show up on the same hardware with Firefox 33, which uses Flash to display the video. The hardware is described as: Dell Latitude E6530, Intel Core i5-3360M, 2800 MHz, 8192 MB RAM, Intel HD Graphics 4000, 1720 MB (shared).

This is showing up on tests 2, 4 and 5, on two different versions of the drivers. Test 1 seems okay.

Test 2 is playing this video in 1080p HD, full screen: https://www.youtube.com/watch?v=hicBgE6XndM
Test 4 is playing this video: https://www.youtube.com/watch?v=mGRpobfnQ8k
Test 5 is playing this video: https://www.youtube.com/watch?v=mpi0qsp3v_w

The GPU committed and dedicated numbers are pretty low, but gpu-shared, resident, and vsize are very high, and vsize-max-contiguous is very low in this example from test 5 (new driver):

   235.41 MB ── gpu-committed
     9.94 MB ── gpu-dedicated
 1,535.37 MB ── gpu-shared
    66.52 MB ── heap-allocated
         204 ── heap-chunks
     1.00 MB ── heap-chunksize
    71.37 MB ── heap-committed
   204.00 MB ── heap-mapped
       7.29% ── heap-overhead-ratio
           1 ── host-object-urls
     0.53 MB ── imagelib-surface-cache-estimated-locked
     0.53 MB ── imagelib-surface-cache-estimated-total
     1.53 MB ── js-main-runtime-temporary-peak
           0 ── low-commit-space-events
   464.74 MB ── private
 3,155.55 MB ── resident
 3,974.72 MB ── vsize
     1.31 MB ── vsize-max-contiguous

In the other tests on this hardware, vsize-max-contiguous was larger, but still no more than 16 MB. The next-worst hardware configuration in this set of testing has a vsize-max-contiguous of 131 MB.

This may be a dupe of some other bug, but it seemed worth filing separately given the specific steps to reproduce.
Whiteboard: [MemShrink]
Summary: Very high GPU memory on Firefox 36 with Intel graphics hardware → Very high GPU memory on Firefox 36 with Intel graphics hardware when playing YouTube with HTML5 video
We now report shared texture usage in about:memory as "d3d11-shared-textures". Can you post that number when this problem happens?
Flags: needinfo?(continuation)
This isn't on my machine; this is some hardware lab. Clint Talbert might be able to set up a run on this machine with Nightly.
Flags: needinfo?(continuation) → needinfo?(ctalbert)
(In reply to Andrew McCreight [:mccr8] from comment #2)
> This isn't on my machine; this is some hardware lab. Clint Talbert might be
> able to set up a run on this machine with Nightly.

Do you need it run with Nightly (and nightlies current as of when)? Or do you need it run with beta 5 of 36?
Flags: needinfo?(ctalbert)
36 is probably the more urgent.
Beta doesn't have extra instrumentation yet.
Bug 1128765 might be required to see the memory.
Depends on: 1128765
Oh, right, sorry. In that case, maybe a Nightly run would be a good start in the meantime. We can't really afford to wait for an uplift.
Bug 1123535, which just made it into beta 6, should reduce GPU memory used. Can we retest the latest beta (6), *and* in latest Nightly, to see if the problem is solved there?
clint: Can you retest on Beta 6, and on latest Nightly?
Flags: needinfo?(ctalbert)
Oops, midair'd with you.

(In reply to Chris Pearce (:cpearce) from comment #8)
> Bug 1123535, which just made it into beta 6, should reduce GPU memory used.
> Can we retest the latest beta (6), *and* in latest Nightly, to see if the
> problem is solved there?

Yes, that is my plan. I was planning to use the test 2 video since it seemed to repro issues more often than the others. Do you folks have any sense whether playing the video for a longer period of time (perhaps a full two minutes) would help us reproduce issues? I can use other videos in the test as well, but the more we add to it, the shorter the playback times will become so that these folks can get through the entire matrix.
Flags: needinfo?(ctalbert)
I think seeking a lot, changing the bitrate a lot, and opening a lot of tabs that load and/or play YouTube videos will increase the chance of a repro.
Whiteboard: [MemShrink] → [MemShrink:P1]
Bug 1128170 has a user who can reliably reproduce an OOM crash just by playing HD videos.
See Also: → 1128170
Getting the graphics section from about:support would also be valuable.
Can you please save the results of about:memory before it crashes and upload them here? Thanks!
(In reply to jescalle from comment #16)
> Crashed after trying to run test 3 and close the browser with all the video
> tabs open using FF38N.
> https://drive.google.com/file/d/0B90X-uehgobJdnJrVWhabGh6V3c/view?usp=sharing
>
> https://crash-stats.mozilla.com/report/index/bp-6f649cb6-8c20-460d-9e42-77ea62150205

https://drive.google.com/open?id=0B90X-uehgobJdkJlTVpSSmYzbmM&authuser=0
Flags: needinfo?(jescalle)
Or more importantly can you get the about:memory report with that build?
(In reply to Jeff Muizelaar [:jrmuizel] from comment #23)
> Or more importantly can you get the about:memory report with that build?

I couldn't get the app to crash. Here is the about:memory report:
https://drive.google.com/file/d/0B90X-uehgobJR1czVUZXR2NFQWs/view?usp=sharing
Flags: needinfo?(jescalle)
(In reply to jescalle from comment #24)
> (In reply to Jeff Muizelaar [:jrmuizel] from comment #23)
> > Or more importantly can you get the about:memory report with that build?
>
> I couldn't get the app to crash.
>
> Here is the about:memory report:
> https://drive.google.com/file/d/0B90X-uehgobJR1czVUZXR2NFQWs/view?usp=sharing

Can you make the crash happen with Nightly?
Flags: needinfo?(jescalle)
(In reply to Jeff Muizelaar [:jrmuizel] from comment #25)
> (In reply to jescalle from comment #24)
> > (In reply to Jeff Muizelaar [:jrmuizel] from comment #23)
> > > Or more importantly can you get the about:memory report with that build?
> >
> > I couldn't get the app to crash.
> >
> > Here is the about:memory report:
> > https://drive.google.com/file/d/0B90X-uehgobJR1czVUZXR2NFQWs/view?usp=sharing
>
> Can you make the crash happen with Nightly?

If not, can you find a regression window using http://mozilla.github.io/mozregression/
Whiteboard: [MemShrink:P1] → [MemShrink:P1][gfx-noted]
Flags: needinfo?(ssimon)
The crash IDs so far are not out-of-memory issues; they are other crashes. To focus on the OOM cases, I suspect we should not be fiddling with the settings but rather letting the videos sit and play for a long time. I think some user in another bug said 6-7 minutes. Jeff/cpearce, does that sound reasonable?
(In reply to David Major [:dmajor] (UTC+13) from comment #29)
> The crash IDs so far are not out-of-memory issues; they are other crashes.
> To focus on the OOM cases, I suspect we should not be fiddling with the
> settings but rather letting the videos sit and play for a long time. I think
> some user in another bug said 6-7 minutes. Jeff/cpearce, does that sound
> reasonable?

Sure.
Priority: -- → P1
We received the machine that was ordered on eBay, but it doesn't have the exact specs that are described in Comment 0: Dell Latitude E6530, Intel Core i5-3230M, 2.60 GHz, 4 GB RAM, Intel HD Graphics 4000.

Graphics
--------
Adapter Description: Intel(R) HD Graphics 4000
Adapter Description (GPU #2): NVIDIA NVS 5200M
Adapter Drivers: igdumd64 igd10umd64 igd10umd64 igdumd32 igd10umd32 igd10umd32
Adapter Drivers (GPU #2): nvd3dumx,nvwgf2umx,nvwgf2umx nvd3dum,nvwgf2um,nvwgf2um
Adapter RAM: Unknown
Adapter RAM (GPU #2): 1024
Device ID: 0x0166
Device ID (GPU #2): 0x0dfc
Direct2D Enabled: Blocked for your graphics driver version.
DirectWrite Enabled: false (6.2.9200.16571)
Driver Date: 2-1-2012
Driver Date (GPU #2): 10-28-2013
Driver Version: 8.15.10.2639
Driver Version (GPU #2): 9.18.13.2762
GPU #2 Active: false
GPU Accelerated Windows: 0/1 Basic (OMTC) Blocked for your graphics driver version.
Subsys ID: 05351028
Subsys ID (GPU #2): 15351028
Vendor ID: 0x8086
Vendor ID (GPU #2): 0x10de
WebGL Renderer: Blocked for your graphics driver version.
windowLayerManagerRemote: true
AzureCanvasBackend: skia
AzureContentBackend: cairo
AzureFallbackCanvasBackend: cairo
AzureSkiaAccelerated: 0

I will continue running scenarios on the machine, playing long YouTube videos, to see what happens.
We found two machines in Auckland that can reproduce this (or at least one variant of it). It seems the key is to have a video in a *background* tab. A bunch of d3d9 textures accumulate until the entire address space is consumed, and they suddenly go away when a composite happens. ajones/mattwoodrow are investigating.
It looks like creating the D3D11 textures for videos that won't actually be visible causes an accumulation of memory.

It appears that causing the browser to composite releases all this memory, so this may be related to the GPU pipeline being flushed.

Lazily creating the D3D11 textures when we know we'll actually use them fixes this class of memory leak, though we don't yet understand the exact mechanism behind why this helps.
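A minimal sketch of the lazy-creation idea (not the actual patch; the class name LazySharedTexture and its members are hypothetical, only ID3D11Device::OpenSharedResource is real API): defer opening the shared handle until the texture is first needed for compositing, instead of doing it eagerly for every decoded frame.

#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

class LazySharedTexture {
public:
  LazySharedTexture(ID3D11Device* aDevice, HANDLE aSharedHandle)
    : mDevice(aDevice), mHandle(aSharedHandle) {}

  // Open the shared handle only when the texture is first used for
  // compositing, not when the video frame arrives.
  ID3D11Texture2D* GetTexture() {
    if (!mTexture) {
      HRESULT hr = mDevice->OpenSharedResource(
          mHandle, __uuidof(ID3D11Texture2D),
          reinterpret_cast<void**>(mTexture.GetAddressOf()));
      if (FAILED(hr)) {
        return nullptr;
      }
    }
    return mTexture.Get();
  }

private:
  ComPtr<ID3D11Device> mDevice;     // compositor's device
  HANDLE mHandle;                   // shared handle from the video decoder
  ComPtr<ID3D11Texture2D> mTexture; // opened lazily on first use
};

For frames in background tabs that are never composited, GetTexture() is simply never called, so no D3D11-side object is created for them.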
Assignee: nobody → matt.woodrow
Attachment #8563772 - Flags: review?(jmuizelaar)
Attachment #8563772 - Flags: review?(jmuizelaar) → review+
(In reply to Matt Woodrow (:mattwoodrow) from comment #34)
> Created attachment 8563772 [details] [diff] [review]
> lazily-open-shared-handl
>
> It looks like creating the D3D11 textures for videos that won't actually be
> visible causes an accumulation of memory.
>
> It appears that causing the browser to composite releases all this memory,
> so this may be related to the GPU pipeline being flushed.
>
> Lazily creating the D3D11 textures when we know we'll actually use them
> fixes this class of memory leak, though we don't yet understand the exact
> mechanism behind why this helps.

It would be nice to understand why. Bas, any thoughts?
Flags: needinfo?(bas)
Bas, some interesting information:

We are seeing that the number of IDirect3DTexture9 objects alive in our address space accumulates during the 'leak', so we assume something must be holding refs to these objects.

Doing anything in the foreground tab that causes a composite (scrolling, a page with animations, etc.) causes all the memory to be released, which I'm attributing to the pipeline flush, but it could be something else.

My theory is that the HANDLE -> ID3D11Texture2D conversion done by OpenSharedResource behaves lazily in some way (on some drivers), and puts a task in the GPU pipeline. The weird part is that this would have to look up the IDirect3DTexture9 userspace object and addref it, rather than just the kernel one.

I can probably write a patch to track the reference counts on this object across the call next week, when I'll have access to the machine that reproduces it. Very open to other theories that explain the behaviour we're seeing :)
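A rough instrumentation sketch along those lines (the helper names are hypothetical, and IUnknown::Release's return value is only a diagnostic hint): read the IDirect3DTexture9 refcount before and after OpenSharedResource to see whether the driver takes an extra reference.

#include <d3d9.h>
#include <d3d11.h>
#include <cstdio>

// Release() returns the remaining reference count; good enough as a
// diagnostic even though it is not guaranteed to be meaningful in general.
static ULONG CurrentRefCount(IUnknown* aObj) {
  aObj->AddRef();
  return aObj->Release();
}

// Hypothetical helper: log whether OpenSharedResource bumps the refcount of
// the originating IDirect3DTexture9 object.
static void TrackRefAcrossOpen(IDirect3DTexture9* aD3D9Texture,
                               ID3D11Device* aDevice,
                               HANDLE aSharedHandle) {
  ULONG before = CurrentRefCount(aD3D9Texture);

  ID3D11Texture2D* d3d11Texture = nullptr;
  HRESULT hr = aDevice->OpenSharedResource(
      aSharedHandle, __uuidof(ID3D11Texture2D),
      reinterpret_cast<void**>(&d3d11Texture));

  ULONG after = CurrentRefCount(aD3D9Texture);
  printf("OpenSharedResource hr=0x%08lx, d3d9 refcount %lu -> %lu\n",
         static_cast<unsigned long>(hr), before, after);

  if (d3d11Texture) {
    d3d11Texture->Release();
  }
}

If the count goes up across the call and only drops after a composite, that would support the lazy-conversion theory.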
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla38
[Tracking Requested - why for this release]: Given we are trying to ship MSE on 37 and we know that memory issues were one reason we disabled it on 36, I think we should track this and make sure it lands for 37, ideally before we hit beta, but probably after some baking.
Comment on attachment 8563772 [details] [diff] [review]
lazily-open-shared-handl

Approval Request Comment
[Feature/regressing bug #]: OMTC
[User impact if declined]: Leaking memory when playing video (HTML5 and MSE) in background tabs on some GPUs/drivers
[Describe test coverage new/current, TreeHerder]: Tested manually
[Risks and why]: Very low risk, just delays initialization of texture objects until needed.
[String/UUID change made/needed]: None.
Attachment #8563772 - Flags: approval-mozilla-beta?
Attachment #8563772 - Flags: approval-mozilla-aurora?
Given how little time remains, I think we could live without this on beta. The GPU-memory related crashes went way down after MSE was pref'ed off for 36.
Comment on attachment 8563772 [details] [diff] [review]
lazily-open-shared-handl

Very late in the 36 cycle and low impact. Let it ride the train from 37.
Attachment #8563772 - Flags: approval-mozilla-beta?
Attachment #8563772 - Flags: approval-mozilla-beta-
Attachment #8563772 - Flags: approval-mozilla-aurora?
Attachment #8563772 - Flags: approval-mozilla-aurora+
Comment on attachment 8563772 [details] [diff] [review]
lazily-open-shared-handl

[Triage Comment]
Taking it after discussion with Anthony Jones.
Attachment #8563772 - Flags: approval-mozilla-release+
Flags: needinfo?(ssimon)
Flags: needinfo?(bas)