Closed Bug 1378528 Opened 7 years ago Closed 6 years ago

Webrender consumes excessive amounts of memory.

Categories

(Core :: Graphics: WebRender, defect, P1)

x86_64
Windows 10
defect

Tracking


RESOLVED FIXED
Tracking Status
firefox56 --- unaffected
firefox57 --- unaffected

People

(Reporter: bugzilla.mozilla.org, Assigned: aosmond)

References

(Blocks 1 open bug)

Details

(Whiteboard: [wr-reserve] [gfx-noted][needs-investigation])

Currently I'm seeing 6.3GB private bytes, 12GB working set size, 15.6GB virtual size consumed by the GPU process with webrender enabled.

nvidia driver: Version 22.21.13.8205

about:config:
gfx.webrender.enabled     true
gfx.webrender.force-angle false
gfx.webrendest.enabled    true

memory report:

GPU (pid 152248)

Explicit Allocations

1,758.04 MB (100.0%) -- explicit
β”œβ”€β”€1,744.49 MB (99.23%) ── heap-unclassified
└─────13.55 MB (00.77%) ++ (5 tiny)

Other Measurements

134,217,727.94 MB (100.0%) -- address-space
β”œβ”€β”€134,205,137.77 MB (99.99%) ── free(segments=8751)
└───────12,590.17 MB (00.01%) ++ (2 tiny)

1,758.04 MB (100.0%) -- heap-committed
β”œβ”€β”€1,746.24 MB (99.33%) ── allocated
└─────11.80 MB (00.67%) ── overhead

10 (100.0%) -- observer-service
└──10 (100.0%) -- referent
β”œβ”€β”€10 (100.0%) ── strong
└───0 (00.00%) ++ weak

1,746.24 MB ── heap-allocated
1.00 MB ── heap-chunksize
1,923.00 MB ── heap-mapped
4,763.62 MB ── private
8,919.81 MB ── resident
7,474.79 MB ── resident-unique
39.02 MB ── system-heap-allocated
12,588.11 MB ── vsize
131,741,728.13 MB ── vsize-max-contiguous
Blocks: webrender
If you turn on gfx.webrender.profiler.enabled, you should get some numbers for the texture cache. Are those numbers high? We currently do rounded clipping using mask layers, and that can cause very high memory usage. Bug 1375059 should fix this.
I did reset my gfx process, so I'm not seeing quite the same figures as before. Currently it's at 3.5/8.4/10G private/wss/virtual. It seems to climb with uptime or active use; shortly after the reset it was at 2GB each. The texture cache looks fairly stable, though: A8 cache at 192MB, RGBA8 cache at 3.5GB. I'll add another data point once it has climbed back up to the originally reported numbers.
Whiteboard: [gfx-noted]
Could it be that textures are allocated twice? Process Explorer shows 3GB GPU memory, but the working set size is about double that.

16 A8 texture pages, 256MB
48 RGBA8 pages, 3181.75MB
251 font templates, 435 MB

GPU (pid 249976)

Explicit Allocations

2,306.97 MB (100.0%) -- explicit
β”œβ”€β”€2,292.28 MB (99.36%) ── heap-unclassified
└─────14.68 MB (00.64%) ++ (5 tiny)

Other Measurements

134,217,727.94 MB (100.0%) -- address-space
β”œβ”€β”€134,201,710.05 MB (99.99%) ── free(segments=7628)
└───────16,017.89 MB (00.01%) -- (2 tiny)
β”œβ”€β”€15,571.27 MB (00.01%) -- commit
β”‚ β”œβ”€β”€10,204.63 MB (00.01%) -- mapped
β”‚ β”‚ β”œβ”€β”€10,135.16 MB (00.01%) ── readwrite(segments=7696)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€69.23 MB (00.00%) ── readonly(segments=19)
β”‚ β”‚ └───────0.23 MB (00.00%) ── writecopy(segments=1)
β”‚ β”œβ”€β”€β”€5,158.40 MB (00.00%) -- private
β”‚ β”‚ β”œβ”€β”€2,604.13 MB (00.00%) ── readwrite+writecombine(segments=68)
β”‚ β”‚ β”œβ”€β”€2,515.75 MB (00.00%) ── readwrite(segments=1159)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€37.74 MB (00.00%) ── readwrite+stack(segments=33)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€0.61 MB (00.00%) ── readwrite+guard(segments=33)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€0.13 MB (00.00%) ── execute-readwrite(segments=1)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€0.03 MB (00.00%) ── noaccess(segments=7)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€0.02 MB (00.00%) ── readwrite+nocache(segments=1)
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€0.00 MB (00.00%) ── execute-read(segments=1)
β”‚ β”‚ └──────0.00 MB (00.00%) ── readonly(segments=1)
β”‚ └─────208.25 MB (00.00%) -- image
β”‚ β”œβ”€β”€123.20 MB (00.00%) ── execute-read(segments=72)
β”‚ β”œβ”€β”€β”€75.27 MB (00.00%) ── readonly(segments=233)
β”‚ β”œβ”€β”€β”€β”€8.36 MB (00.00%) ── writecopy(segments=75)
β”‚ └────1.42 MB (00.00%) ── readwrite(segments=128)
└─────446.62 MB (00.00%) -- reserved
β”œβ”€β”€432.29 MB (00.00%) ── private(segments=835)
└───14.33 MB (00.00%) ── mapped(segments=4)

2,306.97 MB (100.0%) -- heap-committed
β”œβ”€β”€2,294.03 MB (99.44%) ── allocated
└─────12.94 MB (00.56%) ── overhead

10 (100.0%) -- observer-service
└──10 (100.0%) -- referent
β”œβ”€β”€10 (100.0%) ── strong
└───0 (00.00%) ++ weak

2,294.03 MB ── heap-allocated
1.00 MB ── heap-chunksize
2,484.00 MB ── heap-mapped
5,202.66 MB ── private
6,313.79 MB ── resident
5,866.57 MB ── resident-unique
90.17 MB ── system-heap-allocated
16,017.83 MB ── vsize
131,306,666.69 MB ── vsize-max-contiguous
Priority: P3 → P2
Whiteboard: [gfx-noted] → [wr-mvp] [gfx-noted]
Priority: P2 → P3
Whiteboard: [wr-mvp] [gfx-noted] → [wr-reserve] [gfx-noted]
Since this bug was filed we've fixed a number of memory leaks. Do you still see this problem?
Flags: needinfo?(bugzilla.mozilla.org)
The situation improved somewhat, but there's still a huge amount of shareable mappings: 41k blocks, 12GB committed, 8GB working set size, according to vmmap.

GPU (pid 5656)

Explicit Allocations

72,187,904 B (100.0%) -- explicit
β”œβ”€β”€68,016,128 B (94.22%) ── heap-unclassified
β”œβ”€β”€β”€3,733,008 B (05.17%) -- heap-overhead
β”‚ β”œβ”€β”€2,249,168 B (03.12%) ── bin-unused
β”‚ β”œβ”€β”€β”€β”€959,552 B (01.33%) ── bookkeeping
β”‚ └────524,288 B (00.73%) ── page-cache
β”œβ”€β”€β”€β”€β”€148,736 B (00.21%) ── telemetry
β”œβ”€β”€β”€β”€β”€144,176 B (00.20%) -- xpcom
β”‚ β”œβ”€β”€133,072 B (00.18%) ── component-manager
β”‚ └───11,104 B (00.02%) ── category-manager
β”œβ”€β”€β”€β”€β”€143,360 B (00.20%) -- atoms
β”‚ β”œβ”€β”€143,360 B (00.20%) ── table
β”‚ └────────0 B (00.00%) -- dynamic
β”‚ β”œβ”€β”€0 B (00.00%) ── atom-objects
β”‚ └──0 B (00.00%) ── unshared-buffers
└───────2,496 B (00.00%) ── profiler/profiler-state

Other Measurements

140,737,488,289,792 B (100.0%) -- address-space
β”œβ”€β”€138,524,216,004,608 B (98.43%) ── free(segments=32325)
β”œβ”€β”€β”€β”€2,199,435,616,256 B (01.56%) -- reserved
β”‚ β”œβ”€β”€2,199,019,790,336 B (01.56%) ── mapped(segments=14)
β”‚ └────────415,825,920 B (00.00%) ── private(segments=892)
└───────13,836,668,928 B (00.01%) -- commit
β”œβ”€β”€12,964,724,736 B (00.01%) -- mapped
β”‚ β”œβ”€β”€12,919,877,632 B (00.01%) ── readonly(segments=40672)
β”‚ β”œβ”€β”€β”€β”€β”€β”€25,030,656 B (00.00%) ── readwrite(segments=394)
β”‚ β”œβ”€β”€β”€β”€β”€β”€19,558,400 B (00.00%) ── noaccess(segments=60)
β”‚ └─────────258,048 B (00.00%) ── writecopy(segments=1)
β”œβ”€β”€β”€β”€β”€661,016,576 B (00.00%) -- private
β”‚ β”œβ”€β”€503,603,200 B (00.00%) ── readwrite(segments=861)
β”‚ β”œβ”€β”€135,741,440 B (00.00%) ── readwrite+writecombine(segments=95)
β”‚ β”œβ”€β”€β”€21,090,304 B (00.00%) ── readwrite+stack(segments=24)
β”‚ β”œβ”€β”€β”€β”€β”€β”€417,792 B (00.00%) ── readwrite+guard(segments=24)
β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€77,824 B (00.00%) ── noaccess(segments=19)
β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€73,728 B (00.00%) ── execute-read(segments=3)
β”‚ └───────12,288 B (00.00%) ── readonly(segments=2)
└─────210,927,616 B (00.00%) -- image
β”œβ”€β”€125,222,912 B (00.00%) ── execute-read(segments=82)
β”œβ”€β”€β”€75,149,312 B (00.00%) ── readonly(segments=260)
β”œβ”€β”€β”€β”€9,060,352 B (00.00%) ── writecopy(segments=73)
└────1,495,040 B (00.00%) ── readwrite(segments=135)

72,187,904 B (100.0%) -- heap-committed
β”œβ”€β”€68,454,896 B (94.83%) ── allocated
└───3,733,008 B (05.17%) ── overhead

11 (100.0%) -- observer-service
└──11 (100.0%) -- referent
β”œβ”€β”€10 (90.91%) ── strong
└───1 (09.09%) -- weak
β”œβ”€β”€1 (09.09%) ── alive
└──0 (00.00%) ── dead

249,744 B ── gfx-surface-win32
68,454,896 B ── heap-allocated
1,048,576 B ── heap-chunksize
92,274,688 B ── heap-mapped
704,016,384 B ── private
8,664,616,960 B ── resident
8,608,530,432 B ── resident-unique
18,753,727 B ── system-heap-allocated
2,213,272,219,648 B ── vsize
136,289,257,259,008 B ── vsize-max-contiguous
Flags: needinfo?(bugzilla.mozilla.org)
The above is from Nightly Build 20180418112600 with gfx.webrender.all = true, gfx.webrender.enabled = true
Thanks, looks like we still have something to fix.
The growth seems to be unbounded. Over the course of several days it has bloated up to

└───────33,309,036,544 B (00.02%) -- commit
β”œβ”€β”€32,439,861,248 B (00.02%) -- mapped
β”‚ β”œβ”€β”€32,345,358,336 B (00.02%) ── readonly(segments=100947)

leading to system commit exhaustion on Windows. Issues like this used to be easier to spot because they showed up in the virtual memory column in Process Explorer. Now every Firefox process makes a 2TB reservation (which doesn't count towards system commit), and that gets lumped into the same column as the readonly mappings (which do count towards system commit).
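For anyone unfamiliar with the Windows terminology in the comment above: reserving address space and committing pages are separate operations, and only committed pages are charged against the system commit limit, which is why the 2TB reservation is harmless while the readonly mapped commit is not. A minimal Win32 sketch of the difference (illustration only, not Firefox code):

// reserve_vs_commit.cpp -- illustration only, not Firefox code.
// Reserved address space inflates vsize but is free with respect to the
// commit limit; committed pages are what can exhaust it.
#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T size = 1ull << 30;  // 1 GiB

    // Reserve address space only: shows up in vsize, but does NOT
    // count towards the system commit charge.
    void* reserved = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);

    // Commit the pages: now they count towards system commit, even
    // before they are ever touched.
    void* committed = VirtualAlloc(reserved, size, MEM_COMMIT, PAGE_READWRITE);

    std::printf("reserved=%p committed=%p\n", reserved, committed);

    VirtualFree(reserved, 0, MEM_RELEASE);  // releases both states
    return 0;
}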
Still an issue as of Firefox 63.0a1 20180727103347
Priority: P3 → P2
Whiteboard: [wr-reserve] [gfx-noted] → [wr-reserve] [gfx-noted][needs-investigation]
After sleeping on this one, I want to make this a P1 (at least until we know how bad it is and what we need to do to fix it).
Priority: P2 → P1
This will be easier to diagnose once we have memory reporters.
Assignee: nobody → bobbyholley
Depends on: 1480293
We've got some memory reporters landed. Can you take a new memory report?
Flags: needinfo?(bugzilla.mozilla.org)
This is the GPU process diff over 1 day of uptime as of Nightly 20180915115425.

Explicit Allocations

36.90 MB (100.0%) -- explicit
β”œβ”€β”€27.63 MB (74.88%) ── heap-unclassified
β”œβ”€β”€β”€9.23 MB (25.02%) -- heap-overhead
β”‚ β”œβ”€β”€8.53 MB (23.12%) ── bin-unused
β”‚ β”œβ”€β”€0.59 MB (01.59%) ── bookkeeping
β”‚ └──0.12 MB (00.32%) ── page-cache
└───0.04 MB (00.10%) ++ (2 tiny)

Other Measurements

0.00 MB (100.0%) -- address-space
β”œβ”€β”€-134,210,571.55 MB (100.0%) ── free(segments=8581) [-]
β”œβ”€β”€134,207,226.83 MB (100.0%) ── free(segments=22684) [+]
β”œβ”€β”€3,331.97 MB (100.0%) -- commit
β”‚ β”œβ”€β”€3,238.25 MB (100.0%) -- mapped
β”‚ β”‚ β”œβ”€β”€5,156.51 MB (100.0%) ── readonly(segments=22852) [+]
β”‚ β”‚ β”œβ”€β”€-1,922.00 MB (100.0%) ── readonly(segments=8422) [-]
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€20.76 MB (100.0%) ── readwrite(segments=550) [+]
β”‚ β”‚ └────-17.02 MB (100.0%) ── readwrite(segments=469) [-]
β”‚ β”œβ”€β”€β”€β”€β”€93.69 MB (100.0%) -- private
β”‚ β”‚ β”œβ”€β”€510.94 MB (100.0%) ── readwrite(segments=1020) [+]
β”‚ β”‚ β”œβ”€β”€-462.36 MB (100.0%) ── readwrite(segments=241) [-]
β”‚ β”‚ β”œβ”€β”€72.77 MB (100.0%) ── readwrite+writecombine(segments=47) [+]
β”‚ β”‚ β”œβ”€β”€-27.76 MB (100.0%) ── readwrite+writecombine(segments=72) [-]
β”‚ β”‚ β”œβ”€β”€22.11 MB (100.0%) ── readwrite+stack(segments=27) [+]
β”‚ β”‚ β”œβ”€β”€-22.05 MB (100.0%) ── readwrite+stack(segments=25) [-]
β”‚ β”‚ β”œβ”€β”€β”€0.45 MB (100.0%) ── readwrite+guard(segments=27) [+]
β”‚ β”‚ β”œβ”€β”€-0.42 MB (100.0%) ── readwrite+guard(segments=25) [-]
β”‚ β”‚ β”œβ”€β”€β”€0.04 MB (100.0%) ── noaccess(segments=10) [+]
β”‚ β”‚ └──-0.03 MB (100.0%) ── noaccess(segments=7) [-]
β”‚ └──────0.03 MB (100.0%) -- image
β”‚ β”œβ”€β”€138.30 MB (100.0%) ── execute-read(segments=87) [+]
β”‚ β”œβ”€β”€-138.30 MB (100.0%) ── execute-read(segments=86) [-]
β”‚ β”œβ”€β”€70.70 MB (100.0%) ── readonly(segments=274) [+]
β”‚ β”œβ”€β”€-70.68 MB (100.0%) ── readonly(segments=271) [-]
β”‚ β”œβ”€β”€1.17 MB (100.0%) ── readwrite(segments=125) [+]
β”‚ └──-1.16 MB (100.0%) ── readwrite(segments=124) [-]
└──12.75 MB (100.0%) -- reserved
β”œβ”€β”€4,487.76 MB (100.0%) ── private(segments=978) [+]
β”œβ”€β”€-4,468.79 MB (100.0%) ── private(segments=261) [-]
└──-6.22 MB (100.0%) ── mapped(segments=4)

36.89 MB (100.0%) -- heap-committed
β”œβ”€β”€27.66 MB (74.97%) ── allocated
└───9.23 MB (25.03%) ── overhead

0.23 MB ── gfx-surface-win32
27.66 MB ── heap-allocated
43.00 MB ── heap-mapped
101.59 MB ── private
2,545.54 MB ── resident
2,580.44 MB ── resident-unique
1.17 MB ── system-heap-allocated
3,344.72 MB ── vsize
-4,388.00 MB ── vsize-max-contiguous
Flags: needinfo?(bugzilla.mozilla.org)
I realized today that this is about shared memory, not explicit heap allocations, so the memory reporters won't help here. If you go to about:memory and click "minimize memory usage", does mapped memory return to a normal level? That will tell us whether this is a bona fide leak or an out-of-control cache.
Flags: needinfo?(bugzilla.mozilla.org)
> If you go to about:memory and click "minimize memory usage", does mapped
> memory return to a normal level?

It does not.
Flags: needinfo?(bugzilla.mozilla.org)
I see the readonly segments are quite large. I map images in as read-only, so this could be related.
Flags: needinfo?(aosmond)
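For context on why those mappings land under commit/mapped/readonly in the reports above: an image buffer shared with the GPU process is typically mapped read-only on the consuming side, and every committed page of that view is charged against system commit even though the data lives in a shared section. The following is a minimal Win32 sketch of that pattern, purely for illustration; the real Gecko path goes through its shared-surface and IPC machinery rather than raw Win32 calls:

// shared_readonly_view.cpp -- illustration only, not the actual Gecko code.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main() {
    const DWORD size = 4 * 1024 * 1024;  // e.g. a 1024x1024 BGRA surface

    // Producer side: create a pagefile-backed section and fill it.
    HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0, size, nullptr);
    void* writeView = MapViewOfFile(section, FILE_MAP_WRITE, 0, 0, size);
    std::memset(writeView, 0xFF, size);
    UnmapViewOfFile(writeView);

    // Consumer side (think: the GPU process): map the same section read-only.
    // Each such view shows up in about:memory under commit/mapped/readonly,
    // and its committed pages count against the system commit limit.
    void* readView = MapViewOfFile(section, FILE_MAP_READ, 0, 0, size);
    std::printf("read-only view at %p\n", readView);

    UnmapViewOfFile(readView);
    CloseHandle(section);
    return 0;
}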
Assignee: bobbyholley → aosmond
I'm currently extending the memory reports to include the shared surfaces in SharedSurfacesParent, attributed to their associated content process. This should line up with what is in the surface cache; if there is a leak, it will not.
(In reply to Andrew Osmond [:aosmond] from comment #18)
> I'm currently extending the memory reports to include the shared surfaces in
> SharedSurfacesParent associated with each content process in the report.
> This should line up with what is in the surface cache, but if there is a
> leak, it will not.

Good plan. Make sure those end up in "Other Measurements" rather than "Explicit Allocations", since they're not heap allocations and treating them as such would mess up the heap-unclassified math and confuse DMD.
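As an aside for readers, the distinction being drawn here maps onto how reporters are written: paths under "explicit/" feed the heap-unclassified calculation, while non-explicit, non-heap paths appear under "Other Measurements". A hedged sketch of what such a reporter could look like; the report path and the TotalSharedSurfaceBytes() helper are invented for illustration and are not the actual names used in Gecko:

// Sketch only: a KIND_NONHEAP reporter whose path does not start with
// "explicit/", so it appears under "Other Measurements" and does not
// perturb heap-unclassified or DMD.
#include "nsIMemoryReporter.h"

class SharedSurfacesReporter final : public nsIMemoryReporter {
public:
  NS_DECL_ISUPPORTS

  NS_IMETHOD CollectReports(nsIHandleReportCallback* aHandleReport,
                            nsISupports* aData, bool aAnonymize) override {
    // TotalSharedSurfaceBytes() is a hypothetical helper standing in for
    // however the real code sums the mapped shared-surface sizes.
    MOZ_COLLECT_REPORT(
        "gfx/webrender/images/shared-surfaces", KIND_NONHEAP, UNITS_BYTES,
        TotalSharedSurfaceBytes(),
        "Image surfaces shared with the GPU process via SharedSurfacesParent.");
    return NS_OK;
  }

private:
  ~SharedSurfacesReporter() = default;
  static int64_t TotalSharedSurfaceBytes();  // hypothetical
};

NS_IMPL_ISUPPORTS(SharedSurfacesReporter, nsIMemoryReporter)

// Registration, typically done once at startup:
//   mozilla::RegisterStrongMemoryReporter(new SharedSurfacesReporter());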
(In reply to Bobby Holley (:bholley) from comment #19)
> (In reply to Andrew Osmond [:aosmond] from comment #18)
> > I'm currently extending the memory reports to include the shared surfaces in
> > SharedSurfacesParent associated with each content process in the report.
> > This should line up with what is in the surface cache, but if there is a
> > leak, it will not.
>
> Good plan. Make sure those end up in Other Measurements rather than
> "Explicit Allocations", since they're not heap allocations and treating them
> as such would mess up the heap-unclassified math and confuse DMD.

I'll get you to review it to make sure I do this right, as I will have some questions once it is ready :). In the meantime, what I have has been enough to diagnose at least one case where we "leak" the images:

1) Load some web page, and in another tab, go to about:memory and minimize memory.
2) WebRenderBridgeParent::RemoveExternalImage gets called on a whole bunch of external images. The images want to see epoch X before being released, and AsyncImagePipelineManager::HoldExternalImage is called.
3) Eventually the WebRenderBridgeParent is destroyed, but the AsyncImagePipelineManager remains, waiting on the pipeline events for the epoch.
4) AsyncImagePipelineManager::ProcessPipelineRendered is last called for epoch X-1, and the images never get released in the GPU process.

Working on resolving that now.
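To make the failure mode in steps 2-4 concrete, here is a heavily simplified sketch of an epoch-gated release scheme of the kind described above. The types and names are invented for illustration; this is not the actual AsyncImagePipelineManager code. Held images are only released once a rendered-epoch notification at or past their hold epoch arrives, so if the final notification stops at X-1 while images are waiting on X, they are never freed:

#include <cstdint>
#include <cstdio>
#include <map>

// Simplified stand-ins for the real WebRender types.
using Epoch = uint64_t;
struct ExternalImage { uint64_t id; };

class PipelineImageHolder {
public:
  // Step 2: an image slated for removal is held until the compositor has
  // rendered the epoch that no longer references it.
  void HoldExternalImage(Epoch aEpoch, ExternalImage aImage) {
    mHeld.emplace(aEpoch, aImage);
  }

  // Step 4: called when the pipeline has rendered up to aRenderedEpoch.
  // Everything waiting on an epoch <= aRenderedEpoch can be released.
  void ProcessPipelineRendered(Epoch aRenderedEpoch) {
    auto end = mHeld.upper_bound(aRenderedEpoch);
    for (auto it = mHeld.begin(); it != end; ++it) {
      std::printf("releasing image %llu\n",
                  (unsigned long long)it->second.id);
    }
    mHeld.erase(mHeld.begin(), end);
  }

  size_t PendingCount() const { return mHeld.size(); }

private:
  std::multimap<Epoch, ExternalImage> mHeld;
};

int main() {
  PipelineImageHolder holder;
  holder.HoldExternalImage(/*epoch*/ 10, {1});  // waits for epoch 10 (the "X" above)

  holder.ProcessPipelineRendered(9);            // last notification is X-1...
  // ...and the bridge goes away before epoch 10 is ever rendered:
  std::printf("still held: %zu\n", holder.PendingCount());  // 1 -> "leaked"
  return 0;
}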
Depends on: 1492925
See Also: → 1492930
If the AWSY numbers are any judge, bug 1492925 resolved a leak, but I suspect not the biggest one. I was hoping it would close the gap between linux64 and linux64-qr on the resident memory size. I still see a handful of chrome images missing from the surface cache but mapped in the GPU process -- I'll hunt that down today, along with polishing off the memory reporter.
Flags: needinfo?(aosmond)
Is that fix in nightly 20180921100113? If so I'll keep that build running for a few days.
(In reply to The 8472 from comment #22)
> Is that fix in nightly 20180921100113? If so I'll keep that build running
> for a few days.

Yes it is. Thank you!
See Also: → 1493196
Improved image memory reporting is now in Nightly, build 20180925220052 and later. You can go to about:config and turn on gfx.mem.debug-reporting to get extra-detailed image-related reports. No need to restart; it takes effect immediately. It should flag any image mappings the GPU process has that are missing from the content process as candidates for having been leaked; you will see a section under gfx/webrender/images/owner_cache_missing in the content/main processes for that. Capturing a report *before* the leak happens is quite useful as well, since I can use the external IDs added to the report (globally unique for the life of the browser) to determine exactly which image got leaked, but just knowing whether there are any image-related leaks is still quite useful.
Flags: needinfo?(bugzilla.mozilla.org)
Errr make that "image.mem.debug-reporting".
Bug 1493196 may also help here.
I'm getting crashes when running the memory reporters now. This might be related since the stacks refer to webrender. https://crash-stats.mozilla.com/report/index/25be0879-304a-4f72-8390-73bd40180926
(In reply to The 8472 from comment #27)
> I'm getting crashes when running the memory reporters now. This might be
> related since the stacks refer to webrender.
>
> https://crash-stats.mozilla.com/report/index/25be0879-304a-4f72-8390-73bd40180926

Hm, that's definitely a crash when running the WR memory reporter. Looks like RenderThread::Get()->Loop() is returning null. Can you file a separate bug for it? Is it reproducible?
Some great news and a bit of bad news. Diffing memory reports
A: build 20180915115425, 1 day uptime
B: build 20180927100044, 2 days uptime
shows:

458.92 MB (100.0%) -- explicit
β”œβ”€β”€469.29 MB (102.26%) ── heap-unclassified
β”œβ”€β”€β”€-5.28 MB (-1.15%) -- gfx/webrender
β”‚ β”œβ”€β”€-17.01 MB (-3.71%) -- gpu-cache
β”‚ β”‚ β”œβ”€β”€-16.00 MB (-3.49%) ── cpu-mirror
β”‚ β”‚ └───-1.01 MB (-0.22%) ── metadata
β”‚ β”œβ”€β”€10.50 MB (02.29%) -- resource-cache
β”‚ β”‚ β”œβ”€β”€10.46 MB (02.28%) ── fonts [+]
β”‚ β”‚ └───0.03 MB (00.01%) ── images [+]
β”‚ └───1.23 MB (00.27%) ++ (4 tiny)
β”œβ”€β”€β”€-5.13 MB (-1.12%) -- heap-overhead
β”‚ β”œβ”€β”€-8.20 MB (-1.79%) ── bin-unused
β”‚ └───3.07 MB (00.67%) ++ (2 tiny)
└────0.03 MB (00.01%) ++ (3 tiny)

Other Measurements

0.00 MB (100.0%) -- address-space
β”œβ”€β”€134,211,726.25 MB (100.0%) ── free(segments=750) [+]
β”œβ”€β”€-134,207,226.83 MB (100.0%) ── free(segments=22684) [-]
β”œβ”€β”€-4,559.23 MB (100.0%) -- commit
β”‚ β”œβ”€β”€-4,913.64 MB (100.0%) -- mapped
β”‚ β”‚ β”œβ”€β”€-5,156.51 MB (100.0%) ── readonly(segments=22852) [-]
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€246.80 MB (100.0%) ── readonly(segments=282) [+]
β”‚ β”‚ β”œβ”€β”€β”€β”€β”€-20.76 MB (100.0%) ── readwrite(segments=550) [-]
β”‚ β”‚ └──────16.83 MB (100.0%) ── readwrite(segments=203) [+]
β”‚ β”œβ”€β”€β”€β”€β”€353.23 MB (100.0%) -- private
β”‚ β”‚ β”œβ”€β”€878.80 MB (100.0%) ── readwrite(segments=1040) [+]
β”‚ β”‚ β”œβ”€β”€-510.94 MB (100.0%) ── readwrite(segments=1020) [-]
β”‚ β”‚ β”œβ”€β”€-72.77 MB (100.0%) ── readwrite+writecombine(segments=47) [-]
β”‚ β”‚ β”œβ”€β”€β”€58.21 MB (100.0%) ── readwrite+writecombine(segments=48) [+]
β”‚ β”‚ β”œβ”€β”€-22.11 MB (100.0%) ── readwrite+stack(segments=27) [-]
β”‚ β”‚ β”œβ”€β”€β”€22.02 MB (100.0%) ── readwrite+stack(segments=25) [+]
β”‚ β”‚ β”œβ”€β”€β”€-0.45 MB (100.0%) ── readwrite+guard(segments=27) [-]
β”‚ β”‚ β”œβ”€β”€β”€β”€0.42 MB (100.0%) ── readwrite+guard(segments=25) [+]
β”‚ β”‚ β”œβ”€β”€β”€β”€0.13 MB (100.0%) ── execute-read(segments=3) [+]
β”‚ β”‚ β”œβ”€β”€β”€-0.07 MB (100.0%) ── execute-read(segments=2) [-]
β”‚ β”‚ β”œβ”€β”€β”€-0.04 MB (100.0%) ── noaccess(segments=10) [-]
β”‚ β”‚ └────0.02 MB (100.0%) ── noaccess(segments=6) [+]
β”‚ └───────1.18 MB (100.0%) -- image
β”‚ β”œβ”€β”€138.91 MB (100.0%) ── execute-read(segments=89) [+]
β”‚ β”œβ”€β”€-138.30 MB (100.0%) ── execute-read(segments=87) [-]
β”‚ β”œβ”€β”€71.20 MB (100.0%) ── readonly(segments=280) [+]
β”‚ β”œβ”€β”€-70.70 MB (100.0%) ── readonly(segments=274) [-]
β”‚ β”œβ”€β”€8.06 MB (100.0%) ── writecopy(segments=62) [+]
β”‚ β”œβ”€β”€-8.00 MB (100.0%) ── writecopy(segments=64) [-]
β”‚ └──0.03 MB (100.0%) ── readwrite(segments=125)
└──59.81 MB (100.0%) -- reserved
β”œβ”€β”€4,547.57 MB (100.0%) ── private(segments=648) [+]
└──-4,487.76 MB (100.0%) ── private(segments=978) [-]

133.16 MB (100.0%) -- gfx
└──133.16 MB (100.0%) -- webrender
β”œβ”€β”€129.75 MB (97.44%) ++ textures
└────3.40 MB (02.56%) -- images/mapped_from_owner/pid=NNN

So the leaking readonly segments are pretty much gone, but heap-unclassified went up. For me that's in tolerable territory now, so you can close this issue if you want to.
Flags: needinfo?(bugzilla.mozilla.org)
Awesome. Both bug 1493196 and bug 1492925 have landed. Hopefully that is enough to cover everyone :).
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED