Memory leak during a Jitsi Meet conference
Categories
(Core :: WebRTC, defect)
People
(Reporter: support, Unassigned)
Details
Attachments
(6 files)
User Agent: Mozilla/5.0 (Windows NT 6.1; Win64; rv:86.1) Gecko/20100101 Firefox/86.1
Steps to reproduce:
We had a Jitsi Meet video conference with ~10 participants. The background image blur function was enabled (possibly the cause of the issue).
Actual results:
At Firefox start the process used 500 MB. Memory grew by ~1 MB/s, and after 105 minutes the process reached 4 GB. The conference had been running for 3 h when the system suddenly swapped itself to death.
Fedora Linux: 32
Firefox: 88.0.1-1
Expected results:
No leak.
Comment 1•4 years ago
The Bugbug bot thinks this bug should belong to the 'Core::WebRTC' component, and is moving the bug to that component. Please revert this change in case you think the bot is wrong.
Comment 2•4 years ago
Thanks for filing this bug! Do you know which Firefox process was consuming a large amount of memory? The about:processes page can give a summary. Your suspicion may be correct, and if it is the process that is running the webpage JS we need to make Jitsi aware of this.
It's the blue one; it increased by >100 MB in 3 minutes.
After closing this tab, the memory is -> not <- given back to the system:
5103 marius 20 0 3598456 -> 761688 <- 124096 S 4,3 4,7 10:21.84 /usr/lib64/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 282419 -parentBuildID 20210510093133 +
Comment 6•4 years ago
I tried today on Mac to repro this issue, but for me the memory usage stayed around the same value for several minutes of a Jitsi call with blurring turned on, on the Firefox side.
Does this require more than two people to be in a call?
It also happens on a solo session.
At the moment:
Fedora Linux: 32
Firefox: 88.0.1-1
Comment 8•4 years ago
Could you attach a saved report from about:memory?
It looks like the memory dump contains not only Firefox information, but sensitive data about OTHER processes and their content!
I won't send you this kind of information, as I can't check every file for its content. It dumped 190 MB of text files containing Matrix usernames, web-session user IDs, visited URLs, etc.
Comment 10•4 years ago
That makes sense. This might work to give us some information instead:
- Join a call without background blur.
- Right-click on the page and click "Inspect" to bring up the Developer Tools.
- Go to the "Memory" tab.
- Click the camera icon (upper left) to take a memory snapshot every 30 seconds or so. Is the total memory reported in each snapshot growing?
- Turn on background blur.
- Click the camera icon to take a memory snapshot every 30 seconds or so. Is the total memory reported in each snapshot growing?
Answers to these might help us locate the problem. There's also a "Record call stacks" option, but it wasn't showing a call stack for the ArrayBuffers, so I doubt that'll help.
I don't have Linux to try on, and was unable to repro on Mac. I'm seeing sustained higher memory usage (~270 MB) with the blur function on than without (~75 MB), which from the look of the snapshot is a ton of ArrayBuffer objects; that's not surprising given what it takes to blur.
Jitsi is probably creating tons of them over and over instead of reusing a pool of them, so there will be a high number of them live on average as garbage collection works to keep the count from growing unbounded (since I'm not seeing your runaway growth symptom).
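For illustration, the pooling approach suggested above could look roughly like this. This is a hypothetical sketch, not Jitsi's actual code; the buffer size and pool depth are made-up frame-sized values:

```javascript
// Hypothetical sketch of ArrayBuffer pooling: reuse a fixed set of
// buffers per video frame instead of allocating a fresh one each time,
// so the live-object count stays flat instead of sawtoothing under GC.
class BufferPool {
  constructor(size, count) {
    this.size = size;
    // Pre-allocate `count` buffers of `size` bytes each.
    this.free = Array.from({ length: count }, () => new ArrayBuffer(size));
  }
  acquire() {
    // Reuse a pooled buffer when available; fall back to allocating.
    return this.free.pop() ?? new ArrayBuffer(this.size);
  }
  release(buf) {
    // Return the buffer for reuse by the next frame.
    this.free.push(buf);
  }
}

// Per processed frame: acquire, use, release; no steady-state garbage.
const pool = new BufferPool(640 * 480 * 4, 3); // assumed RGBA frame size
const frame = pool.acquire();
// ... write blurred pixels into `frame` ...
pool.release(frame);
```

With allocation-per-frame, each blurred frame leaves a dead ArrayBuffer for the collector; with a pool, the same few buffers cycle forever.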
Comment 11•4 years ago
> After closing this tab, the memory is -> not <- given back to the system:
Just checking, did you wait 10 seconds? It usually takes that long for garbage collection to kick in (I'm seeing memory drop when I close the tab).
Reporter
Comment 12•4 years ago
More than 10 seconds after closing the tab, the memory does get freed up.
Also, the Memory tab showed an increase in strings from 115 MB to ~190 MB in ~2.5 minutes.
Reporter
Comment 13•4 years ago
Reporter
Comment 14•4 years ago
Reporter
Comment 15•4 years ago
By the way, in the meantime I upgraded from Fedora 32 to Fedora 33; no change (not unexpected).
Comment 16•4 years ago
(In reply to support from comment #9)
> It looks like the memory dump contains not only Firefox information, but sensitive data about OTHER processes and their content!
> I won't send you this kind of information, as I can't check every file for its content. It dumped 190 MB of text files containing Matrix usernames, web-session user IDs, visited URLs, etc.
support, I think there may have been a miscommunication about getting the memory report. If you navigate to about:memory in the Firefox URL bar and then click "Measure", you can see the current memory consumed by Firefox processes (and only Firefox processes). Clicking "Measure and save..." will produce a small (<1 MB) gzip'd JSON file. It will not contain file data, only info on Firefox's memory usage, in this form:
{
"version": 1,
"hasMozMallocUsableSize": true,
"reports": [
{
"process": "Main Process (pid 43678)",
"path": "explicit/cert-storage/storage",
"kind": 1,
"units": 0,
"amount": 6758398,
"description": "Memory used by certificate storage"
},
{
"process": "Main Process (pid 43678)",
"path": "page-faults-hard",
"kind": 2,
"units": 2,
"amount": 5128,
"description": "The number of hard page faults (also known as 'major page faults') that have occurred since the process started. A hard page fault occurs when a process tries to access a page which is not present in physical memory. The operating system must access the disk in order to fulfill a hard page fault. When memory is plentiful, you should see very few hard page faults. But if the process tries to use more memory than your machine has available, you may see many thousands of hard page faults. Because accessing the disk is up to a million times slower than accessing RAM, the program may run very slowly when it is experiencing more than 100 or so hard page faults a second."
},
{
"process": "Main Process (pid 43678)",
"path": "js-main-runtime-realms/system/null-principal",
"kind": 2,
"units": 1,
"amount": 1,
"description": "A live realm in the main JSRuntime."
},
{
"process": "Main Process (pid 43678)",
"path": "js-main-runtime-realms/system/[System Principal], shared JSM global",
"kind": 2,
"units": 1,
"amount": 1,
"description": "A live realm in the main JSRuntime."
<snip>
Reporter
Comment 17•4 years ago
File 1 of a Jitsi Meet session, approx. 1 minute after starting.
Reporter
Comment 18•4 years ago
File 2 of the same session, with ~250 MB of memory growth.
Comment 19•4 years ago
support, thank you for those attachments!
Looking at the diff between those two files (using about:memory's "Load and diff..."):
51.42 MB ── heap-allocated [5]
56.00 MB ── heap-mapped [5]
-8.04 MB ── imagelib-surface-cache-estimated-total [5]
-2 ── imagelib-surface-cache-image-count [5]
-2 ── imagelib-surface-cache-image-surface-count [5]
-2 ── imagelib-surface-cache-tracked-cost-count [5]
-2 ── imagelib-surface-cache-tracked-expiry-count [5]
2,765,453 ── page-faults-soft [5]
83.80 MB ── resident [5]
89.75 MB ── resident-peak [5]
75.23 MB ── resident-unique [5]
32.39 MB ── shmem-allocated [4]
32.39 MB ── shmem-mapped [4]
96.39 MB ── vsize [5]
Under "Explicit Allocations" I can see string use was most of the growth in the heap-allocated number:
52.50 MB (120.18%) -- strings
Quite a lot of shmem growth as well.
jib: any additional thoughts here?
Comment 20•4 years ago
Please note we recently fixed an aggressive memory leak in about:webrtc in bug 1703584. If you had about:webrtc open in any tab, please retest either with the latest Nightly or without opening about:webrtc. Thanks.
Comment 22•4 years ago
support, sorry for the long gap in progress. We've had a lot of changes in WebRTC-related code in the last couple of weeks. Would you have time to re-test this in a recent version of Nightly 96? Additionally, as Jan-Ivar mentioned in comment 20, a fix for a memory leak landed some time back. Would you mind retesting with his suggestions from comment 20? Thank you!
Reporter
Comment 23•4 years ago
I can confirm that:
a) Firefox Nightly 97.0a1 (de) on Linux does not produce a noticeable memory leak.
b) Firefox 94 on Fedora still has that memory leak.
c) Nightly uses considerably more CPU (about 1 core) for the same task than Firefox 94 does (about 0.5 core).
Comment 24•4 years ago
Thank you for confirming that the leak is resolved in Nightly 97. I'll mark this as a duplicate of Bug 1703584.