Closed Bug 1711338 Opened 4 years ago Closed 4 years ago

memory leak during Jitsi Meet conference

Categories

(Core :: WebRTC, defect)

Firefox 86
defect


RESOLVED DUPLICATE of bug 1703584

People

(Reporter: support, Unassigned)

Details

Attachments

(6 files)

User Agent: Mozilla/5.0 (Windows NT 6.1; Win64; rv:86.1) Gecko/20100101 Firefox/86.1

Steps to reproduce:

we had a Jitsi Meet video conference with ~10 participants. The background image blur function was enabled (possibly the cause of the issue).

Actual results:

On Firefox start the process used 500 MB; Firefox then consumed ~1 MB/s, and after 105 minutes the process reached 4 GB. The conference had been running for 3 hours when the system suddenly swapped itself to death.

Fedora Linux: 32
Firefox: 88.0.1-1

Expected results:

no leak.

The Bugbug bot thinks this bug should belong to the 'Core::WebRTC' component, and is moving the bug to that component. Please revert this change in case you think the bot is wrong.

Component: Untriaged → WebRTC
Product: Firefox → Core

Thanks for filing this bug! Do you know which Firefox process was consuming a large amount of memory? The about:processes page can give a summary. Your suspicion may be correct, and if it is the process that is running the webpage JS we need to make Jitsi aware of this.

Flags: needinfo?(support)

It's the blue one; it increased by more than 100 MB in 3 minutes.

Flags: needinfo?(support)

3 minutes later..

After closing this tab, the memory is -> not <- given back to the system:

5103 marius 20 0 3598456 -> 761688 <- 124096 S 4,3 4,7 10:21.84 /usr/lib64/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 282419 -parentBuildID 20210510093133 +

I tried today on Mac to repro this issue, but for me the memory usage stayed around the same value for several minutes of a Jitsi call with blurring enabled on the Firefox side.

Does this require more than two people to be in a call?

It also happens on a solo session.

Still the same setup: Fedora Linux 32, Firefox 88.0.1-1.

Could you attach a saved report from about:memory?

Flags: needinfo?(support)

It looks like the memory dump does not only contain Firefox information, but sensitive data about OTHER processes and their content!

I won't send you this kind of information, as I can't check every file for its content. It dumped 190 MB of text files containing Matrix usernames, user IDs for web sessions, visited URLs, etc.

Flags: needinfo?(support)

That makes sense. This might work to give us some information instead:

  1. Join a call without background blur
  2. Right-click on the page and click "Inspect" to bring up the Developer tools
  3. Go to the "Memory" tab.
  4. Click the 📷 icon (upper left) to take a memory snapshot every 30 seconds or so. Is the total memory reported in each snapshot growing?
  5. Turn on blur background
  6. Click the 📷 icon to take a memory snapshot every 30 seconds or so. Is the total memory reported in each snapshot growing?

Answers to these might help us locate the problem. There's also a "✅ Record call stacks" option, but it wasn't showing a call stack for the ArrayBuffers, so I doubt that'll help.

I don't have Linux to try on, and was unable to repro on Mac. I'm seeing sustained higher memory usage (~270 MB) with the blur function on than without (~75 MB), which from the look of the snapshot is a ton of ArrayBuffer objects; that's not surprising given what it takes to blur.

Jitsi is probably creating tons of them over and over instead of using some pool of them, so there's going to be a high number of them around on average, as garbage collection works to keep the number from growing unbounded (since I'm not seeing your exponential growth symptom).

Flags: needinfo?(support)

After closing this tab, the memory is -> not <- given back to the system:

Just checking, did you wait 10 seconds? It usually takes that long for garbage collection to kick in (I'm seeing memory drop when I close the tab)

More than 10 seconds after closing the tab, the memory does get freed up.

Also, the Memory tab showed an increase in strings from 115 MB to ~190 MB in ~2.5 minutes.

Flags: needinfo?(support)

By the way, in the meantime I upgraded from Fedora 32 to Fedora 33; no change (not unexpected).

(In reply to support from comment #9)

> It looks like the memory dump does not only contain Firefox information, but sensitive data about OTHER processes and their content!
>
> I won't send you this kind of information, as I can't check every file for its content. It dumped 190 MB of text files containing Matrix usernames, user IDs for web sessions, visited URLs, etc.

support, I think there may have been a miscommunication about getting the memory report. If you navigate to about:memory in the Firefox URL bar and then click "Measure", you can see the current memory consumed by Firefox processes (and only Firefox processes). Clicking "Measure and save..." will produce a small (<1 MB) gzip'd JSON file. It will not contain file data, only info on Firefox's memory usage, in this form:

{
 "version": 1,
 "hasMozMallocUsableSize": true,
 "reports": [
  {
   "process": "Main Process (pid 43678)",
   "path": "explicit/cert-storage/storage",
   "kind": 1,
   "units": 0,
   "amount": 6758398,
   "description": "Memory used by certificate storage"
  },
  {
   "process": "Main Process (pid 43678)",
   "path": "page-faults-hard",
   "kind": 2,
   "units": 2,
   "amount": 5128,
   "description": "The number of hard page faults (also known as 'major page faults') that have occurred since the process started.  A hard page fault occurs when a process tries to access a page which is not present in physical memory. The operating system must access the disk in order to fulfill a hard page fault. When memory is plentiful, you should see very few hard page faults. But if the process tries to use more memory than your machine has available, you may see many thousands of hard page faults. Because accessing the disk is up to a million times slower than accessing RAM, the program may run very slowly when it is experiencing more than 100 or so hard page faults a second."
  },
  {
   "process": "Main Process (pid 43678)",
   "path": "js-main-runtime-realms/system/null-principal",
   "kind": 2,
   "units": 1,
   "amount": 1,
   "description": "A live realm in the main JSRuntime."
  },
  {
   "process": "Main Process (pid 43678)",
   "path": "js-main-runtime-realms/system/[System Principal], shared JSM global",
   "kind": 2,
   "units": 1,
   "amount": 1,
   "description": "A live realm in the main JSRuntime."
<snip>
Flags: needinfo?(support)
Attached file 1-memory-report.json.gz

File 1 from a Jitsi Meet session, approx. 1 minute after starting

Flags: needinfo?(support)
Attached file 2-memory-report.json.gz

File 2 of the same session, with ~250MB of memory growth

support, thank you for those attachments!

Looking at the diff between those two files (using about:memory's "Load and diff..."):

 51.42 MB ── heap-allocated [5]
 56.00 MB ── heap-mapped [5]
 -8.04 MB ── imagelib-surface-cache-estimated-total [5]
       -2 ── imagelib-surface-cache-image-count [5]
       -2 ── imagelib-surface-cache-image-surface-count [5]
       -2 ── imagelib-surface-cache-tracked-cost-count [5]
       -2 ── imagelib-surface-cache-tracked-expiry-count [5]
2,765,453 ── page-faults-soft [5]
 83.80 MB ── resident [5]
 89.75 MB ── resident-peak [5]
 75.23 MB ── resident-unique [5]
 32.39 MB ── shmem-allocated [4]
 32.39 MB ── shmem-mapped [4]
 96.39 MB ── vsize [5]

Under "Explicit Allocations" I can see string use was most of the growth in the heap-allocated number:
52.50 MB (120.18%) -- strings

Quite a lot of shmem growth as well.
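The "Load and diff..." step can also be approximated in script form for anyone who wants the per-path deltas programmatically. This is only a sketch of the idea, not about:memory's actual implementation (again assuming `units === 0` marks byte measurements):

```javascript
// Sketch: per-path byte deltas between two parsed about:memory reports,
// the same idea as about:memory's "Load and diff...".
function diffReports(before, after) {
  // Index byte measurements by "process|path".
  const byKey = (rep) => {
    const m = new Map();
    for (const r of rep.reports) {
      if (r.units === 0) m.set(`${r.process}|${r.path}`, r.amount);
    }
    return m;
  };
  const a = byKey(before);
  const delta = {};
  for (const [key, amount] of byKey(after)) {
    const d = amount - (a.get(key) || 0);
    if (d !== 0) delta[key] = d; // e.g. growth under explicit/.../strings
  }
  return delta;
}
```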

jib: any additional thoughts here?

Flags: needinfo?(jib)

Please note we recently fixed an aggressive memory leak in about:webrtc in bug 1703584. If you had about:webrtc open in any tab, please retest either with the latest Nightly or without opening about:webrtc. Thanks.

Clearing NI after 6 months.

Flags: needinfo?(jib)

Support, sorry for the long gap in progress. We've had a lot of changes in WebRTC-related code in the last couple of weeks. Would you have time to re-test this in a recent version of Nightly 96? Additionally, as Jan-Ivar mentioned in Comment 20, a fix for a memory leak landed some time back. Would you mind retesting per his suggestions in Comment 20? Thank you!

Flags: needinfo?(support)

I can confirm that...

a) FFN 97.0a1.de on Linux does not produce a noticeable memory leak.

b) FF 94 on Fedora still has the memory leak.

c) FFN uses considerably more CPU resources (1 core) for the same task than FF 94 does (0.5 core).

Flags: needinfo?(support)

Thank you for confirming that the leak is resolved in Nightly 97. I'll mark this as a duplicate of Bug 1703584.

Status: UNCONFIRMED → RESOLVED
Closed: 4 years ago
Resolution: --- → DUPLICATE