Open Bug 1674295 Opened 4 years ago Updated 5 months ago

high memory usage on Facebook [without ffmpeg installed]

Categories

(Core :: JavaScript Engine, defect, P3)

Firefox 82
defect

Tracking

()

UNCONFIRMED

People

(Reporter: tcflorea, Unassigned, NeedInfo)

References

(Blocks 1 open bug)

Details

Attachments

(12 files)

Attached file m14.json.gz

User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0

Steps to reproduce:

Restart Firefox (and restore the previous session). Browse (Facebook, YouTube, etc.). After a couple of hours a ridiculous amount of memory is in use;
Firefox then becomes sluggish and finally unresponsive.
Restart Firefox, verify that memory usage goes back to something reasonable, and repeat.

Actual results:

Firefox becomes unresponsive after one or two hours of use because of excessive memory consumption.
To recover, I usually kill the process eating up memory (using procexp); usually this kills the Facebook tab.
To carry on using Firefox I have to kill/restart it a few times a day.
After watching a couple of Facebook videos, memory usage exceeds 1 GB.

Expected results:

Firefox should be responsive and should not eat up memory.

Attached file m1.json.gz
Attached file m2.json.gz

Attached files (m1.json.gz and m2.json.gz) are memory usage snapshots taken 2 minutes apart for a new session with only 2 tabs. The 700 MB increase in memory consumption was caused by catching up on Facebook for 2 minutes.

Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0

Hi,

I will move this over to a component so developers can take a look. If this is not the correct component, please feel free to change it to an appropriate one.
@Andrew, when you get a chance, could you please take a look at these memory reports? Thank you!

Thanks for the report.

Component: Untriaged → Performance
Flags: needinfo?(continuation)
Product: Firefox → Core
Summary: high memory usage → high memory usage on Facebook

Looking at the memory report, I'm assuming process 9016 is the one with Facebook. There's about 300MB of JS classes, 60MB of JS strings, 40MB of unused GC things, which I think indicates a lot of JS objects are being created and destroyed. There's also about 100MB of images, and about 30MB of heap overhead. I'll move this to JS, though I'm not sure how actionable it is. I also noticed there are a few other recent bug reports related to high memory on Facebook. I'll put those in the see also field.

Component: Performance → JavaScript Engine
Flags: needinfo?(continuation)

What do you mean by "catch up on facebook for 2 minutes"? What were you doing on Facebook during that two minutes? Was it just sitting there, or were you watching videos, or were you scrolling down?

Flags: needinfo?(tcflorea)
See Also: → 1669376, 1643560

I closed all tabs except Facebook.
I restarted the browser (so Facebook was the only tab).
A few posts were displayed.
I measured the memory.
I hit the space bar; a few more posts were displayed.
I continued the same for about two minutes or so: about 30-40 posts / 20-30 screens had been displayed.
The last "post" was: "Something Went Wrong This may be because of a technical error that we're working to get fixed. Try reloading this page."
I don't remember clicking play on any video.
I measured the memory again.

I noticed that if I start watching videos, memory usage increases further.

Flags: needinfo?(tcflorea)
Attached file m12.json.gz

A new memory report using the same procedure: start Firefox (Nightly) with Facebook as the only tab.
Browse the posts for a couple of minutes.

procexp snapshot to be used along with attachment 9186313: m12.json.gz.
Note that not only the process for the Facebook tab (8924) is eating up memory, but also the GPU process (12808).

See Also: → 1675837

I reproduced this by loading Facebook in a fresh profile and scrolling to the end 20 times (using a timer to try to get repeatable results). I recorded RSS as reported by the ps command. I'm testing on a Mac.

Maximum memory sizes:

Nightly: Main process 587MB, content process 1134MB
Beta: Main process 650MB, content process 1151MB

I also tested Chrome: Main process 153MB, Helper process 1036MB

This confirms that Facebook does use a lot of memory; however, since it also affects Chrome, I'd say it's more likely to be a Facebook problem.
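For reference, the measurement loop described above can be sketched as follows. This is a hedged sketch: the helper name and iteration count are mine, and for portability it samples the current Node process via `process.memoryUsage()` rather than shelling out to `ps -o rss= -p <pid>` as the comment does.

```javascript
// Run a workload step-by-step and record the peak resident set size (RSS).
// Each `step` call stands in for one scroll-to-bottom of the page.
function maxRssDuring(step, iterations = 20) {
  let maxRss = process.memoryUsage().rss;      // RSS in bytes
  for (let i = 0; i < iterations; i++) {
    step(i);                                   // perform one workload step
    maxRss = Math.max(maxRss, process.memoryUsage().rss);
  }
  return maxRss;
}
```

Sampling the maximum (rather than the final value) matters here because a GC between steps can temporarily shrink RSS and hide the peak.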

I saw the "Something Went Wrong This may be because of a technical error that we're working to get fixed" message on both Firefox and Chrome.

I have tried Facebook on Chrome myself.
I can confirm that Facebook is eating up memory on Chrome as well (shame on Facebook!); however, there are certain differences:

  1. (Most important) I was not able to make the browser and/or the entire system become sluggish/unresponsive, as happened when browsing in Firefox/Nightly. Maybe I have not tried hard enough?
  2. As said above, the GPU process eats up memory on par with the Facebook tab. This happens both in Firefox and in Chrome. However, in Chrome the GPU memory is immediately released when the Facebook tab becomes idle (e.g. I don't hit space to load more posts). In Firefox, the GPU process memory does not decrease (*).
  3. When I click reload in Firefox, the memory of the Facebook tab keeps increasing; when I click reload in Chrome, the memory goes back to its initial value and increases again from there.

(*) I have noticed that memory on both the GPU process and the Facebook tab (partially?) decreases from time to time, but this happens only after many minutes (or even hours).

The system tends to become less responsive when switching to a large-memory tab (e.g. Facebook) that I haven't used for a while.
E.g.:
- Browsing Facebook for 5 minutes is fine (even if memory usage goes up).
- Switch to other memory-intensive application(s) (or even a different tab); do some work.
- Switch back to Firefox, to the Facebook tab.
This is usually the moment when Firefox becomes sluggish / unresponsive.

The fastest way to recover is to kill Firefox and restore the last session.
My not-so-educated guess is that this bug has something to do with swapping and memory management rather than the JavaScript engine.

For reference, I'm using:
OS Name: Microsoft Windows 10 Home
OS Version: 10.0.18363 N/A Build 18363
The OS is installed on an SSD with about 22GB free out of 128GB

Sample of 6GB of GPU process memory usage.

mem usage after 1 day

Task Manager → 4 GB of memory usage; that's, well, a little too much ;)

Please provide a memory report from about:memory. Task manager data is not actionable for us.

Attached file m21.json.gz

The memory usage reported in about:memory for the sample of 6GB GPU / 3GB Facebook usage pictured above.
Unfortunately, the reported memory is wrong. The following message is displayed:
WARNING: the 'heap-allocated' memory reporter does not work for this platform and/or configuration. This means that 'heap-unclassified' is not shown and the 'explicit' tree shows less memory than it should.

Attached file memory-report.json2.gz

2.4 GB fb tab usage mem dump

fb tab 3.6GB mem usage

Blocks: 1678656
Severity: -- → S3

Hi Tudor,

Would you mind sharing the dominator view of the memory, as reported by the devtools, to help us isolate which part of the JS code is holding on to memory? However, beware that this memory profile would not be anonymized, as far as I can tell.

https://developer.mozilla.org/en-US/docs/Tools/Memory/Dominators_view

Attached file 41926.fxsnapshot

Base dominator view: just after opening a Facebook page (of a charity, with multiple posts) on a newly created profile.

Hi Nicolas,
I created two dominator view snapshots as follows:

  • The first one, 41926.fxsnapshot, just after opening a Facebook page, without logging into Facebook, on a newly created profile. The file has been attached to this bug.
  • The second one, 250308.fxsnapshot, after browsing the posts on the page until memory usage reached 1G. Unfortunately the file is too big to be attached here, so I put it on Dropbox. The file (42M) will be available for a while here: https://www.dropbox.com/sh/4cmio3cntu7wz4i/AAABwXmHoIPAHODmWbM21GpIa/250308.fxsnapshot?dl=0

Please let me know if you got the file and if you need anything else.

Thanks a lot!

One of the problems seems to be that CacheIR and Warp CacheIR are holding on to a 44 MB Call object which captures a single WeakMap.

Iain, can you investigate how CacheIR and Warp CacheIR could keep a Call object alive? (See comments 22 and 23.)

Flags: needinfo?(iireland)

(In reply to Nicolas B. Pierron [:nbp] from comment #23)

One of the problems seems to be that CacheIR and Warp CacheIR are holding on to a 44 MB Call object which captures a single WeakMap.

It would be worth checking whether the heap snapshot code is using the fully correct logic for weak maps.

The CallObject is the environment for a number of function objects. It looks like the CacheIR is keeping at least one of those functions alive directly (0x970c3026400 on the left side of the diagram) and a few more indirectly (set, remove, and purge on the right side of the diagram). The most obvious way this can happen is if we have a call IC with a GuardSpecificFunction to check the callee.

It's not clear to me whether there are other retaining paths that don't go through CacheIR: that is to say, whether we have an IC for an otherwise dead function. I think these ICs should be cleared out during a compacting GC, so I wouldn't expect dead ICs to stick around indefinitely.

(Unfortunately, it looks like the retaining paths diagram only shows a subset of the retaining paths. For example, if I look at the diagram for the weakmap itself, it shows a single path going through the call object - but we see multiple retaining paths for the same call object in its own diagram. Somebody with more devtools experience might be able to give a better answer.)

In theory, CacheIR could maybe use weak pointers for its guards, which would allow us to free up the memory prior to a compacting GC. I'm not convinced that would help here, though: if the function is still reachable in other ways, then weak pointers in CacheIR wouldn't do anything except slow down sweeping.

If somebody from Facebook is looking into this: it looks like some functions stored in DataStore.exports.(set|remove|purge) are closing over an environment containing a large weakmap. If you can fix that, memory usage will hopefully go down.
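The retention pattern described here can be sketched in a few lines. All names below are hypothetical, modeled on the `DataStore.exports.(set|remove|purge)` shape mentioned above; this is not Facebook's actual code:

```javascript
// One large WeakMap lives in the enclosing scope; every exported function
// closes over it, so the shared environment (the CallObject, in SpiderMonkey
// terms) stays alive as long as ANY one of the functions is reachable.
function makeDataStore() {
  const entries = new WeakMap();               // can grow very large
  return {
    set(key, value) { entries.set(key, value); },
    get(key) { return entries.get(key); },
    remove(key) { return entries.delete(key); },
    purge() { /* note: `entries` itself stays captured here too */ },
  };
}

const store = makeDataStore();
// Keeping just one of these functions reachable -- say, `purge` registered
// somewhere as a callback -- keeps the whole shared environment, including
// the WeakMap, alive.
const stillReachable = store.purge;
```

Breaking the pattern would mean either scoping the map to each function that truly needs it, or clearing the map explicitly when the store is torn down.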

Flags: needinfo?(iireland)

(In reply to Iain Ireland [:iain] from comment #26)

It's not clear to me whether there are other retaining paths that don't go through CacheIR: that is to say, whether we have an IC for an otherwise dead function. I think these ICs should be cleared out during a compacting GC, so I wouldn't expect dead ICs to stick around indefinitely.

Do we have an about:config flag which could help the reporter test this hypothesis? Such as disabling Warp, and/or disabling Cache IR?

(In reply to Iain Ireland [:iain] from comment #26)

If somebody from Facebook is looking into this: it looks like some functions stored in DataStore.exports.(set|remove|purge) are closing over an environment containing a large weakmap. If you can fix that, memory usage will hopefully go down.

Maybe Benoit can forward this within Facebook?

Flags: needinfo?(b56girard)

Walking backward through what can cause a warp-cacheir-object to appear under the rooted object suggests this would be caused by a pending/ended compilation:

jit/WarpSnapshot.cpp#344
jit/WarpSnapshot.cpp#259
loop {
    jit/WarpSnapshot.cpp#386
    jit/WarpSnapshot.cpp#259
}
jit/WarpSnapshot.cpp#243
jit/WarpSnapshot.cpp#219
jit/IonCompileTask.cpp#67
vm/HelperThreads.cpp#2570,2574,2579,2587
gc/RootMarking.cpp#397

Note that the loop above handles inlined calls, which are not reflected in the retained paths.

One thing we could do is add a profiler probe to record the lengths of the various lists of helper tasks. If this number keeps increasing, it would imply that we are adding compilation tasks faster than we can compile them.

Or comment 23 may only report a transient state.

Do we have an about:config flag which could help the reporter test this hypothesis? Such as disabling Warp, and/or disabling Cache IR?

Disabling CacheIR would mean executing all JS in the C++ interpreter. Given the sheer volume of JS on Facebook, that would likely make things excruciatingly slow, and it would be hard to verify that we've reached the same point in execution.

I think CacheIR edges are a red herring here. Looking at the source code for the retained paths view, we are displaying "the N shortest retaining paths for this node as a graph", not every path. Given that we have a) a pending compilation for a function with an edge to this CallObject, and b) multiple other functions with independent edges to this CallObject, it seems pretty clear that the CallObject is legitimately still reachable (as opposed to being kept alive artificially via CacheIR). It just so happens that the retaining paths through CacheIR are the shortest.

To me, the interesting information in this bug is here (from comment 12):

2. As said above, the GPU process eats up memory on par with the Facebook tab. This happens both in Firefox and in Chrome. However, in Chrome the GPU memory is immediately released when the Facebook tab becomes idle (e.g. I don't hit space to load more posts). In Firefox, the GPU process memory does not decrease (*).
3. When I click reload in Firefox, the memory of the Facebook tab keeps increasing; when I click reload in Chrome, the memory goes back to its initial value and increases again from there.

We are apparently freeing GPU memory less aggressively, and not freeing all memory when reloading the page. I think we should be looking into those issues.

I have a few more files available that might be useful. The files are archived in 201215.zip, available here:
https://www.dropbox.com/sh/4cmio3cntu7wz4i/AAC7AxsU6BHUs_D-14GdDkjFa?dl=0

I opened a Facebook page in a new profile on Nightly.
I saved the dominators to 62562.fxsnapshot.
This corresponds to the first arrow in memory_usage201215.png (from procexp).

I loaded a few posts on the Facebook page until usage reached about 500M.
I saved the dominators to 173699.fxsnapshot.
This corresponds to the second arrow in memory_usage201215.png.

I clicked reload page.
I saved the dominators to 301721.fxsnapshot.
This corresponds to the third arrow in memory_usage201215.png.

Note that at the end procexp reports about 300M of usage, in line with the about:memory report (m201215_1.json.gz), while devtools reports only 40M (per facebook_screen_snapshot.png / 301721.fxsnapshot).
Also note that memory usage does not decrease (as I would expect) when reloading the page.

Thanks for the nudge. I'm actually chasing down memory usage on Facebook myself, with the help of others. We have a tool we're hoping to open source for the web community, actually! We have very detailed and actionable results. The latest blocker is that we're trying to update to a new version of the React reconciler, which will fix a reference leading to memory leaks.

If somebody from Facebook is looking into this: it looks like some functions stored in DataStore.exports.(set|remove|purge) are closing over an environment containing a large weakmap. If you can fix that, memory usage will hopefully go down.

This reference is a leaked DOM event listener that keeps this reference alive. It's a symptom of an event listener that's not cleaned up, unless it's worse in Firefox for some reason, e.g. listeners not being cleaned up in instances where we do not leak them.

Basically, on our side we have two issues: 1) known memory leaks we're trying to fix; 2) a larger issue with our memory graph being 'strongly' connected, which causes any simple memory leak to become a large one (and I'm worried about its impact on GC algorithms, but I have no data there). I'm working on these issues as fast as I can. If you are testing on your side, I would encourage you to test navigating between two pages without the video player and without a Facebook comment list.

There's probably nothing for Mozilla to do here on your side unless we have evidence showing that Firefox is worse than Chrome for some reason, or the leaks happen while idle on Facebook, in which case let me know.
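The listener-leak symptom described above can be sketched like this. This is a hedged illustration with invented names; `EventTarget` is used so it also runs under Node (>= 15), whereas in the page the target would be a DOM node:

```javascript
// A listener that closes over a large structure is added but never removed,
// so the event target retains the structure for as long as the target lives.
const bigState = { cache: new Map() };          // stands in for the DataStore

const target = new EventTarget();
const onUpdate = () => bigState.cache.clear();  // closes over bigState

target.addEventListener("update", onUpdate);
// The leak: if the component owning `onUpdate` goes away WITHOUT this call,
// `target` keeps `onUpdate` -- and through it `bigState` -- alive.
target.removeEventListener("update", onUpdate);
```

In a strongly connected object graph, as described above, retaining one such listener is enough to retain the whole graph behind it.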

3. When I click reload in Firefox, the memory of the Facebook tab keeps increasing; when I click reload in Chrome, the memory goes back to its initial value and increases again from there.

Perhaps this is a Firefox issue; it is certainly interesting if it can be replicated.

Flags: needinfo?(b56girard)

Note that at the end procexp reports about 300M of usage, in line with the about:memory report (m201215_1.json.gz), while devtools reports only 40M (per facebook_screen_snapshot.png / 301721.fxsnapshot).
Also note that memory usage does not decrease (as I would expect) when reloading the page.

Interesting. Reloading reduces the retained GC memory by ~130M, but we do not immediately release that memory. Should we? Who owns that memory when navigating?

Tudor: just to make sure I understand correctly: the about:memory dump was taken after reloading? It certainly looks like we're holding more JS memory than is displayed in the dev-tools memory view, although I don't have enough experience with our memory reports to identify any specific problems.

Just to illustrate the issue, I made a video of what happens in a clean VMware image
(no additional data points, just a visual illustration of the issue at hand):

  • VMware image with 2 cores and 2 GB RAM
  • minimal Ubuntu Server 20.10 with all updates as of 210113
  • lightdm, i3-wm and the latest Firefox packages installed
  • video recording was started on the host system

There is no issue until I scroll a little in the Facebook window, after which memory usage increases until the system becomes almost unresponsive; killing Firefox releases the memory and the system is usable again.
I would assume killing the Facebook tab process would have had the same result, had it been possible.
I tried to switch to the Facebook tab at around the 10-minute mark.
I can't say if the issue is in Firefox or Facebook.

video: https://drive.google.com/file/d/1SjQmpCfYKiroMeZJ8QUxmMgsTCd-sna1/view?usp=sharing

Apparently Google Drive doesn't like mkv so here's an avi version
https://drive.google.com/file/d/1BTZ8bXBLeaeY00axHODtPyxUuDWddd0B/view?usp=sharing

(In reply to Jari Matilainen from comment #33)

Just to illustrate the issue I made a video of what happens in a clean vmware image

Thanks a lot for the video. I tried to reproduce the issue based on it; so far I have been unable to.
I presume part of the issue might come from the content that Facebook lazily loads once you scroll.

Maybe you can find the difference in visual elements by using the "…" symbol in the URL bar of Firefox, where there is a save-screenshot option.
Try saving a full-page screenshot before scrolling, then a second one tens of seconds after scrolling down and back up, and maybe a third after watching the leak go on for a minute.

I would expect the second screenshot (after scrolling) to have more visual elements.
The third screenshot would either contain the same number of elements, in which case some loaded content is likely using JavaScript to leak even more, or it would have even more content loaded. If it is the latter, and content keeps being loaded into the page, then this is almost certainly a Facebook issue.

I have one more data point which seems somewhat far-fetched: I was having problems on my host system, which is why I tried the VM route to see if I could reproduce them. I installed ffmpeg so I could x11grab the screen for the video, and after that my host system behaves correctly; I can confirm the same behaviour in the VM.
These were the packages pulled in in the VM:
ffmpeg i965-va-driver intel-media-va-driver libaacs0 libaom0 libass9 libavcodec58 libavdevice58 libavfilter7 libavformat58
libavresample4 libavutil56 libbdplus0 libblas3 libbluray2 libbs2b0 libchromaprint1 libcodec2-0.9 libdav1d4 libdc1394-25
libfftw3-double3 libflite1 libgfortran5 libgme0 libgsm1 libigdgmm11 liblapack3 liblilv-0-0 libmfx1 libmysofa1 libnorm1
libopenal-data libopenal1 libopenmpt0 libpgm-5.2-0 libpocketsphinx3 libpostproc55 libquadmath0 librabbitmq4 librubberband2
libsdl2-2.0-0 libserd-0-0 libshine3 libsnappy1v5 libsndio7.0 libsord-0-0 libsphinxbase3 libsratom-0-0 libsrt1-gnutls
libssh-gcrypt-4 libswresample3 libswscale5 libva-drm2 libva-x11-2 libva2 libvdpau1 libvidstab1.1 libwebpmux3 libx264-160
libx265-192 libxvidcore4 libzmq5 libzvbi-common libzvbi0 mesa-va-drivers mesa-vdpau-drivers ocl-icd-libopencl1
pocketsphinx-en-us va-driver-all vdpau-driver-all

Hope this helps

Priority: -- → P2

Thanks a lot. With the VM setup you described in comment 33, I am able to reproduce the steady memory runaway.
However, I will note that, unlike in the video you recorded, I have to scroll much further before noticing the issue.

To reiterate: memory usage is nominal on my host system once I installed ffmpeg, where some or all of the above packages were pulled in alongside it (I tested the VM only briefly after installing ffmpeg there, but everything points to the problem being gone with the addition of one or more of those packages). Energy impact goes up to 50-60 for a short while and then drops to 0.1-0.2, while without the packages pulled in by ffmpeg, energy impact stays around 100 all the time and memory keeps rising until I'm forced to close the Facebook tab.
No idea if this is Ubuntu missing some dependency, or Firefox having an unspecified, overlooked dependency, which could explain this.

Depends on: 1686930

I do see very high CPU usage; however, the memory increase is really slow and hard to capture in a profile.
I opened Bug 1686930 so that the Performance team can investigate the CPU usage.

Ok, I've got a profile which shows a memory increase. Apart from Bug 1686930, in this new profile (Marker Table) we can see that Facebook attempts to constantly load and reload the same 2 URLs, interleaved.

https://share.firefox.dev/3nOecv1

The memory seems to jump up a bit for each Load (Marker Table of the profiler) of:
https://video-cdg2-1.xx.fbcdn.net/v/t39.25447-2/135252009_405436307363856_4251171462897269923_n.mp4?_nc_cat=1&ccb=2&_nc_sid=5aebc0&efg=eyJ2ZW5jb2RlX3RhZyI6ImRhc2hfdjRfNXNlY2dvcF9ocTVfZnJhZ18yX3ZpZGVvIn0%3D&_nc_ohc=OvEo_jN_5b0AX8qsNu5&_nc_ht=video-cdg2-1.xx&oh=611ad9257cf4159bbf4900b3171c959c&oe=6026E10F&bytestart=2494520&byteend=3673232

Paul, would you know if there is something special about this video which might cause a small leak each time?
Benoit, would this profile be useful for isolating what is going on in Facebook code and tracking this loop of video content requests?

Flags: needinfo?(padenot)
Flags: needinfo?(b56girard)

(In reply to Nicolas B. Pierron [:nbp] from comment #40)

Paul, would you know if there is something special about this video which might cause a small leak each time?

I don't know offhand, but that sounds plausible. Snooping malloc with eBPF on media threads without ffmpeg installed should be enough to find the smoking gun.

Flags: needinfo?(padenot)

I finally managed to get a stack trace. Here is the result from running with DMD=1 on Nightly, after letting it leak slowly for about an hour:

Once-reported {
  8,700 blocks in heap block record 1 of 4,773
  276,893,696 bytes (246,500,842 requested / 30,392,854 slop)
  Individual block sizes: 266,240; 253,952; 249,856; 245,760 x 507; 225,280; 212,992; 155,648; 131,072; 122,880; 106,496; 102,400 x 3; 98,304 x 3; 86,016; 81,920 x 5; 73,728; 69,632 x 2; 65,536 x 5; 61,440 x 3; 57,344 x 3; 53,248 x 4; 49,152 x 23; 45,056; 40,960 x 2; 36,864 x 12; 32,768 x 57; 28,672 x 5; 24,576 x 3,693; 20,480 x 18; 16,384 x 96; 12,288 x 4,223; 8,192 x 2; 4,096 x 4; 2,048 x 6; 1,024 x 12
  53.46% of the heap (53.46% cumulative)
  59.27% of once-reported (59.27% cumulative)
  Allocated at {
    #01: replace_calloc(unsigned long, unsigned long) [memory/replace/dmd/DMD.cpp:1101]
    #02: moz_arena_calloc [memory/build/malloc_decls.h:134]
    #03: js::ArrayBufferObject::createZeroed(JSContext*, js::BufferSize, JS::Handle<JSObject*>) [js/src/vm/ArrayBufferObject.cpp:1285]
    #04: JS_NewUint8Array(JSContext*, unsigned int) [js/src/vm/TypedArrayObject.cpp:2711]
    #05: JS::ReadableStreamUpdateDataAvailableFromSource(JSContext*, JS::Handle<JSObject*>, unsigned int) [js/src/builtin/streams/StreamAPI.cpp:351]
    #06: mozilla::dom::BodyStream::OnInputStreamReady(nsIAsyncInputStream*) [dom/base/BodyStream.cpp:419]
    #07: nsInputStreamReadyEvent::Run() [xpcom/io/nsStreamUtils.cpp:96]
    #08: mozilla::SchedulerGroup::Runnable::Run() [xpcom/threads/SchedulerGroup.cpp:146]
  }
  Reported at {
    #01: JSMallocSizeOf(void const*) [js/xpconnect/src/XPCJSRuntime.cpp:1357]
    #02: js::ArrayBufferObject::addSizeOfExcludingThis(JSObject*, unsigned long (*)(void const*), JS::ClassInfo*) [js/src/vm/ArrayBufferObject.cpp:1452]
    #03: void StatsCellCallback<(Granularity)0>(JSRuntime*, void*, JS::GCCellPtr, unsigned long, JS::AutoRequireNoGC const&) [js/src/vm/MemoryMetrics.cpp:343]
    #04: IterateRealmsArenasCellsUnbarriered(JSContext*, JS::Zone*, void*, void (*)(JSContext*, void*, JS::Realm*, JS::AutoRequireNoGC const&), void (*)(JSRuntime*, void*, js::gc::Arena*, JS::TraceKind, unsigned long, JS::AutoRequireNoGC const&), void (*)(JSRuntime*, void*, JS::GCCellPtr, unsigned long, JS::AutoRequireNoGC const&), JS::AutoRequireNoGC const&) [js/src/gc/PublicIterators.cpp:41]
    #05: js::IterateHeapUnbarriered(JSContext*, void*, void (*)(JSRuntime*, void*, JS::Zone*, JS::AutoRequireNoGC const&), void (*)(JSContext*, void*, JS::Realm*, JS::AutoRequireNoGC const&), void (*)(JSRuntime*, void*, js::gc::Arena*, JS::TraceKind, unsigned long, JS::AutoRequireNoGC const&), void (*)(JSRuntime*, void*, JS::GCCellPtr, unsigned long, JS::AutoRequireNoGC const&)) [js/src/gc/PublicIterators.cpp:58]
    #06: CollectRuntimeStatsHelper(JSContext*, JS::RuntimeStats*, JS::ObjectPrivateVisitor*, bool, void (*)(JSRuntime*, void*, JS::GCCellPtr, unsigned long, JS::AutoRequireNoGC const&)) [js/src/vm/MemoryMetrics.cpp:650]
    #07: xpc::JSReporter::CollectReports(nsDataHashtable<nsUint64HashKey, nsTString<char> >*, nsDataHashtable<nsUint64HashKey, nsTString<char> >*, nsIHandleReportCallback*, nsISupports*, bool) [js/xpconnect/src/XPCJSRuntime.cpp:2240]
    #08: nsWindowMemoryReporter::CollectReports(nsIHandleReportCallback*, nsISupports*, bool) [dom/base/nsWindowMemoryReporter.cpp:565]
  }
}
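The allocation stack shows the bytes arriving as Uint8Array chunks enqueued into a fetch-body ReadableStream. A hedged sketch of how such chunks accumulate when the consumer retains each one (the stream and helper name here are synthetic; in the bug the source would be a fetch() response body):

```javascript
// Drain a ReadableStream of Uint8Array chunks. With `retain: true`, every
// chunk stays reachable after it is read -- modeling the growth pattern --
// instead of being dropped and left for the GC.
async function drain(stream, { retain = false } = {}) {
  const reader = stream.getReader();
  const kept = [];                      // retained chunks model the growth
  let totalBytes = 0;
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    totalBytes += value.byteLength;
    if (retain) kept.push(value);       // each chunk remains reachable
  }
  return { totalBytes, keptChunks: kept.length };
}
```

In the DMD record, one such allocation site accounts for ~277 MB across 8,700 blocks, which is consistent with many retained (or repeatedly re-downloaded) stream chunks rather than one large buffer.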

In addition, the latest profile ( https://share.firefox.dev/3qK5vDG ), collected from a derived bug, highlights that a few URLs are queried in a loop. I suppose the problem could be that we are failing to increment a counter while streaming content, causing Firefox to repeatedly load the same URL.

Baku, would you know what could cause such a loop querying the same URL repeatedly?

Or are these not the same URL, with some data hidden from the profiler report? Would I have a way to dump a log showing whether we are making progress on the streamed data?
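The "failing to increment a counter" hypothesis can be sketched as follows. This is purely hypothetical code with invented names (nothing here reflects actual Gecko or Facebook code); `fetchFn` is injectable only so the sketch is self-contained:

```javascript
// Stream a resource in successive byte ranges. The hypothesized bug is the
// line that advances `offset`: omit it, and the client re-requests the same
// Range forever -- matching the repeated loads of one URL in the profile.
async function streamRanges(url, chunkSize, fetchFn = fetch) {
  let offset = 0;
  for (;;) {
    const res = await fetchFn(url, {
      headers: { Range: `bytes=${offset}-${offset + chunkSize - 1}` },
    });
    const buf = await res.arrayBuffer();
    if (buf.byteLength === 0) break;    // past the end of the resource: done
    offset += buf.byteLength;           // hypothesized bug: skipping this
                                        // repeats the same range endlessly
  }
  return offset;                        // total bytes streamed
}
```

Note the loop's termination depends entirely on the offset advancing, which is why a missed increment shows up as an unbounded request loop rather than an error.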

Flags: needinfo?(amarchesini)

(In reply to Nicolas B. Pierron [:nbp] from comment #42)

I suppose the problem could be that we are failing to increment a counter while streaming content, causing Firefox to repeatedly load the same URL.

Bug 1686930 comment 29 has more details on which URLs are loaded how many times.

Jason, would you have any idea what could cause this problem with the ReadableStream content?

Flags: needinfo?(jorendorff)

The problem is still present in the latest version, v85.
Also, it seems the problem can occur on other sites:
I have the same issue with YouTube: more than 2GB of memory usage for 1 tab
(in this case for https://www.youtube.com/watch?v=K3JGxj2rvAs).

(In reply to flo from comment #45)

The problem is still present in the latest version, v85.
Also, it seems the problem can occur on other sites:
I have the same issue with YouTube: more than 2GB of memory usage for 1 tab
(in this case for https://www.youtube.com/watch?v=K3JGxj2rvAs).

Do you have ffmpeg installed? If not, does installing it help?

Severity: S3 → --
Flags: needinfo?(jorendorff)
Priority: P2 → --
See Also: → 1699830
Summary: high memory usage on Facebook → high memory usage on Facebook [without ffmpeg installed]

I see this in SeaMonkey too. Memory use climbs up forever, slowly but surely (to 12 GiB and up). After some time it uses 100% of one of my cores and skips about half of my keypresses unless I slow down to no more than 1 key every 2 or 3 seconds.
The only workaround found: restart SeaMonkey.

Session Manager extension is installed.

ffmpeg-3 was installed, I've just installed ffmpeg-4 instead, I'll see if it makes any difference.

User agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0 SeaMonkey/2.53.10
Build identifier: 20210807210004

Severity: -- → S4
Priority: -- → P3

Clear a needinfo that is pending on an inactive user.

Inactive users most likely will not respond; if the missing information is essential and cannot be collected another way, the bug may need to be closed as INCOMPLETE.

For more information, please visit BugBot documentation.

Flags: needinfo?(b56girard)

I'm still monitoring this bug and I confirm the issue still occurs on Nightly builds (and on Firefox v89, which I'm using). I'm willing to help up to my own technical abilities.
(Note that the needinfo flag was not addressed to me.)

(In reply to Tony Mechelynck [:tonymec] from comment #47)

[…]

ffmpeg-3 was installed, I've just installed ffmpeg-4 instead, I'll see if it makes any difference.

Did installing ffmpeg-4 instead make any difference?

Flags: needinfo?(antoine.mechelynck)

(In reply to Worcester12345 from comment #50)

(In reply to Tony Mechelynck [:tonymec] from comment #47)

[…]

Did installing ffmpeg-4 instead make any difference?

I'm updating SeaMonkey every time I see a new version (from twice a day to, let's say, once a week, depending on how often they are built); the current one is 2.53.19b1pre:
User agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.19
Build identifier: 20231102110004
downloaded from http://www.wg9s.com/comm-253/

ffmpeg-4 is still installed; my desktop computer (which I got in January 2021) has 12 CPUs (or six dual-core chips), 800 MHz, 31 GiB of live memory (not counting the kernel, I think), and 11 GiB of swap that is almost never used.

I have transferred my Facebook tabs (about four of them) to Firefox Nightly (currently 2023-11-02 22:28), where the problem, if present, does not happen fast enough to make itself felt between one half-daily build and the next.

In SeaMonkey, where I browse Wikipedia a lot, I often see memory use climb to two or three times what it is at startup; I notice it by checking the System Monitor "Resources" tab once SeaMonkey becomes noticeably sluggish. It goes to the point where a popup appears on top of the SeaMonkey window telling me: "SeaMonkey is unresponsive. [ Kill ] [ Wait ]" (If I click "Wait" a few times, I end up being able to restart SeaMonkey "properly" without having to force-kill it. The result is a reduction of my total RAM use to half or less of its previous value.)

Flags: needinfo?(antoine.mechelynck)

(In reply to Tony Mechelynck [:tonymec] from comment #51)

(In reply to Worcester12345 from comment #50)

[…]

Did installing ffmpeg-4 instead make any difference?

I'm updating SeaMonkey every time I see a new version (from twice a day to, let's say, once a week, depending how often they are built); the current one is 2.53.19b1pre:
User agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0 SeaMonkey/2.53.19
Build identifier: 20231102110004
downloaded from http://www.wg9s.com/comm-253/

Have you considered downloading it from https://www.seamonkey-project.org/ , which provides download links to https://archive.mozilla.org/ ?

[…]

In SeaMonkey, where I browse Wikipedia a lot, I often see memory use climb to two or three times what it is at startup; I notice it by checking the System Monitor "Resources" tab once SeaMonkey becomes noticeably sluggish. It goes to the point where a popup appears on top of the SeaMonkey window telling me: "SeaMonkey is unresponsive. [ Kill ] [ Wait ]" (If I click "Wait" a few times, I end up being able to restart SeaMonkey "properly" without having to force-kill it. The result is a reduction of my total RAM use to half or less of its previous value.)

This sounds like a potentially different issue, as the unresponsive message is most likely caused by a script spinning for a long time.
Do you have any add-on which might be causing this?

You might want to open a separate bug, to get a fresh look at your problem.
