Closed Bug 1430224 Opened 8 years ago Closed 7 years ago

Memory leak in web site

Categories

(Core :: DOM: Workers, defect, P2)

58 Branch
x86_64
All
defect

Tracking


RESOLVED WORKSFORME
Tracking Status
firefox-esr52 --- unaffected
firefox-esr60 --- wontfix
firefox57 --- wontfix
firefox58 --- wontfix
firefox59 --- wontfix
firefox60 --- wontfix
firefox61 --- ?
firefox62 --- ?

People

(Reporter: estama, Assigned: baku)

References

Details

(Keywords: regression, Whiteboard: [MemShrink:P2])

Attachments

(1 file)

User Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0
Build ID: 20180108140638

Steps to reproduce:

Opened the http://www.keeptalkinggreece.com site.

Actual results:

Memory continuously increases (filling 16 GB of memory) and performance slows down.

Expected results:

Memory should remain stable. Tested with Chrome and it used a stable amount of memory.
I have managed to reproduce this issue using the latest Firefox 57.0.4 release and the latest Nightly 59.0a1 build on Windows 7 x64, Mac 10.12 and Ubuntu 14.04 x64. I opened the provided page and in ~15 minutes the memory usage had increased to 5 GB. I have observed that if I navigate to another link from the website, the memory is released. Here is the memory report: https://goo.gl/wWg9m2

The issue is not reproducible on Nightly 54. Considering this, I used mozregression to find the regression range. Here are the results:

Last good revision: 564e1f5f214523adc78b1f5ee5a94428c8696343
First bad revision: 391047948db40495f68c43421bb7263d10680b97
Pushlog: https://goo.gl/VCF7p7

It looks like bug 1342060 introduced this issue. Indeed, if I set the "javascript.options.wasm" boolean pref to false, the issue is no longer reproducible on the latest Nightly build; the memory usage remains at ~300 MB.

@Luke, can you please take a look at this?
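For anyone retesting, the quickest check is flipping the same pref mentioned above, either in about:config or via a user.js entry. A minimal sketch (assuming a throwaway test profile; the pref name is the one quoted above):

    // user.js in the test profile directory (hypothetical profile)
    // Disables WebAssembly, mirroring the manual about:config toggle
    // that made the leak stop reproducing.
    user_pref("javascript.options.wasm", false);

Restarting the browser with this in place should reproduce the "pref set to false" state used to confirm the regression window.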
Blocks: 1342060
Status: UNCONFIRMED → NEW
Component: Untriaged → JavaScript Engine
Ever confirmed: true
Flags: needinfo?(luke)
Keywords: regression
OS: Unspecified → All
Product: Firefox → Core
Hardware: Unspecified → x86_64
Whiteboard: [MemShrink]
So the site appears to create N workers, each with a wasm memory+module. (Maybe they are bitcoin miners?) Looking at the profiler, these workers are constantly active, with lots of postMessage events.

Reloading about:memory, I can see heap-unclassified growing at a pretty constant rate, ~3.3 MB/s. Clicking "Minimize Memory Usage" drops heap-unclassified back down to its initial value, but it then continues to rise.

Although it's possible, I think wasm is not likely to be the root of the problem here; probably the workers are only enabled if wasm is enabled. The fact that heap-unclassified is cleared by "Minimize Memory Usage" suggests that this is some cache that gets cleared by purging GCs or low-memory events.

njn: is DMD still the best way to triage the source of large heap-unclassified?
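To make the suspected pattern concrete, here is a minimal, entirely hypothetical sketch (not the site's actual code; the file names and the "work" export are made up) of a page spawning workers that instantiate a wasm module and flood the main thread with postMessage traffic. If the receiving side cannot keep up, the queued structured-clone copies could show up as steadily growing heap-unclassified, much like the behaviour described above:

    // main.js (hypothetical illustration of the observed pattern)
    const workers = [];
    for (let i = 0; i < navigator.hardwareConcurrency; i++) {
      const w = new Worker("miner-worker.js");
      // Each message the worker posts is structured-cloned into the main
      // thread's event queue; if messages arrive faster than they are
      // processed, those copies accumulate.
      w.onmessage = (e) => { /* slow or no-op handler */ };
      workers.push(w);
    }

    // miner-worker.js (hypothetical)
    // Instantiate some wasm module, then post results in a tight loop.
    WebAssembly.instantiateStreaming(fetch("hash.wasm")).then(({ instance }) => {
      setInterval(() => {
        const result = instance.exports.work();   // assumed export name
        postMessage({ result, payload: new ArrayBuffer(64 * 1024) });
      }, 0);
    });

Again, this is only an illustration of how constant postMessage traffic from wasm-backed workers could back up a message queue; the actual cause on the site was never confirmed.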
Flags: needinfo?(luke) → needinfo?(n.nethercote)
Yes, DMD is still your best bet.
Flags: needinfo?(n.nethercote)
We think this is probably related to worker message queues backing up. baku, can you take a look?
Component: JavaScript Engine → DOM: Workers
Flags: needinfo?(amarchesini)
Whiteboard: [MemShrink] → [MemShrink:P2]
Assignee: nobody → amarchesini
Priority: -- → P2
I'm trying to reproduce this issue, but with my debug build I'm not able to. I'll try again tomorrow. /me keeping the NI.
Is this still reproducible for you, Cosmin?
Attached file memory-report.json.gz
I have retested this issue on the latest Firefox (60.0.2) and the latest Nightly (62.0a1) build, but the issue is no longer reproducible. I navigated to the http://www.keeptalkinggreece.com website, but the memory usage oscillates between 600-900 MB even if I wait more than 10 minutes. I have also retested using an older Nightly 59.0a1 build (from 2018-01-07), but I haven't managed to reproduce it, so it was probably fixed on the website side somehow. However, I have attached the memory report for the current behavior.

@estama, can you please also retest this issue and see if it's reproducible on your end?
Flags: needinfo?(cosmin.muntean) → needinfo?(estama)
I've also retested and it is *not* happening any more.
Flags: needinfo?(estama)
I forgot to add that I'm testing with the 61.0 build.
Okay, let's go ahead and close this for now.
Status: NEW → RESOLVED
Closed: 7 years ago
Flags: needinfo?(amarchesini)
Resolution: --- → WORKSFORME