STR:
1) Load a big book, e.g. http://stacks.math.columbia.edu/download/book.pdf
2) Start scrolling, giving some minimal time to let individual pages load.
3) Watch memory usage.
4) See browser go boom as Firefox exceeds 2 GB RAM usage.

Looks like decoded/rendered pages are cached but never freed.
Maybe Firefox should discard PDF pages that are not displayed, as it does for images (if it doesn't already do so).
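For illustration, the suggested behavior amounts to an LRU cache of rendered pages: keep the N most recently viewed pages and evict the rest, re-rendering on demand. This is a minimal sketch, not pdf.js code; the class and method names are hypothetical.

```javascript
// Hypothetical sketch of the suggested eviction policy: cap the number of
// cached rendered pages and drop the least recently used one.
// RenderedPageCache is an illustrative name, not a pdf.js API.
class RenderedPageCache {
  constructor(maxPages) {
    this.maxPages = maxPages;
    this.pages = new Map(); // pageNumber -> rendered page data
  }

  get(pageNumber) {
    if (!this.pages.has(pageNumber)) return undefined;
    const page = this.pages.get(pageNumber);
    // Re-insert to mark this page as most recently used.
    this.pages.delete(pageNumber);
    this.pages.set(pageNumber, page);
    return page;
  }

  put(pageNumber, page) {
    this.pages.delete(pageNumber);
    this.pages.set(pageNumber, page);
    if (this.pages.size > this.maxPages) {
      // Map iterates in insertion order, so the first key is least recent.
      const oldest = this.pages.keys().next().value;
      this.pages.delete(oldest); // evicted page can be re-rendered later
    }
  }
}
```

An evicted page costs a re-render when scrolled back into view, but memory stays bounded by `maxPages` instead of growing with every page ever visited.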
Priority: -- → P4
Whiteboard: [MemShrink] → [MemShrink][pdfjs-c-performance]
An about:memory measurement would be very helpful. If you're on Nightly, please "Measure and save" and attach the resulting .json.gz file. Thanks!
Whiteboard: [MemShrink][pdfjs-c-performance] → [pdfjs-c-performance]
This is WFM using Firefox 2 and newer. Getting ~700-750 MB RAM usage when scrolling through the first 20 pages.
Still trivially reproduces on current Nightly.

> Getting ~700/750 MB Ram Usage on Scrolling through the first 20 Pages.

Uh, the document has 3800+ pages. Try reading more than 1% of it.
gcp, how does Nightly do now? Memory usage seems ok to me now, considering the document size -- I couldn't get Firefox's RSS above ~800 MiB -- but maybe I'm not interacting with the document in the same way you are.
I can still get it arbitrarily high by browsing through the document. It seems to rise less sharply (i.e. pdf.js memory usage is better), but still keeps going up (i.e. it's still *leaking*). I attached a memory report where it's using about 2.3 GB.
This is interesting:

> │ ├──719.08 MB (41.94%) -- top(http://stacks.math.columbia.edu/download/book.pdf, id=121)
> │ │ ├──653.60 MB (38.12%) -- active/window(http://stacks.math.columbia.edu/download/book.pdf)
> │ │ │ ├──402.67 MB (23.48%) -- dom
> │ │ │ │ ├──390.55 MB (22.78%) ── orphan-nodes
> │ │ │ │ └───12.12 MB (00.71%) ++ (5 tiny)

As is this:

> 4,372.27 MB ── gpu-committed
> 1,580.04 MB ── gpu-dedicated
>    20.52 MB ── gpu-shared
gcp: another interesting experiment would be to append "#textLayer=off" to the URL through which you access the file. This disables the text layer, which means you won't be able to select text, but it might make pdf.js faster and use less memory. I think it will also prevent orphan nodes from accumulating, since the text layer is the only part of pdf.js (AFAIK) that involves lots of DOM nodes.
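For context, viewer options like `textLayer=off` are passed in the URL fragment as "&"-separated key=value pairs. A minimal sketch of parsing that style of hash parameter (the function name is illustrative, not the pdf.js implementation):

```javascript
// Sketch: extract "#key=value&key2=value2"-style options from a URL
// fragment, the general shape the pdf.js viewer uses for hash parameters.
// parseHashParams is a hypothetical name for illustration.
function parseHashParams(url) {
  const hashIndex = url.indexOf("#");
  if (hashIndex < 0) return {};
  const params = {};
  for (const pair of url.slice(hashIndex + 1).split("&")) {
    const eq = pair.indexOf("=");
    if (eq < 0) {
      params[pair] = ""; // bare flag with no value
    } else {
      params[pair.slice(0, eq)] = pair.slice(eq + 1);
    }
  }
  return params;
}

const opts = parseHashParams(
  "http://stacks.math.columbia.edu/download/book.pdf#textLayer=off"
);
console.log(opts.textLayer); // "off"
```

So appending `#textLayer=off` simply flips a viewer option at load time; nothing about the PDF file itself changes.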
Sounds like you found it indeed. Memory usage increases by about 300 MB when the PDF is opened, but increases negligibly after that and barely went over 900 MB even after reading halfway through it.
Bug 1054161 may help substantially.