Build peak memory usage testing framework


Status

RESOLVED INCOMPLETE
Reported: 4 years ago
Modified: a year ago

People

(Reporter: gwagner, Assigned: erahm)

Tracking

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [MemShrink:P2])

(Reporter)

Description

4 years ago
We always run into memory-related issues when we are about to finish a release.
It seems we accumulate many small memory regressions along the way, and it's impossible to fix them all at the end of a release cycle.

We need a memory profiler that takes a static web page (like a snapshot of cnn.com) and makes sure the peak memory usage doesn't regress.

Gabriele, do you have any ideas about this?
(Reporter)

Updated

4 years ago
Flags: needinfo?(gsvelto)
Whiteboard: [MemShrink]
I know :nbp tracks JS performance on a very regular basis; maybe he has some useful input on this.
Flags: needinfo?(nicolas.b.pierron)
The most reliable way I have found so far to measure the peak memory used by a process is to run it under Valgrind, especially with the experimental DHAT tool, which gives even more information, such as how the allocations are used.

http://valgrind.org/docs/manual/dh-manual.html
Flags: needinfo?(nicolas.b.pierron)
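As a rough illustration of the suggestion above, a harness could assemble a DHAT invocation like this. This is a hypothetical helper, not part of any existing tool; the tool name (`exp-dhat`, as DHAT shipped as an experimental Valgrind tool at the time) and the profiled binary are assumptions.

```python
# Hypothetical helper: build the argv list for profiling a program under
# Valgrind's experimental DHAT tool. Nothing here is B2G code; the
# program path and arguments are whatever the caller wants to profile.
def build_dhat_command(program, args=(), show_top_n=10):
    """Return the command line to run `program` under DHAT."""
    return [
        "valgrind",
        "--tool=exp-dhat",              # DHAT was "exp-dhat" in this era
        "--show-top-n=%d" % show_top_n, # how many allocation points to report
        program,
    ] + list(args)
```

A harness would pass the resulting list to its process launcher and parse DHAT's report from stderr afterwards.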
We have always used a mix of manual techniques to address memory regressions, or even just optimizations. Mostly they involved peeking into about:memory (and potentially adding more detailed breakdowns to it) and using DMD when we were dealing with memory outside the most common pools we track.

Using about:memory for testing memory usage suffers from a significant problem: it's not really repeatable, as small timing differences often mean you get different measurements. To get reasonably consistent measurements one must minimize the heap first, which makes it useless for tracking peak usage.

Nicolas' suggestion in comment 2 sounds very promising, and IIRC we can run B2G under Valgrind, so that's something we might want to do. The only downside I can think of is that with Nuwa we care only about USS, and any kind of tracking happening from within the process will always see the RSS. I think that's minor though, because peak usage in an application shouldn't affect USS, and we can easily ensure we don't regress it by tracking the USS of the preallocated process across commits. I doubt more memory will be shared after the point where we use the preallocated process to launch an app.
Flags: needinfo?(gsvelto)
(Assignee)

Comment 4

4 years ago
I think we could add |VmHWM| to the about:memory report on Linux/FxOS if that would help.
Whiteboard: [MemShrink] → [MemShrink:P2]
(Assignee)

Comment 5

4 years ago
Discussed w/ gwagner on IRC. It sounds like what we want is at least daily monitoring of memory usage w/ automatic regression reporting.

An option for achieving this is:
  1) Add b2g support to AWSY (this is already a goal of mine). Run against either every nightly or every b-i.
  2) Add regression reporting to AWSY (this is already a goal of mine) via the same mechanism as talos. Bug 1000268 relates to this.
  3) Add a suite of tests for specific apps. A proof of concept would be to just measure usage of system app/homescreen/nuwa/preallocated at boot time.
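The regression reporting in step 2 boils down to comparing each build's per-app measurement against a stored baseline with some tolerance. A hypothetical sketch of that check (not AWSY or Talos code; the dict shapes and tolerance are assumptions):

```python
# Hypothetical regression check: given per-app memory measurements in kB
# for a baseline build and the current build, return the apps whose usage
# grew by more than `tolerance` (a fraction; 0.05 means 5%).
def find_regressions(baseline, current, tolerance=0.05):
    """Return {app: (baseline_kb, current_kb)} for regressed apps."""
    return {
        app: (baseline[app], kb)
        for app, kb in current.items()
        if app in baseline and kb > baseline[app] * (1 + tolerance)
    }
```

For example, `find_regressions({"homescreen": 20000}, {"homescreen": 22000})` flags the homescreen, since 22000 kB exceeds the 5% threshold of 21000 kB.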
Assignee: nobody → erahm
Depends on: 1145007
(Assignee)

Comment 6

4 years ago
Eli, could raptor take care of this?
Flags: needinfo?(eperelman)

Comment 7

4 years ago
It's possible, but I guess that depends on what you want to measure. (Keep in mind I don't know much about memory or how it can be gathered.)

Raptor already captures USS, PSS, and RSS for every application once it is fully loaded, using b2g-info. This may not represent the peak memory usage of the app, but it does represent the moment when the app has signaled that everything is done.

For reference, here is Raptor's accumulation of memory information. Please let me know if there is more to discuss with regards to Raptor being able to help out here.
Flags: needinfo?(eperelman)
(Assignee)

Updated

a year ago
Status: NEW → RESOLVED
Last Resolved: a year ago
Resolution: --- → INCOMPLETE