Bug 971694 (closed). Opened 6 years ago; closed 4 years ago.

AWSY mobile: large increase in memory usage around Feb 8

Categories

(Firefox for Android :: General, defect)

Platform: All / Android

RESOLVED INCOMPLETE

People

(Reporter: kats, Unassigned)

Details

(Whiteboard: [MemShrink:P2])

Attachments

(3 files)

The data at areweslimyet.com/mobile shows an increase in memory usage around Feb 8. The data is pretty spotty here (because of bug 971001, which is now fixed) but the range in which the memory usage went up is

https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=00c0dfc4c76b&tochange=2be06e76cfd0

The data for the "before" can be found at: http://areweslimyet.mobi/data/mozilla-inbound/1391853305/ and the data for the "after" is at http://areweslimyet.mobi/data/mozilla-inbound/1391873523/
Since the TabsOpen data can sometimes be subject to variation because of tab zombification, I did a diff on the TabsClosedForceGC data, which should be pretty stable. The resident memory usage is ~11MB higher at the end of the pushlog range compared to the before, but the explicit memory usage is only ~2MB higher, and is broken down in the attachment.
Nick, do you know what "explicit/heap-overhead/waste" is and why it would go up by 4MB? The changes in the pushlog range seem pretty benign, except possibly for Nathan's (I'm not sure what that's doing).
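For reference, the kind of snapshot diff described above can be sketched in Python. This is a hypothetical helper, not part of AWSY: it assumes an about:memory JSON dump with a top-level "reports" array of {"path", "amount"} entries, and the names and threshold are my own choices.

```python
import json

def load_reports(path):
    """Load an about:memory JSON dump (assumed format: a top-level
    "reports" array of {"path": ..., "amount": ...} entries) and
    return a {path: total bytes} mapping."""
    with open(path) as f:
        dump = json.load(f)
    sizes = {}
    for report in dump["reports"]:
        sizes[report["path"]] = sizes.get(report["path"], 0) + report["amount"]
    return sizes

def diff_reports(before, after, threshold=100 * 1024):
    """Yield (path, delta in bytes) for every path whose size changed
    by at least `threshold` bytes between the two snapshots."""
    for path in sorted(set(before) | set(after)):
        delta = after.get(path, 0) - before.get(path, 0)
        if abs(delta) >= threshold:
            yield path, delta
```

Diffing the TabsClosedForceGC snapshots this way sidesteps the tab-zombification noise in the TabsOpen numbers, since only paths with a meaningful delta survive the threshold.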
Flags: needinfo?(n.nethercote)
Whiteboard: [MemShrink]
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #2)
> Nick, do you know what "explicit/heap-overhead/waste" is and why it would go
> up by 4MB? The changes in the pushlog range seem pretty benign, except
> possibly for Nathan's (I'm not sure what that's doing).

JFTR, my change essentially changes this:

// JSID_VOID serves as a marker indicating whether we've initialized all the jsids in the array.
static jsid idArray[N] = { JSID_VOID };

...

// Try to initialize the array.
if (idArray[0] == JSID_VOID && !InitIds(&idArray[0])) {
  // We failed; explicitly reset the state of the array so we can try again next time.
  idArray[0] = JSID_VOID;
  return;
}

to this:

// No initializer; the previous initializer was causing a static constructor.
static jsid idArray[N];

...

// Use an explicit boolean flag to track initialization state.
static bool initializedFlag = false;
if (!initializedFlag) {
  if (!InitIds(&idArray[0])) {
    // Initialization failed, but the flag is still false, so we'll retry next time.
    return;
  }
  initializedFlag = true;
}

I am at a loss to see how this increases memory consumption at all.  But like I said to mfinkle looking at this yesterday, I have a hard time seeing how it could be anything else in that regression range.  (Unless the APZC changes somehow extend the lifetime(s) of objects...but that seems unlikely.)
Actually, looking at the raw data, I'm not sure that pushlog range is right; what looks like a large jump there is more of a blip that the graph version over-represents. Attached is the raw data; eyeballing it, I would pin the regression on something like the range cb2e9f9a09f7..18c901d3e4c5. Thoughts?
jlebar implemented "waste", so all I know is from the tool-tip:

> Committed bytes which do not correspond to an active allocation and which the
> allocator is not intentionally keeping alive (i.e., not 'heap-bookkeeping' or
> 'heap-page-cache').  Although the allocator will waste some space under any
> circumstances, a large value here may indicate that the heap is highly
> fragmented, or that the allocator is performing poorly for some other reason.

AIUI, it's not something we have much control over, unless we're willing to rewrite jemalloc. I realize that doesn't help much when dealing with a regression; sorry.
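As a rough mental model only (the real jemalloc accounting is considerably more involved), the tooltip's definition of "waste" amounts to the committed bytes left over after subtracting live allocations and the categories the allocator keeps on purpose:

```python
def heap_waste(committed, active_allocations, bookkeeping, page_cache):
    """Rough model of explicit/heap-overhead/waste: committed bytes that
    neither back a live allocation nor are intentionally retained by the
    allocator (heap-bookkeeping, heap-page-cache).  An approximation for
    intuition, not jemalloc's actual arithmetic."""
    return committed - active_allocations - bookkeeping - page_cache
```

Under this model, a rise in "waste" with flat "active allocations" points at fragmentation: the heap holds the same live data but spread across more committed pages.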
Flags: needinfo?(n.nethercote)
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #4)
> Created attachment 8374836 [details]
> Resident memory usage for TabsClosedForceGC over a bunch of revs
> 
> Actually looking at the raw data I'm not sure that pushlog range is right;
> that seems to be a large jump but it's more of a blip that appears
> over-represented in the graph version. Attached is the raw data and from
> eyeballing that I would probably pin the regression on something like the
> range cb2e9f9a09f7..18c901d3e4c5. Thoughts?

I turned this into a chart to see if it would be easier to see trends. I'll post it.
I guessed at 3 different plateaus. Yes, it's arguable that I am way off base.

The middle plateau might start around: b54e8c328c32 (175644672)
The right-most plateau seems to start at: 2be06e76cfd0 (182337536)
Inspired by your plateau graph, I spent some time yesterday breaking the data down even further, to the individual components of the about:memory dump, and graphing those over the same range. I skipped all the graphs that were just a flat line, but in the remaining graphs there are a lot more plateaus and bi-modal or tri-modal data graphs. It's pretty interesting, and I feel like I should be able to write something to auto-detect regressions in the more stable datasets, but I don't know when I'll be able to get around to it.
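The auto-detection idea could be sketched as a toy step detector over the per-changeset byte counts. This is not AWSY code; the windowed-mean approach, window size, and 2MB threshold are all arbitrary assumptions:

```python
def find_steps(values, window=5, min_jump=2 * 1024 * 1024):
    """Flag index i when the mean of the `window` samples starting at i
    differs from the mean of the `window` samples before i by at least
    `min_jump` bytes.  Samples adjacent to a real step also trip the
    threshold, so keep only the strongest flag in each neighborhood."""
    deltas = []
    for i in range(window, len(values) - window + 1):
        before = sum(values[i - window:i]) / window
        after = sum(values[i:i + window]) / window
        deltas.append((i, after - before))
    steps = []
    for i, d in deltas:
        if abs(d) < min_jump:
            continue
        # Keep i only if it is the local maximum of |delta| nearby.
        neighborhood = [abs(d2) for j, d2 in deltas if abs(j - i) <= window]
        if abs(d) >= max(neighborhood):
            steps.append((i, d))
    return steps
```

On a clean plateau-to-plateau series this reports a single (index, jump-size) pair per step; on the noisier AWSY paths the window would need tuning.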

What I have so far is at http://areweslimyet.mobi/plotter-results/ (note that I may update this in-place as I come up with further analysis techniques and whatnot).
See Also: → 970360
> What I have so far is at http://areweslimyet.mobi/plotter-results/ (note
> that I may update this in-place as I come up with further analysis
> techniques and whatnot).

I don't understand that page at all...
Whiteboard: [MemShrink] → [MemShrink:P2]
It's graphs for all the paths inside the about:memory dump, over a range of 175 changesets. For example the first graph (top left) is the graph of "decommited" values over 175 changesets. The next one is "decommited/js-non-window" values over the same 175 changesets. The tooltip on the graph tells you what path it's for.

The interesting thing is that values that seem initially random (like "explicit") actually have sub-paths that are very stable, and you can clearly see regressions in those sub-paths.

The page might be better if it were laid out to mirror the tree structure of the about:memory dumps, but I didn't have time to figure out how to do that in a sane way.
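The "skip graphs that are just a flat line" filter described above could look something like this. A sketch with hypothetical names; the relative tolerance is an arbitrary choice:

```python
def interesting_paths(series_by_path, rel_tolerance=0.01):
    """Given {about:memory path: [value per changeset]}, drop paths whose
    series is essentially flat: the spread (max - min) is below
    rel_tolerance of the mean.  Everything else is worth graphing."""
    kept = {}
    for path, values in series_by_path.items():
        lo, hi = min(values), max(values)
        mean = sum(values) / len(values)
        if mean == 0:
            # All-zero (or zero-mean) series: keep only if it moves at all.
            if hi != lo:
                kept[path] = values
            continue
        if (hi - lo) / abs(mean) >= rel_tolerance:
            kept[path] = values
    return kept
```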
(In reply to Kartikaya Gupta (email:kats@mozilla.com) from comment #10)
> It's graphs for all the paths

By "all" I mean "all whose graph is not just a flat line"
We probably don't care about this bug anymore. The stuff in the last few comments is tracked by bug 1000268.
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → INCOMPLETE