Major memory usage regressions in v2.5

RESOLVED FIXED

Status

P2
major
RESOLVED FIXED
4 years ago
3 years ago

People

(Reporter: gsvelto, Unassigned)

Tracking

({regression})

unspecified
ARM
Gonk (Firefox OS)
regression
Dependency tree / graph

Firefox Tracking Flags

(tracking-b2g:+)

Details

(Whiteboard: [MemShrink:meta][Profile-wanted])

Attachments

(5 attachments)

175.14 KB, application/x-gnome-theme-package
Details
167.62 KB, application/x-gnome-theme-package
Details
151.98 KB, application/x-gnome-theme-package
Details
314.97 KB, application/x-gnome-theme-package
Details
258.96 KB, application/x-gzip
Details
(Reporter)

Description

4 years ago
While investigating our memory consumption I noticed that we're using quite a bit more memory in the v3.0/master branch than we did in v2.2, and even more compared to v2.1. As a test I launched six apps from a fresh gaia/gecko installation with empty workloads and grabbed their about:memory profiles and b2g-info output. This is a summary of the USS for every process as measured via b2g-info:

           NAME   v2.1   v2.2 master
            b2g   53.3   61.2   64.6
         (Nuwa)    2.3    5.5    8.9
     Homescreen   10.6   13.8   24.1
Built-in Keyboa   11.6   12.6   13.7
 Communications    9.9   10.8   16.7
       Messages   15.4   17.1   17.3
        Gallery    8.7    8.6   14.9
          Music   13.7    9.0   15.4
          Clock    9.0   14.6   15.4
       Settings   17.4   13.9   20.5
 (Preallocated)    5.1    5.2      -

There's been a significant increase in USS caused by bug 1151672, which points to a lot of memory no longer being shared with the Nuwa/preallocated process. Additionally we have some genuine regressions in certain apps; I'll attach the full reports shortly.
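To make the per-process comparison in the table above easier to eyeball, a quick diff helps. The sketch below is a hypothetical helper (the dicts are hand-copied from the table, not parsed from real b2g-info output):

```python
# Hypothetical helper: diff per-process USS (in MiB) between two branches.
# The snapshot dicts are illustrative values copied from the table above,
# not actual b2g-info output.

def uss_deltas(baseline, current):
    """Return {process: USS delta in MiB} for processes in both snapshots."""
    return {
        name: round(current[name] - baseline[name], 1)
        for name in baseline
        if name in current
    }

v2_1 = {"b2g": 53.3, "Homescreen": 10.6, "Gallery": 8.7}
master = {"b2g": 64.6, "Homescreen": 24.1, "Gallery": 14.9}

# Print the biggest regressions first.
for name, delta in sorted(uss_deltas(v2_1, master).items(),
                          key=lambda kv: -kv[1]):
    print(f"{name:>12} {delta:+.1f} MiB")
```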
(Reporter)

Comment 1

4 years ago
Created attachment 8602763 [details]
about:memory master branch
(Reporter)

Comment 2

4 years ago
Created attachment 8602764 [details]
about:memory v2.2 branch
(Reporter)

Comment 3

4 years ago
Created attachment 8602765 [details]
about:memory v2.1 branch
(Reporter)

Comment 4

4 years ago
Created attachment 8602766 [details]
about:memory v2.0 branch
(Reporter)

Comment 5

4 years ago
I've quickly glanced over the about:memory reports and found the following:

- The gc -- nursery committed parameter has increased across the board; this suggests the GC was tweaked in a way that makes it consume more memory. This accounts for a 2.5MiB increase in the system app between v2.1 and master and somewhat less between v2.2 and master.

- heap-unclassified jumped by almost 3MiB in the system app between v2.1 and master; nasty

- The homescreen is holding on to more images than it used to; this is after minimization, so we might be retaining those images accidentally.

- The system app is holding on to two large images (1.78 MiB each) the size of the screen which are decoded and retained in non-heap memory. That sounds like the wallpaper; however, it's bizarre that there are two of them. In the pre-master reports only one showed up.

- heap-overhead has gone up significantly, most of it being in the page-cache parameter. That should be zero after minimization (bug 805855) and in theory all background apps should madvise() away that area. I have to double-check if that's a genuine regression or not.

- Then there's the death by a thousand cuts: a lot of stuff has increased a little bit. I fear that's caused by the regular feature/complexity creep and there's not much we can do about it
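To make the "madvise() away" point above concrete, here is a minimal Linux-only illustration (not actual jemalloc/Gecko code; requires Python 3.8+) of how cached-but-unused pages can be handed back to the kernel, which is what should make page-cache drop out of a background app's USS:

```python
# Linux-only sketch of what "madvise() away" means: a dirty anonymous
# private page can be returned to the kernel with MADV_DONTNEED, after
# which it stops counting against USS and the next access sees
# zero-filled memory. Not the actual allocator code, just the mechanism.
import mmap

PAGE = mmap.PAGESIZE

# Private anonymous mapping (fileno=-1 implies MAP_ANONYMOUS on Linux).
m = mmap.mmap(-1, PAGE, flags=mmap.MAP_PRIVATE)

m[0:4] = b"\xab" * 4          # dirty the page: it now consumes RAM
assert m[0] == 0xab

m.madvise(mmap.MADV_DONTNEED)  # discard contents, keep the mapping valid
assert m[0] == 0               # zero-fill-on-demand on next access

m.close()
```

This is why, after minimization, page-cache should ideally be zero: the pages are still mapped but no longer backed by physical memory.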

As for the non-application measurements I've found this:

- kgsl-memory has increased; this seems due to the images being locked into graphics memory by the system app as per my comment above

- The system app gained 12 MiB between v2.1 and master; explicit allocations only account for half of that, with the remaining half mostly coming from libxul.so growing from 4.26 to 5.53 MiB and /dev/ashmem usage going up from 3.73 MiB to 5.31 MiB

I've CC'd anybody I thought might be interested in these results, feel free to point more people at this bug.
Seth: could any of the recent gecko image changes have caused any of the memory usage increases listed in comment #5?
Flags: needinfo?(seth)
Possibly, although I'm not yet sure whether there's a bug here.

Gabriele, am I correct in guessing that you took these measurements while the device was on the home screen?
Flags: needinfo?(seth) → needinfo?(gsvelto)
Blocks: 1080674
Blocks: 1157001
Blocks: 1157473
Blocks: 1161233
njn, are you or someone on your team interested in doing a DMD run of this?
Flags: needinfo?(n.nethercote)
Depends on: 1162678
Bug 1156611 just landed on m-i. Let's get some new numbers for trunk.
Depends on: 1156611
It's interesting that the Clock is jumping so much despite little development going on in that app between those versions.

Because of how stable Clock is, it may be a good app to catch some system-wide regressions.
(Reporter)

Comment 11

4 years ago
(In reply to Seth Fowler [:seth] from comment #7)
> Gabriele, am I correct in guessing that you took these measurements while
> the device was on the home screen?

Yes, all measurements were taken on the homescreen.
Flags: needinfo?(gsvelto)
(In reply to Gabriele Svelto [:gsvelto] from comment #11)
> (In reply to Seth Fowler [:seth] from comment #7)
> > Gabriele, am I correct in guessing that you took these measurements while
> > the device was on the home screen?
> 
> Yes, all measurements were taken on the homescreen.

That'd mean that the images on the homescreen would be locked (because of bug 1148696, which is present on master and not on 2.2, though I've recently requested uplift). So they would not be freed when you hit "minimize memory usage".

If this is the cause, this doesn't really represent a regression in memory usage from a practical perspective, as we'd immediately reallocate that memory and redecode those images anyway the next time we needed to paint. IOW, practically speaking we can't get away from the requirement that visible images be decoded.

I suspect that's the only reason there seems to be a regression in image memory used by the homescreen. We could verify this by switching to a different app, hitting "minimize memory usage", then grabbing another memory report and checking that the extra image memory associated with the homescreen has been freed.

Updated

4 years ago
tracking-b2g: --- → +
Depends on: 1162812
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #8)
> njn, are you or someone on your team interested in doing a DMD run of this?

erahm, could you do one? Thank you.
Flags: needinfo?(n.nethercote) → needinfo?(erahm)

Comment 14

4 years ago
(In reply to Nicholas Nethercote [:njn] from comment #13)
> (In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #8)
> > njn, are you or someone on your team interested in doing a DMD run of this?
> 
> erahm, could you do one? Thank you.

The standard offenders are in there (1.5MiB from libgles); a few that caught my eye:

- 843,776 unreported bytes from XPT. It looks like |XPTInterfaceInfoManager::CollectReports| is not being called in the main b2g process (but is in children). I'm guessing this is a regression, I'll follow up to see if it works as expected on 2.2
- 802,762 unreported bytes from |sqlite3MemMalloc|
- 159,744 unreported bytes from |mozilla::ipc::OpenDescriptor|
- 69,632 unreported bytes from |mozilla::dom::gonk::SystemWorkerManager::Init|
- 69,632 unreported bytes from |NetlinkPoller|
- 65,488 unreported bytes from |nsCSSSelector::AddAttribute|
- 49,152 unreported bytes from |js::LifoAlloc::alloc| (this is ion compilation, see bug 1156316)

So roughly we can say:
1) |XPTInterfaceInfoManager::CollectReports| isn't working, this adds 800K of unclassified
2) There's a whole lot of sqlite stuff going unreported. Main culprits appear to be |nsPermissionManager|, |DOMStorageDBThread|, |nsCookieService|, and |certverifier|
3) There's a fair amount of IPC messaging stuff that's not being reported
4) Some CSS attributes are not being reported

...and I could go on.
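Tallying the unreported allocations listed above (sizes copied from comment 14, labels abbreviated) shows they already account for roughly 2 MiB on their own, consistent with the heap-unclassified numbers discussed earlier:

```python
# Back-of-the-envelope total of the unreported bytes listed in comment 14.
unreported = {
    "XPT": 843_776,
    "sqlite3MemMalloc": 802_762,
    "OpenDescriptor": 159_744,
    "SystemWorkerManager::Init": 69_632,
    "NetlinkPoller": 69_632,
    "AddAttribute": 65_488,
    "LifoAlloc::alloc": 49_152,
}
total = sum(unreported.values())
print(f"total: {total} bytes (~{total / 2**20:.2f} MiB)")
# -> total: 2060186 bytes (~1.96 MiB)
```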
Flags: needinfo?(erahm)

Comment 15

4 years ago
Created attachment 8604252 [details]
3.0 DMD for system app

Comment 16

4 years ago
(In reply to Eric Rahm [:erahm] from comment #14)
> - 843,776 unreported bytes from XPT. It looks like
> |XPTInterfaceInfoManager::CollectReports| is not being called in the main
> b2g process (but is in children). I'm guessing this is a regression, I'll
> follow up to see if it works as expected on 2.2

This problem is present in 2.2 as well.

Also note: heap-unclassified actually went down 4MiB for me from 2.2 -> 3.0. This could just be noise, or the primary regression really is from 2.1 -> 2.2.

Comment 17

4 years ago
(In reply to Eric Rahm [:erahm] from comment #16)
> (In reply to Eric Rahm [:erahm] from comment #14)
> > - 843,776 unreported bytes from XPT. It looks like
> > |XPTInterfaceInfoManager::CollectReports| is not being called in the main
> > b2g process (but is in children). I'm guessing this is a regression, I'll
> > follow up to see if it works as expected on 2.2
> 
> This problem is present in 2.2 as well.

And 2.1.

My delta for 2.1 -> 3.0 looked like this:
> 5.32 MB (100.0%) -- explicit
> ├──4.70 MB (88.42%) ++ js-non-window
> ├── -1.99 MB (-37.36%) ── heap-unclassified
> ├──1.17 MB (21.91%) ++ images/content/raster/used
> ├──1.60 MB (30.08%) ++ heap-overhead
> ├──1.07 MB (20.16%) ++ dmd <----- ignore this
> ├── -0.77 MB (-14.42%) ++ window-objects
> ├── -0.37 MB (-7.02%) ++ storage

So unfortunately this goes to show that YMMV when measuring memory.

Updated

4 years ago
Whiteboard: [MemShrink]
Whiteboard: [MemShrink] → [MemShrink:meta]
(Reporter)

Comment 18

4 years ago
(In reply to Seth Fowler [:seth] from comment #12)
> That'd mean that the images on the homescreen would be locked (because of
> bug 1148696, which is present on master and not on 2.2, though I've recently
> requested uplift). So they would not be freed when you hit "minimize memory
> usage".
> [...]
> If this is the cause, this doesn't really represent a regression in memory
> usage from a practical perspective, as we'd immediately reallocate that
> memory and redecode those images anyway the next time we needed to paint.
> IOW, practically speaking we can't get away from the requirement that
> visible images be decoded.


Thanks for the explanation. I've retested master and both the additional kgsl memory and the extra images held by the homescreen are gone. Is this the effect of other changes related to image locking?
Can we test the 4 bugs that this bug blocks again? We backed out bug 805167, which should improve the memory situation a lot.
Keywords: qawanted
(Reporter)

Comment 20

4 years ago
FYI I've retested the current master and the numbers now look like this:

           NAME   v2.1   v2.2  05/07  today
            b2g   53.3   61.2   64.6   57.2
         (Nuwa)    2.3    5.5    8.9    6.0
     Homescreen   10.6   13.8   24.1   17.0
Built-in Keyboa   11.6   12.6   13.7   13.4
 Communications    9.9   10.8   16.7   11.3
       Messages   15.4   17.1   17.3   12.4
        Gallery    8.7    8.6   14.9   14.3
          Music   13.7    9.0   15.4   10.5
          Clock    9.0   14.6   15.4   10.6
       Settings   17.4   13.9   20.5   19.5
 (Preallocated)    5.1    5.2      -    7.1

Once we exclude the increases caused by image locking, things are already looking better.
(In reply to Gabriele Svelto [:gsvelto] from comment #18)
> Thanks for the explanation. I've retested master and both the additional kgsl
> memory and the extra images held by the homescreen are gone. Is this the
> effect of other changes related to image locking?

That doesn't make much sense to me - if the homescreen is visible, we should be holding on to those images. If we did drop those images despite the homescreen being visible, that's probably a bug. =\
It's worth noting that there *are* some known issues with image memory usage on master as a result of bug 1124084 landing last week. I'm assuming that Gabriele's original numbers didn't include bug 1124084's changes.

The most serious issue was bug 1163878, which I pushed a fix for last night. I'm continuing to monitor the situation and will push additional fixes as necessary.
Depends on: 1163878
(Reporter)

Comment 23

4 years ago
I'll do another test ASAP and attach the about:memory reports. What I meant in my previous comment is that between v2.1 and my first test of master I found a whole bunch of images being retained. During my second test fewer images were being retained, but still more than on v2.1. So the changes you mention might have affected my measurements.
(In reply to Gregor Wagner [:gwagner] from comment #19)
> Can we test the 4 bugs that this bug blocks again? We backed out bug 805167
> which should improve the memory situation a lot.

On today's 3.0 Flame,

Bug 1080674 reproduced 1 out of 2 attempts. Browser LMK'ed while viewing the website.
Bug 1157001 reproduced 1 out of 5 attempts. App relaunched to main page after edge gesturing.
Bug 1157473 reproduced 1 out of 5 attempts. App relaunched to main page when accessed from card view.
Bug 1161233 reproduced 1 out of 2 attempts. I count the app relaunch as a reproduction, instead of force-close like the original issue.

Device: Flame (kk, full flashed, 319mb)
BuildID: 20150514010203
Gaia: 338f66e6a96491d2f5854b188c6b141ceb690d97
Gecko: 1fab94ad196c
Gonk: 040bb1e9ac8a5b6dd756fdd696aa37a8868b5c67
Version: 41.0a1 (3.0 Master)
Firmware Version: v18D-1
User Agent: Mozilla/5.0 (Mobile; rv:41.0) Gecko/41.0 Firefox/41.0
QA Whiteboard: [QAnalyst-Triage?]
Flags: needinfo?(ktucker)
Keywords: qawanted
QA Whiteboard: [QAnalyst-Triage?] → [QAnalyst-Triage+]
Flags: needinfo?(ktucker)
Blocks: 1172167
Depends on: 1176502
QA Whiteboard: [QAnalyst-Triage+] → [QAnalyst-Triage+][qa-tracking]
(Reporter)

Updated

3 years ago
Depends on: 1204837
(Reporter)

Comment 25

3 years ago
Adjusting the bug title to reflect the current release status.
Summary: Major memory usage regressions in v3.0 → Major memory usage regressions in v2.5

Updated

3 years ago
Severity: normal → major
Keywords: regression
Priority: -- → P2
Should we nominate this for 2.5, or is it being treated as a meta?
So I'll re-use this bug to share findings from bug 1197231. We've tried different Gecko and Gaia combinations and it looks like we have a Gecko v2.2 vs. Gecko master regression. Here are the results for the Messages app (Gaia v2.2 on Gecko v2.2 and Gaia v2.2 on Gecko v2.5):

Gaia v2.2 + Gecko v2.2, 30 Raptor runs
| Metric                | Mean     | Median   | Min    | Max    | StdDev | 95% Bound |
| --------------------- | -------- | -------- | ------ | ------ | ------ | --------- |
| uss                   | 16.687   | 16.600   | 16.300 | 19.700 | 0.592  | 16.899    |
| pss                   | 20.533   | 20.500   | 20.100 | 23.600 | 0.611  | 20.752    |
| rss                   | 34.380   | 34.300   | 34     | 37.300 | 0.576  | 34.586    |

Gaia v2.2 + Gecko v2.5, 30 Raptor runs
| Metric                | Mean     | Median   | Min    | Max    | StdDev | 95% Bound |
| --------------------- | -------- | -------- | ------ | ------ | ------ | --------- |
| uss                   | 18.479   | 18.439   | 17.926 | 19.203 | 0.307  | 18.589    |
| pss                   | 22.569   | 22.526   | 22.032 | 23.303 | 0.311  | 22.680    |
| rss                   | 37.895   | 37.852   | 37.348 | 38.629 | 0.310  | 38.006    |

Also I've captured about:memory reports (10 seconds after the app is fully loaded) - [1] (see the diff between "about-memory-Gaia v2.2 Gecko v2.2 Messages" and "about-memory-Gaia v2.2 Gecko v2.5 Messages"). Per the profiles there is a more-than-2MB difference for the same Gaia on different Geckos. I believe we should see this regression across all Gaia apps.

It's definitely hard to say what regressed exactly, but there are some things that occupy memory in master Gecko that I don't see in Gecko v2.2. In particular I'm referring to these two:

* There are two new compartments: compartment([System Principal], processChildGlobal) and compartment([System Principal], Addon-SDK);
* There is a new "layout ─> rule-processor-cache" that per bug 77999 should have improved memory consumption, but I see +0.4MB in the diff because of it (maybe it reduced memory in another place...).

Maybe they are totally fine; we need an experienced eye here :)

Nicholas, could you please help us here? At the very least we need to figure out whether we can do anything on the Gecko side and how to move this forward.

[1] https://drive.google.com/folderview?id=0B_RkmK8mrD-ITGpoSDlxUElyb1E&usp=sharing#list
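For reference, the columns in these Raptor tables can be reproduced with the stdlib. The "95% Bound" values appear to match mean + 1.96 * stddev / sqrt(n), the upper end of a normal-approximation confidence interval; that formula is inferred from the numbers in the tables, not read from Raptor's source:

```python
# Sketch of the summary statistics shown in the Raptor tables. The
# "bound95" formula (mean + 1.96 * stddev / sqrt(n)) is inferred from
# the reported numbers, not taken from Raptor's implementation.
import math
import statistics

def summarize(samples):
    n = len(samples)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)  # sample standard deviation
    return {
        "mean": round(mean, 3),
        "median": statistics.median(samples),
        "min": min(samples),
        "max": max(samples),
        "stddev": round(stdev, 3),
        "bound95": round(mean + 1.96 * stdev / math.sqrt(n), 3),
    }

# Illustrative USS samples (MiB); a real run would have 30 of them.
uss = [16.3, 16.5, 16.6, 16.6, 16.7, 16.7, 16.8, 16.9, 17.0, 19.7]
print(summarize(uss))
```

Sanity check against the tables: for the Gecko v2.2 uss row (mean 16.687, stddev 0.592, n = 30), mean + 1.96 * 0.592 / sqrt(30) gives 16.899, matching the reported 95% bound.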
Flags: needinfo?(n.nethercote)
Oleg: can you also share the perf number difference from the raptor results? And if you have a chance, run raptor-compare against the metrics.ldjson from those two tests to check significance?
Oleg, did you use "-m" when getting the memory dumps with get_about_memory.py ?
(In reply to Zibi Braniecki [:gandalf][:zibi] from comment #28)
> Oleg: can you share also perf number difference from the raptor results?

Gaia v2.2 + Gecko v2.2, 30 runs
| Metric                | Mean     | Median   | Min    | Max    | StdDev | 95% Bound |
| --------------------- | -------- | -------- | ------ | ------ | ------ | --------- |
| navigationLoaded      | 961.967  | 964      | 846    | 1009   | 33.687 | 974.022   |
| willRenderThreads     | 1001.833 | 1002     | 884    | 1047   | 33.184 | 1013.708  |
| navigationInteractive | 1005.400 | 1004.500 | 892    | 1050   | 32.027 | 1016.861  |
| visuallyLoaded        | 1313.133 | 1312.500 | 1245   | 1366   | 24.930 | 1322.054  |
| contentInteractive    | 1784.833 | 1779     | 1720   | 1857   | 32.885 | 1796.601  |
| objectsInitEnd        | 1890.033 | 1887.500 | 1824   | 1958   | 32.746 | 1901.751  |
| fullyLoaded           | 3005.667 | 2972     | 2894   | 3176   | 80.993 | 3034.650  |
| uss                   | 16.717   | 16.700   | 16.300 | 19.500 | 0.547  | 16.912    |
| pss                   | 20.550   | 20.500   | 20.100 | 23.400 | 0.562  | 20.751    |
| rss                   | 34.390   | 34.350   | 34     | 37.200 | 0.547  | 34.586    |

Gaia v2.2 + Gecko v2.5, 30 runs
| Metric                | Mean     | Median   | Min    | Max    | StdDev | 95% Bound |
| --------------------- | -------- | -------- | ------ | ------ | ------ | --------- |
| navigationLoaded      | 1006.900 | 1011.500 | 917    | 1059   | 33.353 | 1018.835  |
| willRenderThreads     | 1042.800 | 1046.500 | 957    | 1095   | 32.610 | 1054.469  |
| navigationInteractive | 1045.633 | 1049.500 | 960    | 1098   | 32.468 | 1057.252  |
| visuallyLoaded        | 1356.233 | 1359.500 | 1292   | 1418   | 29.312 | 1366.722  |
| contentInteractive    | 1860.867 | 1856     | 1712   | 1965   | 48.211 | 1878.119  |
| objectsInitEnd        | 1895.400 | 1890     | 1745   | 2000   | 48.686 | 1912.822  |
| fullyLoaded           | 3137.767 | 3123     | 3063   | 3321   | 58.934 | 3158.856  |
| uss                   | 18.505   | 18.457   | 17.945 | 19.234 | 0.307  | 18.615    |
| pss                   | 22.591   | 22.538   | 22.018 | 23.314 | 0.314  | 22.704    |
| rss                   | 37.903   | 37.852   | 37.344 | 38.645 | 0.312  | 38.015    |

Gaia v2.5 + Gecko v2.5, 30 runs
| Metric                | Mean     | Median   | Min    | Max    | StdDev | 95% Bound |
| --------------------- | -------- | -------- | ------ | ------ | ------ | --------- |
| navigationLoaded      | 1004.967 | 996.500  | 960    | 1142   | 36.800 | 1018.135  |
| willRenderThreads     | 1054.033 | 1046     | 1010   | 1188   | 36.877 | 1067.230  |
| navigationInteractive | 1057.267 | 1052     | 1013   | 1190   | 36.838 | 1070.449  |
| visuallyLoaded        | 1596.067 | 1596     | 1531   | 1676   | 35.403 | 1608.736  |
| contentInteractive    | 2249.700 | 2244     | 2156   | 2331   | 40.399 | 2264.157  |
| objectsInitEnd        | 2284.433 | 2279.500 | 2194   | 2374   | 40.568 | 2298.950  |
| fullyLoaded           | 3719.967 | 3739.500 | 3544   | 3854   | 75.604 | 3747.021  |
| uss                   | 19.222   | 19.154   | 18.777 | 20.195 | 0.302  | 19.330    |
| pss                   | 23.754   | 23.683   | 23.294 | 24.755 | 0.308  | 23.864    |
| rss                   | 39.771   | 39.699   | 39.324 | 40.766 | 0.305  | 39.880    |

> And if you have a chance, run raptor-compare against the metrics.ldjson with
> those two tests to check significance?

base - is Gecko v2.2, Gaia v2.2, 1: - is Gecko v2.5, Gaia v2.2, 2: is Gecko v2.5, Gaia v2.5

sms.gaiamobile.org     base: mean  1: mean  1: delta  1: p-value  2: mean  2: delta  2: p-value
---------------------  ----------  -------  --------  ----------  -------  --------  ----------
navigationLoaded              962     1007        45      * 0.00     1005        43      * 0.00
willRenderThreads            1002     1043        41      * 0.00     1054        52      * 0.00
navigationInteractive        1005     1046        40      * 0.00     1057        52      * 0.00
visuallyLoaded               1313     1356        43      * 0.00     1596       283      * 0.00
contentInteractive           1785     1861        76      * 0.00     2250       465      * 0.00
objectsInitEnd               1890     1895         5        0.62     2284       394      * 0.00
fullyLoaded                  3006     3138       132      * 0.00     3720       714      * 0.00
uss                        16.717   18.505     1.789      * 0.00   19.222     2.505      * 0.00
pss                        20.550   22.591     2.041      * 0.00   23.754     3.204      * 0.00
rss                        34.390   37.903     3.513      * 0.00   39.771     5.381      * 0.00

You can find ldjson files at https://drive.google.com/folderview?id=0B_RkmK8mrD-ITGpoSDlxUElyb1E&usp=sharing#list
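The raptor-compare p-values above presumably come from an unpaired significance test on the two sets of runs. The stdlib-only sketch below (Welch's t statistic with a normal tail approximation, which is reasonable at n = 30; not raptor-compare's actual implementation) shows the idea:

```python
# Stdlib-only sketch of a raptor-compare-style significance check:
# Welch's t statistic with a normal approximation for the two-sided
# p-value. Illustrative only, not raptor-compare's actual code.
import math
import statistics

def welch_p(a, b):
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / na + vb / nb)          # standard error of the diff
    t = (statistics.mean(b) - statistics.mean(a)) / se
    # Two-sided tail of the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# Illustrative USS samples (MiB) for a baseline and a regressed build.
base = [16.6, 16.7, 16.8, 16.7, 16.6, 16.9, 16.5, 16.7, 16.8, 16.6]
new  = [18.4, 18.5, 18.6, 18.4, 18.5, 18.3, 18.6, 18.5, 18.4, 18.5]
print(f"p = {welch_p(base, new):.4f}")  # prints p = 0.0000
```

With a ~1.8 MiB mean shift and ~0.1 MiB scatter, the p-value is effectively zero, which is why nearly every row in the table above is starred as significant.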
(In reply to Julien Wajsberg [:julienw] from comment #29)
> Oleg, did you use "-m" when getting the memory dumps with
> get_about_memory.py ?

Nope, should I? Does it force a GC before capturing data? I'm wondering how we should correlate this with the way Raptor captures numbers and with real usage.

Anyway, with the -m option I see a smaller difference (~1.6MB) between Gecko v2.2 + Gaia v2.2 and Gecko v2.5 + Gaia v2.2 (I've tried several times; the diff varies between 0.8MB and 1.7MB), and ~1MB between Gecko v2.5 + Gaia v2.2 and Gecko v2.5 + Gaia v2.5 (I've mentioned in bug 1197231 comment 33 where this comes from).

As previously, results can be found at https://drive.google.com/folderview?id=0B_RkmK8mrD-ITGpoSDlxUElyb1E&usp=sharing#list (in the "with -m parameter" subfolder)
Yes, -m forces a GC.

I know Gecko also does something when putting an app in the background (e.g. a GC can free image data). So maybe it's useful to check this as well: press home, then fetch the about:memory data with -m.
Wow, cool! All your results are statistically significant! A 1.7MB USS regression between the Gecko for 2.2 and 2.5 is a major regression IMHO.
We should also investigate the 43ms visuallyLoaded and 132ms fullyLoaded Gecko regressions.
> * There are two new compartments: compartment([System Principal],
> processChildGlobal) and compartment([System Principal], Addon-SDK);
> * There is new "layout ─> rule-processor-cache" that per bug 77999 should
> have improved memory consumption, but I see +0.4mb in diff because of that
> (maybe it reduced memory in another place....).
> 
> Maybe they are totally fine, we need experienced eye here :)
> 
> Nicholas could you please help us here, at least we need to figure out if we
> can do anything on the Gecko side and how to move this forward?

heycam knows the most about the rule-processor-cache. I suspect it will be hard to do anything about that, and there is a reasonable chance that the increase is offset elsewhere.

I don't know anything about the new compartment, unfortunately.
Flags: needinfo?(n.nethercote)
Hey Heycam, maybe you can help us, especially by looking at comment 27 and the following comments?

Thanks!
Flags: needinfo?(cam)
Yes, any memory apportioned to layout/rule-processor-cache should be offset by corresponding drops in other measurements (and if you've got more than one page open in the process, more than offset); I think it used to be allocated to each page's layout/pres-shell.

I don't know much about how processes work in Firefox OS, but if we have a process per app, then I imagine bug 77999 doesn't help like it does in Firefox itself.  Unless we have the relevant rule processor in the cache before we spawn the process from Nuwa.

(In reply to Eric Rahm [:erahm] from comment #14)
> - 65,488 unreported bytes from |nsCSSSelector::AddAttribute|
...
> 4) Some CSS attributes are not being reported

Filed bug 1216362 for that.
Flags: needinfo?(cam)
Thanks for the input Cameron!

So having in mind our no-regression FxOS policy, the question is what to do next :)

Ken, Bobby: could you please help us find someone on the Gecko side who can take a deeper look at all our profiles/raptor results (or maybe capture new profiles with more platform detail), outline the things that got worse between Gecko v2.2 and Gecko v2.5 (we used the same Gaia for measurements), and maybe suggest how things can be improved? See comment 27 and below.

We're happy to assist with anything on Gaia side if it speeds up investigation.

Thanks!
Flags: needinfo?(kchang)
Flags: needinfo?(bchien)

Comment 39

3 years ago
Kanru, can anyone on your team help?
Flags: needinfo?(kchen)

Updated

3 years ago
Whiteboard: [MemShrink:meta] → [MemShrink:meta][Profile-wanted]

Updated

3 years ago
Flags: needinfo?(kchang)
Removing bug 1176502 as a dependency since we will not block 2.5 on this.
No longer depends on: 1176502
Resolving; all dependencies are closed.
Status: NEW → RESOLVED
Last Resolved: 3 years ago
Resolution: --- → FIXED
Mmm, but we still have "Major memory usage regressions in v2.5" and we're still waiting for Gecko help (hence [Profile-wanted] in the whiteboard).

Do you have another similar bug that we should use/follow instead? If not, let's reopen this one.
Flags: needinfo?(doliver)
Makes sense, re-opening pending comment from Bobby or Kanru.
Status: RESOLVED → REOPENED
Flags: needinfo?(doliver)
Resolution: FIXED → ---
We have bug 1207355 marked 2.5+ and identified as one of the reasons for the memory regression seen in apps that use a lot of images.

See https://bugzilla.mozilla.org/show_bug.cgi?id=1211393#c18
Thanks Punam for the pointer!

I had heard of this, but we still use moz-samplesize a lot in the SMS app, so I think we're not as badly impacted as the Gallery app.

Updated

3 years ago
See Also: → bug 1219103

Comment 46

3 years ago
I've separated the investigation out into bug 1219103. Resolving this bug for the 2.5 release.
Status: REOPENED → RESOLVED
Last Resolved: 3 years ago
Flags: needinfo?(kchen)
Flags: needinfo?(bchien)
Resolution: --- → FIXED