Closed Bug 1732798 Opened 2 months ago Closed 1 month ago

Fission memory comparison tool


(Testing :: AWSY, task, P1)



(Fission Milestone: MVP)


(Reporter: mccr8, Assigned: mccr8)




(3 files)

I've been working on a test that can be used with PerfHerder to compare the memory usage of individual sites with and without Fission. The basic idea is to open a web site, do memory minimization, then get a memory report, then open a new tab and close the old one. This gives you a subtest for each individual web site.

With this setup, you can do two pushes, one with e10s and one with Fission, except that you have to enable Fission manually so that the push still shows up as e10s, which lets PerfHerder compare the two pushes directly. Then you can use PerfHerder to compare the results for the two pushes to get the difference in memory usage between e10s and Fission. Hopefully I'll get around to cleaning up the test and landing it.

The main trick here, besides messing around with the way tabs open, is that you have to dynamically generate the checkpoints for AWSY, because they depend on the set of URLs you are testing. There's a bit of hassle because you need to come up with a file name: you can't just use the URL, and you can't just use the site name, because you have to deal with duplicates (e.g. TP6 has both Google Docs and Gmail).
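
The checkpoint-name generation described above could be sketched roughly like this. This is an illustrative sketch, not AWSY's actual implementation; the function name and naming scheme are assumptions.

```python
# Hypothetical sketch: derive unique, filesystem-safe checkpoint names
# from a list of test URLs. Duplicate hosts (e.g. two recordings of the
# same Google property) get a numeric suffix, mirroring how the results
# above show both "docs-google" and "docs-google1".
from urllib.parse import urlparse

def checkpoint_names(urls):
    names = []
    seen = {}
    for url in urls:
        # Turn the hostname into something safe to use as a file name.
        base = urlparse(url).hostname.replace(".", "-")
        count = seen.get(base, 0)
        seen[base] = count + 1
        # Append a numeric suffix for duplicates of the same host.
        names.append(base if count == 0 else "%s%d" % (base, count))
    return names
```

For example, two Google Docs URLs plus a Gmail URL would yield three distinct checkpoint names rather than colliding.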

Preliminary comparison for TP6 site recordings, on Linux.

Less than 1% change (< +-0.2% mostly): bing, buzzfeed, docs-google, docs-google1, en-wikipedia-org, facebook, google, microsoft, netflix, paypal, tumblr

Under 2% increase: linkedin, mail-google

5-10% increase (rounded): instagram (5%), mail-yahoo (6%), ebay (7%), twitch-tv (7%), reddit (7%), imgur (8%), expedia (10%)

Larger increases (rounded): espn (14%), nytimes (16%), office-live (16%), outlook-live (17%), amazon (18%), youtube (18%), pinterest (19%), imdb (21%), fandom (21%), cnn (22.2%)

Decreases which must be some kind of measurement issue: marvel-fandom (-3%), twitter (-10%).

Mostly in line with what I'd expect based on how "iframe-y" the sites are, though I'm not sure why YouTube is up so much.

Fission Milestone: --- → MVP
Whiteboard: fission-soft-blocker

Here's the comparison for Windows.

I also added some code to record how many processes there are, and the increases are mostly in line with that (~11MB per process). Gmail seems to go up by 8MB despite having the same number of processes, so maybe there's something worth looking at there.

The numbers on Windows look better, which I think is expected. At least one reason is that Windows deals with relocating executables in a way that reduces per-process overhead.

Approximately no change: Bing, Buzzfeed, Google Docs, Google Docs presentation, Wikipedia, Facebook, Google, Instagram, LinkedIn, Microsoft, Netflix, PayPal, Tumblr, Twitter.

4% or under increase: Reddit (1.9%), YouTube (2%), Marvel Fandom (2.2%), Gmail (2.2%), CNN (2.5%), Office 365 (2.6%), Yahoo Mail (2.9%), Outlook (3.4%), Imgur (3.9%), NYTimes (4%)

The rest: IMDB (4.6%), ESPN (4.9%), Twitch (5.3%), Expedia (5.3%), EBay (6.3%), Pinterest (7%), Amazon (8.2%), Fandom (8.7%)

I'm not sure why Gmail shows so much of an increase. The values are kind of all over the place, so maybe it just isn't very stable.

At Hsin-Yi's suggestion, I added some code to extract the process count from the memory reports. I did a single run on Linux. I'd expect that the process counts would be stable across OSes, but I haven't checked yet.
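
Extracting the process count can be done by counting distinct `process` fields in a memory report. A minimal sketch, assuming the gzipped JSON format that about:memory writes (a top-level `reports` array whose entries each carry a `process` string); the function name is made up for illustration:

```python
# Hedged sketch: count distinct processes in an about:memory-style
# memory report. Each entry in the "reports" array names the process
# it was measured in, so the number of unique names is the process count.
import gzip
import json

def count_processes(report_path):
    with gzip.open(report_path, "rt", encoding="utf-8") as f:
        data = json.load(f)
    return len({r["process"] for r in data["reports"]})
```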

With e10s, all but one page had the same number of processes, 5. That's the main process, one content process, the privileged about process, the extension process, and the RDD process. The exception was Outlook Live that had an extra content process with a service worker.

For Fission, the numbers varied more, of course. Many sites had 5 processes, just the same as e10s. This makes sense, because these web sites don't have any cross-origin iframes.

The rest of them:
6: CNN, Yahoo Mail, Marvel Fandom, Office 365, Pinterest, Reddit, YouTube
7: Amazon, EBay, ESPN, Expedia, IMDB, Imgur, NY Times, Outlook, Twitch
8: Fandom

The extra YouTube process is for Google, and it looks like some about:neterror page relating to login.

As you'd expect, the more processes a site had, the higher the increase when enabling Fission. Sites that didn't need extra processes also didn't use more memory.

I haven't looked at all of the recordings, but the Buzzfeed one is for the state of the site before you opt in to advertising or tracking or whatever, which is why there is no Fission overhead and only one process.

Jesup pointed out that the number of processes for CNN seems awfully low. When I load the same CNN story from the live site, I get more like 14 processes. Loading up the recording, it looks like it was taken without the "Accept cookies?" dialog being clicked through. So I guess we're basically getting the site without any ads, which is similar to what I saw with Buzzfeed.

I went through the TP6 websites that I have logins for, loaded the live sites, and counted how many content processes there are (my previous list counted total processes, so you'll have to add 5 to these numbers to match the previous ones).

3: Microsoft
4: Amazon, Expedia
7: Fandom, Twitch
9: EBay, NYTimes
10: BuzzFeed
15: CNN
17: ESPN, Marvel Fandom

Priority: -- → P3

Here's the rough increase in the number of content processes from the recording to the live site (for sites that increased, that I had a login for):
+2: Microsoft, IMDB
+3: Fandom
+4: Twitch
+6: EBay, NYTimes
+9: BuzzFeed
+13: CNN
+14: ESPN
+15: Marvel Fandom

On at least BuzzFeed, CNN and ESPN, clicking accept on the cookie notice does not load the ads. On Marvel Fandom, I think the page got revamped since the recording, so that's likely a factor. I don't see a cookie notice on that page, but I don't see any ads either, so I'm not sure what is going on there. Surely they used to have some ads.

Severity: -- → N/A
Priority: P3 → P1
Whiteboard: fission-soft-blocker

(In reply to Andrew McCreight [:mccr8] from comment #5)

Here's the comparison for Windows.

These % increases are roughly what we'd expect, right?

Flags: needinfo?(continuation)

(In reply to Andrew Overholt [:overholt] from comment #12)

These % increases are roughly what we'd expect, right?

Sort of. The main issue with these values is that the TP6 recordings don't include ad iframes, so the values are a lot lower than we'd see on the real versions of the pages, because there are that many fewer processes. I'm currently trying to figure out the best way to work around that, which seems to involve setting up a build on a Windows machine so I can run tests on the live sites, and hoping that the readings are stable.

Flags: needinfo?(continuation)

Here's a new version of the patch. This disables the use of recordings, so live sites are used. It also includes web process counting.

I finally got around to getting some measurements on some of the iframe-heavy sites on my Windows machine at home. This is based on 5 runs, with whatever averaging or outlier-discarding AWSY does, but be warned that this is for the live sites: sometimes there were videos and sometimes there weren't, so I'm not sure how bad the variance is yet.

Number of web content processes with Fission: CNN: 19. Marvel Fandom: 24. BuzzFeed: 11. ESPN: 15. Without Fission, all have 1, as you'd expect.

Here are the resident memory numbers (truncated instead of rounded because I was feeling lazy). The number to the left of the arrow is with e10s, while the number to the right of the arrow is with Fission. Then I give the % increase for total browser memory, rounded.

CNN. 610MB --> 787MB. +29%
Marvel Fandom. 548MB --> 796MB. +45%
BuzzFeed. 507MB --> 646MB. +27%
ESPN. 558MB --> 718MB. +29%

That works out to around 10 to 14MB per additional content process. Base resident unique memory on Windows is about 9.8MB, so that more or less lines up with what we've measured.
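
The per-process arithmetic above can be checked directly: divide each site's memory increase by the number of web content processes Fission adds over e10s's single content process. The numbers below come from this comment; the function name is just for illustration.

```python
# Sanity-check the "10 to 14MB per additional content process" figure.
# e10s uses one web content process, so Fission adds (procs - 1).
def mb_per_extra_process(e10s_mb, fission_mb, fission_procs):
    return (fission_mb - e10s_mb) / (fission_procs - 1)

sites = {
    # site: (e10s MB, Fission MB, Fission web content processes)
    "CNN":           (610, 787, 19),
    "Marvel Fandom": (548, 796, 24),
    "BuzzFeed":      (507, 646, 11),
    "ESPN":          (558, 718, 15),
}
for name, args in sites.items():
    print("%s: %.1f MB per extra process" % (name, mb_per_extra_process(*args)))
```

CNN comes out just under 10MB per process and BuzzFeed just under 14MB, bracketing the 10-14MB range quoted above.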

I got some additional measurements on my Windows machine. As above, these loaded the live sites, and combined the results of 5 loads+measurements for each page. These are pages that had fewer iframes than the previous set, and also did not require logins to get a reasonable page. I excluded Netflix because while there is a logged-out page, it is very trivial. I also excluded Reddit because it kept crashing while getting a memory report with e10s (but not Fission) for some reason. The Twitch page is a little weird because the video is expired.

Number of content processes. With e10s, all pages had 1 content process, as you'd expect. Number of content processes with Fission: 1: Bing, Google, Imgur, Twitter, Wikipedia. 2: YouTube. 3: Microsoft. 4: Amazon, Expedia. 5: Twitch. 6: EBay. 8: IMDB, NYTimes.

Here's the change in memory from e10s to Fission, with the % increase in total memory, ordered by the number of content processes.

Bing (320MB --> 320MB; +0%), Imgur (431MB --> 428MB; ~0%), Twitter (399MB --> 406MB; +1%), Wikipedia (381MB --> 379MB; ~0%), Google (328MB --> 341MB; +4%), YouTube (427MB --> 443MB; +4%), Microsoft (352MB --> 370MB; +5%), Amazon (367MB --> 400MB; +9%), Expedia (445MB --> 488MB; +10%), Twitch (391MB --> 435MB; +11%), EBay (375MB --> 430MB; +15%), IMDB (482MB --> 571MB; +18%), NYTimes (433MB --> 520MB; +20%).

As you'd expect, the memory goes up a bit when there are more content processes, roughly in proportion to the process count.

One outlier is the increase for Google of around 13MB, despite there being no additional content processes. Looking at the about:memory logs, maybe that's from the GPU process, but I don't have a real explanation for it.

I think cpeterson has the numbers we need for now.

Closed: 1 month ago
Resolution: --- → FIXED