Closed Bug 1374716 Opened 7 years ago Closed 7 years ago

Surprisingly high memory usage from exporting WebIDL properties from a JSM

Categories

(Core :: XPConnect, enhancement, P2)

Tracking

RESOLVED INCOMPLETE

People

(Reporter: mccr8, Assigned: mccr8)

References

Details

(Whiteboard: [MemShrink])

Attachments

(1 file)

Last week, I compared two AWSY runs with the C++ patches from bug 1186409: one with my Services.jsm changes, one without. Adding the Services.jsm changes, which basically just amount to exporting a few extra properties, increases JS memory by a total of 3 MB to 4 MB, which seems like a lot.

https://treeherder.mozilla.org/perf.html#/comparesubtest?originalProject=try&originalRevision=2efae8dd933ff25e4847b5ff9344381035fc7e88&newProject=try&newRevision=34d866e705d40c65a13f84cca09c4300a4c74180&originalSignature=314aaa2e7701c524790fc1b660ca8a16b7d241e2&newSignature=314aaa2e7701c524790fc1b660ca8a16b7d241e2&framework=4

If we approximate that there are 50 JSMs in each of 4 content processes, plus 350 in the parent process (which also seems like a lot...), and assume that every JSM imports Services.jsm, that's 4 × 50 + 350 = 550 JSMs, for an overhead of around 7 KB per JSM.

I couldn't reproduce this locally with a larger number of random exports, so there may be something about these particular properties that I am exporting. I'll attach the patch for Services.jsm. I'm exporting atob, btoa, TextDecoder, TextEncoder, ThreadSafeChromeUtils, and some WebExtension things (MatchGlob, MatchPatternSet, WebExtensionPolicy).

I can work around this for bug 1186409, but if we're really getting that much overhead in some cases, it would be worth investigating. Bill suggested bisecting to figure out if a particular property is responsible.
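For reference, the export side of a change like this is small. Here is a rough sketch of what I mean by "exporting a few extra properties"; the real change is in the attached patch, and the mechanics described in the comments are my assumption about how the module loader handles these names:

// Services.jsm (hypothetical excerpt): re-export some WebIDL bindings.
// In a JSM, names like TextDecoder resolve lazily on the shared system
// global, so listing them in EXPORTED_SYMBOLS should let Cu.import()
// copy them onto each importer's scope.
this.EXPORTED_SYMBOLS = [
  "Services",
  "atob", "btoa",
  "TextDecoder", "TextEncoder",
  "ThreadSafeChromeUtils",
  "MatchGlob", "MatchPatternSet", "WebExtensionPolicy",
];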
I'm going to focus more on the content process memory increase, because that affects e10s scaling. I looked at the AWSY reports from start+30 seconds, which are the most stable. There are two content processes; the numbers I'll give are totals across both.

It looks like there's about 0.04 MiB of additional content process overhead from xpconnect/proto-iface-cache, 0.43 MiB of zone overhead, and 1 MiB of nursery-committed.

The nursery-committed increase is odd. In all six memory reports without the Services.jsm changes, we have 2 MiB of nursery-committed in the content processes; in all six reports with the changes, we have 3 MiB. Maybe the additional allocation in each zone is causing us to grow the nursery, and 30 seconds isn't long enough for it to reduce the nursery size again.

Most of the zone overhead is something like 0.02 MiB per compartment, under class(Function)/objects.
(In reply to Andrew McCreight [:mccr8] from comment #1)
> The nursery-committed increase is odd. [...] Maybe the additional
> allocation in each zone is causing us to grow the nursery, and 30 seconds
> isn't long enough for it to reduce the nursery size again.

You can do a try run with this tweaked (maybe 90s); I think just changing the settleWait time in the testvars [1] should do it. I know that some GC heuristics depend on user activity, so that might impact the nursery as well; we simulate keypresses after loading pages [2] but don't do that before the initial "start settled" measurement [3]. You might want to try adding that as well.

[1] http://searchfox.org/mozilla-central/rev/2bcd258281da848311769281daf735601685de2d/testing/awsy/conf/testvars.json
[2] http://searchfox.org/mozilla-central/rev/2bcd258281da848311769281daf735601685de2d/testing/awsy/awsy/test_memory_usage.py#325
[3] http://searchfox.org/mozilla-central/rev/2bcd258281da848311769281daf735601685de2d/testing/awsy/awsy/test_memory_usage.py#318
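A minimal sketch of that tweak, assuming the testvars key is named settleWaitTime and takes seconds (check [1] for the real key name and unit):

{
  "settleWaitTime": 90
}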
Here's the patch. I did some further local bisection, which indicated that only the Services.jsm changes affect memory usage.
Tell me if I'm wrong in assuming you're going to continue working on this, Andrew.
Assignee: nobody → continuation
Priority: -- → P2
It looks like there is an increase in memory from exporting the 6 WebIDL constructors, but not btoa or atob. Ignoring the nursery increase, there's ~0.44 MiB of additional zone memory. There are about 90 system compartments across the two content processes, so that's about 800 bytes per constructor per compartment (0.44 MiB ÷ 90 compartments ÷ 6 constructors). I wouldn't have thought that wrappers would show up as class(Function)/objects.

Boris, do you have a rough sense of what the expected memory cost should be for wrapping a WebIDL constructor from one compartment into another?
Flags: needinfo?(bzbarsky)
Exporting WebIDL constructors from JSMs is an odd thing to do, and skimming through the tree I can't find any other place that does it, so if that turns out to be the cause I don't think we need to do anything here. I'm just trying to understand what is happening.
Summary: Surprisingly high memory usage from exporting additional properties from a JSM → Surprisingly high memory usage from exporting WebIDL properties from a JSM
> do you have a rough sense of what the expected memory cost should be for wrapping
> a WebIDL constructor from one compartment into another

There are a few sources of possible cost here:

1) Creation of the actual constructor/prototype/function objects. These are created lazily. I _assume_ this is a one-time cost independent of how many things import Services.jsm, but it might be one we were not paying before. How much memory this takes depends on how many functions hang off the relevant constructors and prototypes and whatnot. This is where class(Function) objects may come in.

2)  Xrays for the things being exported.  These would be per-consumer, I guess, but shouldn't be very big.  Certainly < 100 bytes.
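To make those two costs concrete, here's a sketch of what a consumer compartment ends up doing (assuming the patched Services.jsm exports TextDecoder, per the list in comment #0):

// In some other JSM / system-privileged compartment:
Components.utils.import("resource://gre/modules/Services.jsm");

// First use forces the lazy, one-time creation of the TextDecoder
// constructor/prototype objects in the module's global -- cost (1).
// This compartment then sees them through per-consumer (Xray)
// wrappers -- cost (2).
let decoder = new TextDecoder("utf-8");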

It might be worth logging how many calls to ResolveSystemBinding (or SystemGlobalResolve or BackstagePass::Resolve) there are with the relevant ids, and whether it's always the same JSObject* that we're resolving on, just in case I'm wrong and we end up resolving this stuff on many globals.
Flags: needinfo?(bzbarsky)
The additional memory usage I am seeing is associated with compartments that import Services.jsm. (Well, Services itself uses a little more memory but that's less of an issue.) Hopefully I'll have some time to figure out exactly what this is.
I'm not 100% sure what is going on here, but people don't normally do this, and I can work around it, so I'll just close this.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → INCOMPLETE