Bug 1384178 (Closed): Opened 7 years ago, Closed 6 years ago

assemblePing can be slow, mostly due to cloneInto

Categories

(Toolkit :: Telemetry, enhancement, P2)


Tracking


RESOLVED WORKSFORME
Performance Impact low
Tracking Status
firefox57 --- fix-optional

People

(Reporter: mstange, Unassigned)

Details

(Keywords: perf)

Here's a profile of 37ms of jank due to assemblePing: https://perfht.ml/2eKVcQg

There's a GC in the reading part of the cloneInto, and the writing part seems to be slow due to CrossCompartmentWrappers.
This is from here:
https://dxr.mozilla.org/mozilla-central/rev/7d2e89fb92331d7e4296391213c1e63db628e046/toolkit/components/telemetry/TelemetryController.jsm#410
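
For reference, the cloneInto() call in question is roughly of this shape (a sketch with illustrative names; the exact code in TelemetryController.jsm differs):

// Sketch: clone the caller-provided payload into the Telemetry module's own
// scope so that later mutations by the caller cannot affect the assembled
// ping. This Cu.cloneInto() call is what shows up in the profile.
const myScope = Cu.getGlobalForObject(this);
let pingData = {
  // ... other ping fields (type, id, creationDate, ...) ...
  payload: Cu.cloneInto(aPayload, myScope),
};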

We have had issues in the past where the payload object (passed in from external, non-core-Telemetry call sites) was modified after being passed into assemblePing() but before being sent out to our servers.
We need to keep making sure this doesn't happen.

Markus, do you know of any alternative approaches here (like more efficient cloning, or freezing this kind of object)?
Or do you know who might?
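
For context, "freezing" here would mean something like a recursive Object.freeze() over the payload, e.g. (a minimal sketch, not something the code currently does):

// Minimal deep-freeze sketch (illustrative): recursively freeze the payload
// so that later writes through any reference throw in strict mode instead of
// silently changing the ping.
function deepFreeze(obj) {
  for (let key of Object.getOwnPropertyNames(obj)) {
    let value = obj[key];
    if (value && typeof value == "object") {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

Note this would freeze the caller's object too, which may not be acceptable for every call site.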
Flags: needinfo?(mstange)
Kris might.
Flags: needinfo?(mstange) → needinfo?(kmaglione+bmo)
Most of the time spent there is in proxy and GC overhead.

Using StructuredCloneHolder would definitely help with that, assuming the whole object tree you're cloning is in the same compartment, since it enters the object compartment before cloning, so generally doesn't need to create wrappers. It would help even more if you don't actually need to immediately deserialize the cloned data.
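
A rough sketch of what that could look like (assuming the chrome-only StructuredCloneHolder constructor takes the data directly; the exact signature may differ between Firefox versions):

// Sketch: serialize the payload once, up front, and keep only the serialized
// form around. Deserialize into our own global only when the data is
// actually needed, e.g. right before building the submission body.
let holder = new StructuredCloneHolder(aPayload);

// ... later, when the ping is actually sent:
let payload = holder.deserialize(Cu.getGlobalForObject(this));
let body = JSON.stringify(payload);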
Flags: needinfo?(kmaglione+bmo)
(In reply to Kris Maglione [:kmag] from comment #3)
> Using StructuredCloneHolder would definitely help with that, assuming the
> whole object tree you're cloning is in the same compartment, since it enters
> the object compartment before cloning, so generally doesn't need to create
> wrappers. It would help even more if you don't actually need to immediately
> deserialize the cloned data.

Without larger changes, we can only say that accessing the cloned data (for stringification) often happens nearly immediately, but it can also take up to a few minutes.
There is definitely no write access anymore, though.

Do you know how we could improve things from our JS code specifically?
Flags: needinfo?(kmaglione+bmo)
Whiteboard: [qf]
Georg, does this bug affect users on the release channel by default? And what is the impact? It wasn't obvious to us whether this should be a P1 or not, given where we currently are in the timeline of the 57 release...
Flags: needinfo?(gfritzsche)
Ping sizes on the release channel are substantially smaller. We also did a number of other optimizations for Telemetry.

I think this is not critical for the 57 release.
Flags: needinfo?(gfritzsche)
Priority: -- → P2
Whiteboard: [qf] → [qf:p3]
With the recent improvements we made, we should get some fresh data on the performance here.
Maybe this is good enough now, or the new data will point to different functions to optimize.
Priority: P2 → P1
Since bug 1186409, components share a common global. Does that change the landscape for cloned data?
Keywords: perf
Priority: P1 → P2
We improved various things around the Telemetry core in the QF (Quantum Flow) context.
Without further data showing a remaining problem, I'm closing this bug.
Please reopen or file a new bug if this is still an issue.
Status: NEW → RESOLVED
Closed: 6 years ago
Flags: needinfo?(kmaglione+bmo)
Resolution: --- → WORKSFORME
Performance Impact: --- → P3
Whiteboard: [qf:p3]