Closed Bug 1384178 Opened 3 years ago Closed 3 years ago
Ping assembly can be slow, mostly due to cloneInto
Here's a profile of 37ms of jank due to assemblePing: https://perfht.ml/2eKVcQg
There's a GC in the reading part of the cloneInto, and the writing part seems to be slow due to CrossCompartmentWrappers.
This is from here: https://dxr.mozilla.org/mozilla-central/rev/7d2e89fb92331d7e4296391213c1e63db628e046/toolkit/components/telemetry/TelemetryController.jsm#410
We actually had issues before where the payload object (passed in from external, non-core-Telemetry call sites) was modified after being passed into assemblePing() and before being sent out to our servers, so we need to keep making sure that doesn't happen. Markus, do you know of any alternative approaches here (such as more efficient cloning, or freezing this kind of object)? Or do you know who might?
Flags: needinfo?(mstange) → needinfo?(kmaglione+bmo)
Most of the time spent there is in proxy and GC overhead. Using StructuredCloneHolder would definitely help with that, assuming the whole object tree you're cloning is in the same compartment: it enters the object's compartment before cloning, so it generally doesn't need to create wrappers. It would help even more if you don't actually need to immediately deserialize the cloned data.
(In reply to Kris Maglione [:kmag] from comment #3)
> Using StructuredCloneHolder would definitely help with that, assuming the
> whole object tree you're cloning is in the same compartment, since it enters
> the object compartment before cloning, so generally doesn't need to create
> wrappers. It would help even more if you don't actually need to immediately
> deserialize the cloned data.

Without larger changes, all we can say is that access to the cloned data (stringification) usually happens nearly immediately, but can also take up to a few minutes. There is definitely no write access anymore, though. Do you know how we could improve things from our JS code specifically?
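The "serialize at assembly time, consume later" pattern discussed above can be illustrated in plain JS with the standard structuredClone API. This is only a sketch of the decoupling idea under assumed names (`assemblePing` here is a simplified stand-in, not the real TelemetryController code); StructuredCloneHolder is the privileged Gecko API that additionally keeps the data serialized until it is actually needed:

```javascript
// Snapshot the payload when the ping is assembled, so stringification
// that happens later (possibly minutes later) sees the data as it was
// at assembly time, regardless of what the caller does afterwards.
function assemblePing(type, payload) {
  return { type, payload: structuredClone(payload) };
}

const payload = { metrics: { clicks: 3 } };
const ping = assemblePing("main", payload);

payload.metrics.clicks = 99; // caller mutates after the fact

console.log(JSON.stringify(ping.payload)); // → {"metrics":{"clicks":3}}
```

Unlike structuredClone, which deserializes immediately, a StructuredCloneHolder-style approach would let the pickled bytes sit untouched until stringification, which is what helps when access is deferred by minutes.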
Georg, does this bug affect users on the release channel by default, and what is the impact? It wasn't obvious to us whether this should be a P1, given where we currently are in the timeline of the 57 release.
Release ping sizes are substantially smaller, and we also did a number of other optimizations for Telemetry. I don't think this is critical for the 57 release.
With the recent improvements, we should gather fresh performance data here. Maybe it's good enough now, or the data will point to different functions to optimize.
Priority: P2 → P1
Since bug 1186409, components share a common global. Does that change the landscape for cloned data?
We improved various things around the Telemetry core as part of the QF (Quantum Flow) work. Without further data, I'm closing this bug. Please reopen or file a new bug if this is still an issue.
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → WORKSFORME