Raptor hangs when instructed to collect JSTracer profiles

RESOLVED FIXED

Status

Type: defect
Priority: P2
Severity: normal
Reported: 4 months ago
Last updated: Last month

People

(Reporter: mgaudet, Unassigned)

Tracking

(Blocks 1 bug)

Firefox Tracking Flags

(Not tracked)

Details

STR

  1. Patch runner.js to include "jstracer" [0]
  2. Attempt to collect a profile with the JSTracer enabled using raptor: JS_TRACE_LOGGING=1 ./mach raptor-test --gecko-profile -t raptor-tp6-docs-firefox

Expected: Profile with JSTracer results is produced
Actual: Raptor test hangs at

19:52:41     INFO -  raptor-control-server received webext_status: retrieving gecko profile

CPU usage seems very low during this: it's not clear anything is happening at all.

[0] https://searchfox.org/mozilla-central/rev/0376cbf447efa16922c550da3bfd783b916e35d3/testing/raptor/webext/raptor/runner.js#323

Assignee: nobody → jporter+bmo
Status: NEW → ASSIGNED

Looking at the console in the browser, it appears there's an error here: https://searchfox.org/mozilla-central/rev/ee3905439acbf81e9c829ece0b46d09d2fa26c5c/toolkit/components/extensions/ExtensionParent.jsm#936-941

The profile being sent from the browser is a lot bigger too (~9MB without tracelogger, ~300MB with). Could that be causing the issue here?

I already see a delay of a couple of minutes when profiling webaudio benchmark tests and waiting for the symbolicated profile afterward. Those are 27MB in total for 3 cycles. At 300MB I would expect the process to take far longer.

In this case, it doesn't just take a long time; it actually errors out before sending the data across the wire. This appears to be because the maximum allowable size is 256MB, as defined here: https://searchfox.org/mozilla-central/rev/b59a99943de4dd314bae4e44ab43ce7687ccbbec/ipc/chromium/src/chrome/common/ipc_channel.h#50.

I'm not sure why this maximum exists (or whether we should have one at all). The easy fix is to bump up the maximum size, but we could also look at splitting the message up, or at reducing the size of the tracelogger data (though I'm not sure the latter is possible).
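To illustrate the message-splitting idea, here is a minimal sketch of slicing an oversized payload into chunks that each stay under the IPC limit. The helper name and chunk size are purely illustrative, not an existing Gecko API:

```javascript
// Hypothetical sketch: split a large byte payload into chunks small enough
// to send individually. 256MB mirrors ipc_channel.h's maximum message size.
const MAX_IPC_BYTES = 256 * 1024 * 1024;

function chunkPayload(bytes, chunkSize = MAX_IPC_BYTES / 2) {
  const chunks = [];
  for (let offset = 0; offset < bytes.length; offset += chunkSize) {
    // subarray() creates views, not copies, so chunking itself is cheap.
    chunks.push(bytes.subarray(offset, offset + chunkSize));
  }
  return chunks;
}
```

The receiving side would then need to reassemble the chunks in order before parsing, which adds protocol complexity compared with simply raising the limit.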

With bug 1550702 this should become way faster.

Depends on: 1550702

This is a Raptor issue and not related to the profiler, so I'm fixing the component.

Assignee: jporter+bmo → nobody
Status: ASSIGNED → NEW
Component: Performance → Raptor
Product: Core → Testing

(In reply to Henrik Skupin (:whimboo) [⌚️UTC+1] from comment #4)

With bug 1550702 this should become way faster.

Cool, this looks like a better plan than what I was tinkering with (gzipping profile data before sending it over the wire). I could keep working on that, but I don't know whether it would still make sense in light of bug 1550702...

Let's wait and see what bug 1550702 brings us here. We can always make further improvements afterward.

See Also: → 1551992
Priority: -- → P2

Hey Matthew, mind trying again now that bug 1550702 is fixed?

Flags: needinfo?(mgaudet)

Just tested, works great.

Status: NEW → RESOLVED
Closed: Last month
Flags: needinfo?(mgaudet)
Resolution: --- → FIXED