Open Bug 1322558 Opened 6 years ago Updated 4 months ago

add actual profiler markers to generated ipdl code (mainly for sync messages)

Categories

(Core :: IPC, defect, P3)

People

(Reporter: froydnj, Unassigned)


Discussion during the profiler workshop today indicated that we don't have Gecko profiler markers for IPC messages, particularly sync ones.  We have PROFILER_LABEL, but that's mostly useful for stack unwinding purposes, not for figuring out where interesting IPC messages are.  We should add actual profiler markers so the profiler can display a richer set of information, rather than requiring humans or Cleopatra to infer where problematic message-passing is taking place.
Do we want just markers here, or do we want something like GeckoProfilerTracingRAII so we can see when sync messages start and stop?  Or do we just want to drop markers for async messages, and leave something like GeckoProfilerTracingRAII for bug 1322553?
Flags: needinfo?(mstange)
Separate start + stop markers for async messages seem unnecessary. A simple marker would suffice for them.
I don't know how useful they'd be, though. Async messages themselves aren't really a performance problem until you have lots and lots of them, but then they'll show up in the samples; and if you're interested in finding out why something was called on a different thread or in a different process, TaskTracer is probably the better way forward.

GeckoProfilerTracingRAII seems like the right thing to use for sync messages. We can leave that to bug 1322553, sure.
Flags: needinfo?(mstange)
A simple marker for async messages can be very valuable when you're looking for cross-process (or cross-thread) latency. You have the one marker when the message was sent, and the other marker when it's received.
(In reply to Benjamin Smedberg [:bsmedberg] from comment #3)
> A simple marker for async messages can be very valuable when you're looking
> for cross-process (or cross-thread) latency. You have the one marker when
> the message was sent, and the other marker when it's received.

True, but I think this is something that TaskTracer is supposed to be used for. My understanding is that TaskTracer's raison d'être is to be able to examine the life-cycle of a runnable (including IPC messages), from the point of instantiation, to dispatch, to receipt, to execution. We seem to be putting some elbow grease into making TaskTracer work correctly for Desktop (example: bug 1167634), and to have the new Cleopatra be able to display a visualization for it (https://github.com/mstange/cleopatra/issues/30). I suspect we should double-down on that effort rather than adding new markers, unless I'm missing a use case that TaskTracer won't satisfy.
TaskTracer currently only records one "dispatch" timestamp, it does not record separate timestamps for "sent on one side" vs "received and enqueued on the other side". And I don't actually know which one of these "dispatch" is.
There was a big project to capture timing for all different phases of IPC dispatch in bug 853864, but it looks like that has stalled.
(In reply to Markus Stange [:mstange] from comment #5)
> TaskTracer currently only records one "dispatch" timestamp, it does not
> record separate timestamps for "sent on one side" vs "received and enqueued
> on the other side". And I don't actually know which one of these "dispatch"
> is.

I don't remember, but doesn't it record separate tasks for sending, receiving, and dispatching? Thinker?

New IPC labeling was added in bug 1319669
Flags: needinfo?(tlee)
For sync IPC, TaskTracer has no additional timestamp for reply messages, since reply messages do not create runnables/tasks.  So one solution is to add labels for sending and receiving reply messages in MessageChannel, instead of in the IPDL code generator.
Flags: needinfo?(tlee)
Priority: -- → P2
Moving to p3 because no activity for at least 1 year(s).
See https://github.com/mozilla/bug-handling/blob/master/policy/triage-bugzilla.md#how-do-you-triage for more information
Priority: P2 → P3
Severity: normal → S3