Consider raising the priority of network IPC messages
Categories
(Core :: Networking: HTTP, enhancement, P3)
Tracking | Status
---|---
firefox87 | affected
People
(Reporter: acreskey, Unassigned)
References
Details
(Whiteboard: [necko-triaged])
Attachments
(1 file)
376.21 KB, image/png
Bas had suggested an experiment where we increase the priority of select network IPC messages.
We had put together a small change where we added prio(mediumhigh) to the OnStartRequest, OnTransportAndData, and OnStopRequest messages.
In this case these were messages from the backgroundChannel to the content process.
In pageload tests we couldn't discern any performance improvements.
Reporter
Comment 1•4 years ago
We have better tooling now, and so I compared the same change but measured the time from nsHttpChannel::asyncOpen to nsHttpChannel::OnStopRequest.
But I again found no measurable difference (live sites, Moto G5).
Reporter
Comment 2•4 years ago
I tried one more experiment, this time increasing the priority of those messages from the httpTransaction (socket thread, I believe) to the parent process. Change here.
However, this didn't yield any measurable improvements.
Reporter
Comment 3•4 years ago
If anyone has other ideas, please let me know.
It might be useful to try the combined changes, but perhaps the messages that are actually bottlenecked are elsewhere?
Updated•4 years ago
Comment 4•4 years ago
(In reply to Andrew Creskey [:acreskey] [he/him] from comment #1)
> Created attachment 9203972 [details]
> Comparing the prio(mediumhigh) change with finer-grained metrics.
> We have better tooling now, and so I compared the same change but measured the time from nsHttpChannel::asyncOpen to nsHttpChannel::OnStopRequest.

I think you might want to measure the time from HttpChannelChild::AsyncOpen to HttpChannelChild::OnStopRequest, since your change affects the priority of IPC messages from the parent to the content process. The time spent in nsHttpChannel has nothing to do with IPC.
Comment 5•4 years ago
(In reply to Andrew Creskey [:acreskey] [he/him] from comment #2)
> I tried one more experiment, this time increasing the priority of those messages from the httpTransaction (socket thread, I believe) to the parent process. Change here. However, this didn't yield any measurable improvements.

PHttpTransaction is only used when networking over the socket process is enabled. Unfortunately, this feature is not currently turned on, so this change should have no effect.
Reporter
Comment 6•4 years ago
(In reply to Kershaw Chang [:kershaw] from comment #4)
> I think you might want to measure the time from HttpChannelChild::AsyncOpen to HttpChannelChild::OnStopRequest, since your change affects the priority of IPC messages from the parent to the content process. The time spent in nsHttpChannel has nothing to do with IPC.

Thanks Kershaw.
I had thought that measuring at the nsHttpChannel level would still capture the delays involved in the parent-to-child initiation and data transfer.
But this is better -- let me re-measure from HttpChannelChild::AsyncOpen to HttpChannelChild::OnStopRequest.
Reporter
Comment 7•4 years ago
I measured the impact of mediumhigh priority on the HttpBackgroundChannel network messages (from HttpChannelChild::AsyncOpen to HttpChannelChild::OnStopRequest, on a Moto G5).
Metric | baseline | test
---|---|---
count | 3820 | 3767
mean | 823.98 | 878.64
std | 1180.95 | 1273.87
min | 36 | 43
10% | 130.90 | 134.00
25% | 215.75 | 223.00
50% | 403.00 | 400.00
75% | 961.50 | 991.00
90% | 2138.10 | 2399.80
95% | 3074.65 | 3284.40
max | 15732 | 14917
I don't see any evidence of an improvement.
Ideally, I'd like to compare this on different hardware. And when we have Bug 1690373 ready, I think we can land such tests in CI.
But until then, I think we can close this one unless anyone has other ideas.
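As an aside, the summary table above has the shape of a pandas Series.describe() report; a minimal sketch of how such a side-by-side comparison can be produced from raw per-channel timings (the sample values below are made up for illustration, not the real measurement data):

```python
import pandas as pd

# Hypothetical per-channel completion times (AsyncOpen -> OnStopRequest);
# a real run would load thousands of samples from the test harness output.
baseline = pd.Series([36, 130, 215, 403, 961, 2138, 3074, 15732])
test = pd.Series([43, 134, 223, 400, 991, 2399, 3284, 14917])

# describe() emits count/mean/std/min/percentiles/max, matching the
# summary shown in this comment.
percentiles = [0.10, 0.25, 0.50, 0.75, 0.90, 0.95]
summary = pd.DataFrame({
    "baseline": baseline.describe(percentiles=percentiles),
    "test": test.describe(percentiles=percentiles),
})
print(summary)
```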
Reporter
Comment 8•4 years ago
Using the same process that is described in these bugs (1690402, 1690629), I've evaluated the impact of increasing the priority of PHttpBackgroundChannel and PHttpTransaction messages. Patch here.
The test measures total HTTP channel completion time as the user visits 5 popular sites (cold page load) and then revisits them (warm page load).
Results:
- Linux: 5.51% improvement in channel completion time (from network), but not found to be statistically significant
- MacOS: 3.36% improvement in channel completion time (both from network and from cache), but not found to be statistically significant
- Windows: no change
So I find those results interesting.
We've already seen many cases where the t-test used by Perfherder fails to detect significant changes because of bimodal or otherwise noisy data. In Bug 1689373, the more appropriate Mann-Whitney U test will be run alongside our current t-test.
I think I'm just going to run this test again and see what happens.
If we can keep reproducing these results then I think this has merit.
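For reference, both tests mentioned above are available in scipy. A minimal sketch on made-up latency samples (the values and variable names are illustrative, not the real harness data); the rank-based Mann-Whitney U test makes no normality assumption, which is why it is the better fit for bimodal pageload timings:

```python
from scipy.stats import mannwhitneyu, ttest_ind

# Hypothetical bimodal channel-completion samples in ms: a fast cluster
# (cache-like) and a slow cluster (network-like), as seen in pageload data.
baseline = [120, 135, 150, 140, 2100, 2300, 2500, 2200]
test     = [115, 130, 145, 138, 2050, 2250, 2450, 2150]

# Welch's t-test compares means; Mann-Whitney U compares rank distributions.
t_stat, t_p = ttest_ind(baseline, test, equal_var=False)
u_stat, u_p = mannwhitneyu(baseline, test, alternative="two-sided")
print(f"Welch t-test p={t_p:.3f}, Mann-Whitney U p={u_p:.3f}")
```

With heavily bimodal samples like these, the t-test's mean and variance are dominated by the slow cluster, so a consistent shift in the fast cluster can go undetected while the rank-based test still sees it.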
Reporter
Comment 9•4 years ago
Unfortunately, the possible gains that I saw in Comment 8 did not reproduce.
- Linux: overall network 5.23% slower
- MacOS: overall network 0.99% slower
- Windows: overall network 1.02% faster
If folks have ideas on other IPC messages to raise the priority of, let me know.
Otherwise I suggest closing this one.
Reporter
Comment 10•2 years ago
We expanded on this experiment:
- also increasing the priority of events headed to the parent process
- also trying the new event priority renderblocking

These changes seemed to regress a good number of pages and improve a few.
We didn't pick up anything statistically significant on Speedometer2.
But the binaries are there for manual profiling if anyone is interested.