Closed
Bug 1410077
Opened 8 years ago
Closed 1 year ago
LabeledEventQueue::GetEvent is slow
Categories
(Core :: XPCOM, defect, P2)
Tracking
RESOLVED WONTFIX
People
(Reporter: smaug, Unassigned)
References
(Blocks 1 open bug)
Description
In the Zoom profile mentioned in https://bugzilla.mozilla.org/show_bug.cgi?id=1254240#c0, GetEvent takes 3% of main-thread time.
85% of GetEvent's time goes to RemoveEntry.
Comment 1•8 years ago
(In reply to Olli Pettay [:smaug] (work week Oct 16-20) from comment #0)
> In the Zoom profile mentioned in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1254240#c0, GetEvent takes 3%
> of main-thread time. 85% of GetEvent's time goes to RemoveEntry.
Is this the mLabeled.Remove(group) call here:
http://dxr.mozilla.org/mozilla-central/source/xpcom/threads/LabeledEventQueue.cpp#229
?
Flags: needinfo?(bugs)
Reporter
Comment 2•8 years ago
er, in the testcase mentioned...
Reporter
Comment 3•8 years ago
I assume so, since I don't see other hashtable remove calls there.
Flags: needinfo?(bugs)
Comment 4•8 years ago
Bug 1410575 will help with some of the hashtable overhead, though it doesn't get rid of the Remove(). I wonder if we'd be better off always RawRemove'ing empty queues from the hashtable, effectively never compacting it? With PLDHashTable's current shrinking strategy that might not be a good idea, but changes to PLDHashTable might make it reasonable. We wouldn't expect to have a huge maximum number of SchedulerGroups during a browser session, maybe several hundred or so.
Depends on: 1410575
Priority: -- → P2
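
The trade-off comment 4 describes is easier to see in a toy open-addressing table. The sketch below is a from-scratch illustration, not PLDHashTable's actual code; only the Remove/RawRemove naming is borrowed. Remove() occasionally pays for a shrinking rehash, while RawRemove() just drops a tombstone and returns:

#include <cstdint>
#include <vector>

// Toy open-addressing table, not PLDHashTable. Keys are arbitrary
// integers other than the two sentinel values below.
class ToyTable {
  static constexpr uintptr_t kFree = 0;
  static constexpr uintptr_t kTombstone = 1;
  std::vector<uintptr_t> mSlots = std::vector<uintptr_t>(64, kFree);
  size_t mEntries = 0;  // live keys
  size_t mRemoved = 0;  // tombstones

  // Linear probing: walk over tombstones, stop at a free slot or a match.
  size_t FindSlot(uintptr_t aKey) const {
    size_t i = aKey % mSlots.size();
    while (mSlots[i] != kFree && mSlots[i] != aKey) {
      i = (i + 1) % mSlots.size();
    }
    return i;
  }

  void Rehash(size_t aNewCapacity) {
    std::vector<uintptr_t> old = std::move(mSlots);
    mSlots = std::vector<uintptr_t>(aNewCapacity, kFree);
    mRemoved = 0;  // tombstones are purged by reinserting only live keys
    for (uintptr_t key : old) {
      if (key != kFree && key != kTombstone) {
        mSlots[FindSlot(key)] = key;
      }
    }
  }

 public:
  void Add(uintptr_t aKey) {
    if ((mEntries + mRemoved + 1) * 2 > mSlots.size()) {
      Rehash(mSlots.size() * 2);  // keep live + tombstone load under 50%
    }
    size_t i = FindSlot(aKey);
    if (mSlots[i] != aKey) {
      mSlots[i] = aKey;
      ++mEntries;
    }
  }

  // RawRemove: drop a tombstone and stop. O(1), never rehashes, so a hot
  // add/remove cycle pays no compaction cost.
  void RawRemove(uintptr_t aKey) {
    size_t i = FindSlot(aKey);
    if (mSlots[i] == aKey) {
      mSlots[i] = kTombstone;
      --mEntries;
      ++mRemoved;
    }
  }

  // Remove: like RawRemove, but shrink once the table gets sparse. The
  // occasional rehash here is the kind of cost a per-event Remove() in
  // GetEvent keeps paying.
  void Remove(uintptr_t aKey) {
    RawRemove(aKey);
    if (mSlots.size() > 64 && mEntries < mSlots.size() / 8) {
      Rehash(mSlots.size() / 2);
    }
  }
};

Never shrinking keeps the per-event cost flat, at the price of a table that stays at its high-water mark, which is fine if the number of SchedulerGroups really stays in the hundreds.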
Reporter
Comment 5•8 years ago
Do we need to remove the group synchronously every time, or could we do cleanups occasionally?
Though the RunnableEpochQueue* queue = mLabeled.Get(group); lookup is equally worrisome.
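
For concreteness, here is one shape the occasional cleanup could take, in plain C++ with invented names (Pop, Sweep, kSweepThreshold), not the real LabeledEventQueue code: empty per-group queues stay in the map, and a batched sweep erases them once enough have accumulated, so the hot path never mutates the hashtable.

#include <memory>
#include <queue>
#include <unordered_map>

struct Runnable {};
using RunnableQueue = std::queue<std::unique_ptr<Runnable>>;

class LabeledQueueSketch {
  std::unordered_map<const void*, RunnableQueue> mLabeled;  // group -> queue
  size_t mEmptyQueues = 0;  // heuristic; a refilled queue may be counted twice
  static constexpr size_t kSweepThreshold = 64;  // invented tuning knob

 public:
  void Dispatch(const void* aGroup, std::unique_ptr<Runnable> aEvent) {
    mLabeled[aGroup].push(std::move(aEvent));
  }

  std::unique_ptr<Runnable> Pop(const void* aGroup) {
    auto it = mLabeled.find(aGroup);
    if (it == mLabeled.end() || it->second.empty()) {
      return nullptr;  // entry may exist but be drained; tolerate that
    }
    std::unique_ptr<Runnable> event = std::move(it->second.front());
    it->second.pop();
    if (it->second.empty() && ++mEmptyQueues >= kSweepThreshold) {
      Sweep();  // amortized: one table mutation per kSweepThreshold drains
    }
    return event;
  }

 private:
  void Sweep() {
    for (auto it = mLabeled.begin(); it != mLabeled.end();) {
      if (it->second.empty()) {
        it = mLabeled.erase(it);
      } else {
        ++it;
      }
    }
    mEmptyQueues = 0;
  }
};

Note the extra emptiness checks this requires on every lookup; that is the cost of deferring removal.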
Comment 6•8 years ago
(In reply to Olli Pettay [:smaug] (work week Oct 16-20) from comment #5)
> Do we need to remove the group synchronously every time, or could we do
> cleanups occasionally?
We could clean up every so often, I guess. That would require extra checks elsewhere, though.
> Though the RunnableEpochQueue* queue = mLabeled.Get(group); lookup is equally worrisome.
What is worrisome about that? The queue is owned by the hashtable...
Reporter
Comment 7•8 years ago
Hashtable lookups are slow. Many of the Quantum Flow bugs were about reducing hashtable usage.
Comment 8•8 years ago
(In reply to Olli Pettay [:smaug] from comment #7)
> Hashtable lookups are slow. Many of the Quantum Flow bugs were about
> reducing hashtable usage.
Well, reducing dumb hashtable usage, yes, like looking up the same entry multiple times: once to examine it and once to remove it.
We could make the hashtable usage go away entirely by just baking in the queue to SchedulerGroup itself. This would have some overhead when we're not actually using the scheduling, but the cost should be minimal. Then we wouldn't need the hashtable at all: we'd just ask the scheduler group for its queue.
Actually, we could improve things further if we did that. IIUC, right now, we have the following steps on Dispatch() for the content process:
1. Allocate a runnable that knows what SchedulerGroup it was dispatched to.
2. Dispatch said runnable to a thread.
3. Thread dispatches to the internal queue.
4. Queue asks "Do you have an associated SchedulerGroup?" and handles appropriately.
Olli has filed bugs on step 1 being slow, I think, and step 4 requires a QI, which is not super-speedy. But shouldn't we be able to make those steps go away entirely by being smarter about how Dispatch works? Bill, WDYT?
Flags: needinfo?(wmccloskey)
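
A rough sketch of what baking the queue into the group could look like, with stand-in types (SchedulerGroupSketch and Runnable here are illustrations, not Gecko classes): dispatching and draining become plain member accesses with no hashtable in sight.

#include <memory>
#include <queue>

struct Runnable {};

// Stand-in for a SchedulerGroup that owns its event queue directly.
class SchedulerGroupSketch {
  std::queue<std::unique_ptr<Runnable>> mQueue;

 public:
  void Dispatch(std::unique_ptr<Runnable> aEvent) {
    mQueue.push(std::move(aEvent));  // one pointer chase, no hashing
  }

  // When the queue drains it simply stays empty until reused; there is
  // no hashtable entry to Remove() anywhere.
  std::unique_ptr<Runnable> GetEvent() {
    if (mQueue.empty()) {
      return nullptr;
    }
    std::unique_ptr<Runnable> event = std::move(mQueue.front());
    mQueue.pop();
    return event;
  }
};

The scheduler would still need some structure for finding groups with pending events, say a list of non-empty groups, but the per-event Get()/Remove() pair on the hashtable disappears.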
Reporter
Comment 9•8 years ago
I wasn't profiling at a level which would show QI being slow. It isn't that slow compared to allocating extra runnables or doing hashtable lookups.
I think we could probably eliminate the extra runnable now. We would have to pass the SchedulerGroup down through various levels, and then pass it back up again to the nsThread code when it calls GetEvent.
Getting rid of the hashtable operations seems a little bit easier, though, if we stick everything on SchedulerGroup. The one tricky part is that there are multiple queues (one per priority level). But we could index everything on SchedulerGroup by the priority.
Flags: needinfo?(wmccloskey)
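
Extending the previous sketch to the multiple-queues point: a small fixed array indexed by a priority enum keeps the per-priority lookup at array-offset cost. The priority names below are invented for illustration and are not Gecko's actual levels.

#include <array>
#include <memory>
#include <queue>

struct Runnable {};

enum class Priority : size_t { High, Input, Normal, Idle, Count };

// Stand-in for a SchedulerGroup holding one queue per priority level.
class PrioritizedGroupSketch {
  std::array<std::queue<std::unique_ptr<Runnable>>, size_t(Priority::Count)>
      mQueues;

 public:
  void Dispatch(Priority aPriority, std::unique_ptr<Runnable> aEvent) {
    mQueues[size_t(aPriority)].push(std::move(aEvent));  // array index, no hash
  }

  // Service the highest-priority non-empty queue first.
  std::unique_ptr<Runnable> GetEvent() {
    for (auto& queue : mQueues) {
      if (!queue.empty()) {
        std::unique_ptr<Runnable> event = std::move(queue.front());
        queue.pop();
        return event;
      }
    }
    return nullptr;
  }
};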
Updated•3 years ago
Severity: normal → S3
LabeledEventQueue doesn't exist any more.
Status: NEW → RESOLVED
Closed: 1 year ago
Resolution: --- → WONTFIX