Open
Bug 994031
Opened 11 years ago
Updated 2 years ago
[Tarako] Gecko clogs up when sending multiple keystrokes
Categories
(Core :: DOM: Core & HTML, defect, P5)
Tracking
NEW
tracking-b2g: backlog
People
(Reporter: janjongboom, Unassigned)
References
Details
(Keywords: perf, Whiteboard: [c=progress p= s= u=tarako])
So we have fixed most of the Gaia issues in this regard, but there is clogging in Gecko when sending multiple keystrokes from keyboard -> content process.
This is on v1.3t (no OOP).
I send some keystrokes to a content process via the app manager:
'Jan is awesome'.split('').forEach(function(f) { sendKey(f.charCodeAt(0)) })
And we receive them in the content process (in this case the browser):
document.querySelector('#url-input').onkeydown = function() { console.log(+new Date() + ' keydown') }
Here are the results; you can see that the time from sendKey to the keydown event grows with each keystroke.
key  latency (ms)
0    68
1    79
2    104
3    119
4    140
5    162
6    178
7    193
8    216
9    229
10   259
11   287
12   307
13   324
Reporter
Updated•11 years ago
blocking-b2g: --- → 1.3T?
Reporter
Comment 1•11 years ago
Here are some numbers for sending 'abc...xyz'.
First char: MozKeyboard.js -> Keyboard.jsm: 36 ms; Keyboard.jsm -> forms.js: 58 ms.
Last char: MozKeyboard.js -> Keyboard.jsm: 191 ms; Keyboard.jsm -> forms.js: 262 ms.
So the IPC overhead grows over time. I don't know if there is an easy fix for that; maybe buffering the keystrokes or something similar.
Flags: needinfo?(xyuan)
Comment 2•11 years ago
Jan, does that also happen if you don't do that in a tight loop? You're not going back to the event loop in your forEach()...
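One way to test that suggestion outside a tight loop (a hypothetical sketch, not from the bug: `keystrokeSchedule` is a made-up helper; `sendKey` is the App Manager helper from comment 0) is to compute a send schedule up front and dispatch each keystroke on its own timeout, so the sender yields to the event loop between keys:

```javascript
// Hypothetical helper: pair each character code with a send time, so the
// keystrokes can be spaced out instead of sent in a tight forEach loop.
function keystrokeSchedule(text, delayMs) {
  return text.split('').map(function (ch, i) {
    return { code: ch.charCodeAt(0), at: i * delayMs };
  });
}

// Usage with the sendKey() helper from comment 0 (each send on its own
// event-loop turn):
// keystrokeSchedule('Jan is awesome', 50).forEach(function (k) {
//   setTimeout(function () { sendKey(k.code); }, k.at);
// });
```

If the latencies stay flat with this spacing, the growth in comment 0 is queuing in the receiver, not a per-key cost increase.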
Comment 3•11 years ago
Jan, would you mind investigating this further?
Component: Gaia::Keyboard → DOM
Flags: needinfo?(janjongboom)
Product: Firefox OS → Core
Comment 4•11 years ago
I don't know much about the IPC message implementation, but if we want to send multiple displayable characters at once, using |replaceSurroundingText| instead avoids sending multiple keystrokes.
Flags: needinfo?(xyuan)
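A hedged sketch of that idea (`sendText` is a made-up wrapper; the fallback path and its `sendKey` argument are assumptions, not code from the bug): send the whole string through the input context in one call when it is available, otherwise fall back to per-character keystrokes.

```javascript
// Hypothetical wrapper: one call (and one IPC round trip) for a whole string
// when an InputContext is available; per-character sendKey otherwise.
function sendText(ctx, text, sendKey) {
  if (ctx && typeof ctx.replaceSurroundingText === 'function') {
    // Insert `text` at the cursor, replacing nothing (offset 0, length 0).
    return ctx.replaceSurroundingText(text, 0, 0);
  }
  // Fallback: one keystroke per character, as in comment 0.
  text.split('').forEach(function (ch) { sendKey(ch.charCodeAt(0)); });
}
```

This only helps for plain displayable text, of course; control keys (backspace, enter) would still go through the keystroke path.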
Comment 5•11 years ago
(In reply to Jan Jongboom [:janjongboom] from comment #0)
> This is on v1.3t (no OOP).
No OOP but you talk about IPC in comment 1? I'm missing something here.
Anyhow, can we run gecko profiler (with native stack) on Tarako?
Comment 6•11 years ago
(In reply to Olli Pettay [:smaug] from comment #5)
> (In reply to Jan Jongboom [:janjongboom] from comment #0)
> > This is on v1.3t (no OOP).
> No OOP but you talk about IPC in comment 1? I'm missing something here.
The keyboard app is not OOP, but the app that holds the input field is OOP.
> Anyhow, can we run gecko profiler (with native stack) on Tarako?
Yes we can profile.
Comment 7•11 years ago
ni? Thinker, is it something your team can help looking into? Thanks
Flags: needinfo?(tlee)
Comment 8•11 years ago
I think this is an invalid bug. B2G sends and handles keystrokes one by one. The sender here is producing keystrokes at a very fast pace, faster than the receiver's throughput, so the tasks pile up in the receiver's queues waiting to be processed. It is entirely reasonable that the receiver takes longer than the sender; the steadily increasing latency is queuing latency. I don't think this is a valid bug.
The right way to read the log is to check whether the list [Time(recv_key(n+1)) - Time(recv_key(n)) for n = 0..12] is stable.
Flags: needinfo?(tlee)
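Applying that reading to the latency table in comment 0 (with the assumption that the sends in the tight loop are near-instant, so the recv-to-recv deltas are approximately the successive latency differences):

```javascript
// Per-key latencies (ms from sendKey to the handler firing), from comment 0.
var latencies = [68, 79, 104, 119, 140, 162, 178, 193, 216, 229, 259, 287, 307, 324];

// Inter-arrival deltas, Time(recv_key(n+1)) - Time(recv_key(n)):
var deltas = latencies.slice(1).map(function (t, i) { return t - latencies[i]; });

// The deltas stay in a narrow 11-30 ms band instead of growing, which is
// consistent with a roughly fixed per-key processing cost plus queuing,
// as comment 8 argues, rather than per-key work that gets slower over time.
```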
Reporter
Comment 10•11 years ago
Well, the thing is that we have seen this clogging in the UI. This is a real problem. See also bug 857907.
Flags: needinfo?(janjongboom)
Comment 11•11 years ago
If we know we're about to send many key events to a child process, we should send them in one batch
and let the child process then deal with them one by one.
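A minimal sketch of such batching (`makeKeyBatcher` and `sendBatch` are made-up names; a real fix would live in the keyboard IPC layer): the parent side queues key codes and flushes them all as a single message, e.g. from a zero-delay timeout, and the child replays them one by one.

```javascript
// Hypothetical parent-side batcher: collect key codes, then ship them all
// in one message via sendBatch() instead of one IPC message per key.
function makeKeyBatcher(sendBatch) {
  var pending = [];
  return {
    queue: function (code) { pending.push(code); },
    flush: function () {
      // splice(0) empties `pending` and hands the whole batch over at once.
      if (pending.length) sendBatch(pending.splice(0));
    }
  };
}

// Usage: queue keys as they are typed, flush when the loop goes idle, e.g.
// var batcher = makeKeyBatcher(sendBatchOverIPC);
// batcher.queue(keyCode); setTimeout(function () { batcher.flush(); }, 0);
```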
Comment 12•11 years ago
Socket buffering in the kernel should already handle keystrokes that are sent very fast. The Linux kernel doesn't context-switch immediately for I/O (IPC); it keeps the current thread/process running until its time slice is up, except for threads with real-time priority.
The problem here is that the handler is slower than the sender. Sending events in a batch doesn't make the handler faster, and the kernel already does the buffering for us.
Native widgets, e.g. a text field, could process keystrokes in a batch. But for a JS handler, I don't see how batching would improve things.
Comment 13•11 years ago
So we should speed up the handler then.
(Sending fewer IPC messages would also be good.)
Is there a minimal testcase for this, something one could use for profiling?
Updated•11 years ago
Priority: -- → P3
Whiteboard: [c=progress p= s= u=]
Updated•10 years ago
See Also: → 921299
Whiteboard: [c=progress p= s= u=] → [c=progress p= s= u=tarako]
Comment 14•10 years ago
See Bug 921299. It looks like the "clogging up of UI" is primarily caused by the synchronous call to RequestNativeKeyBinding. Now that the reflow issues with the keyboard are fixed, that's the next big pain point to be resolved.
Assignee
Updated•10 years ago
blocking-b2g: backlog → ---
tracking-b2g: --- → backlog
Assignee
Updated•6 years ago
Component: DOM → DOM: Core & HTML
Updated•2 years ago
Severity: normal → S3