Open Bug 994031 Opened 11 years ago Updated 2 years ago

[Tarako] Gecko clogs up when sending multiple keystrokes

Categories

(Core :: DOM: Core & HTML, defect, P5)

x86
macOS

Tracking

tracking-b2g backlog

People

(Reporter: janjongboom, Unassigned)

References

Details

(Keywords: perf, Whiteboard: [c=progress p= s= u=tarako])

So we have fixed most of the Gaia issues in this regard, but there is clogging in Gecko when sending multiple keystrokes from the keyboard -> content process. This is on v1.3t (no OOP).

I send some keystrokes to a content process via the App Manager:

'Jan is awesome'.split('').forEach(function(f) { sendKey(f.charCodeAt(0)) })

And we receive them in the content process (in this case the browser):

document.querySelector('#url-input').onkeydown = function() { console.log(+new Date() + ' keydown') }

Here are the results. You can see that the time from sendKey to the keydown event goes up with every keystroke (index, latency in ms):

0   68
1   79
2  104
3  119
4  140
5  162
6  178
7  193
8  216
9  229
10 259
11 287
12 307
13 324
blocking-b2g: --- → 1.3T?
Keywords: perf
Here are some numbers for sending 'abc...xyz'. First char: MozKeyboard.js -> Keyboard.jsm 36 ms, Keyboard.jsm -> forms.js 58 ms. Last char: MozKeyboard.js -> Keyboard.jsm 191 ms, Keyboard.jsm -> forms.js 262 ms. So the IPC overhead grows with each keystroke. I don't know if there is an easy fix for that; maybe buffering or something.
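A minimal sketch of what such sender-side buffering could look like (illustrative only; sendKeyBuffered and sendKeyBatch are hypothetical names, not existing Keyboard.jsm APIs): coalesce all keys queued in the same turn and send them as one message instead of one IPC round-trip per key.

var pendingKeys = [];
var flushScheduled = false;

function sendKeyBuffered(charCode) {
  pendingKeys.push(charCode);
  if (flushScheduled) {
    return;
  }
  flushScheduled = true;
  // Flush after the current turn, so a burst of calls becomes a single message.
  setTimeout(function () {
    flushScheduled = false;
    var batch = pendingKeys;
    pendingKeys = [];
    sendKeyBatch(batch); // hypothetical single message carrying all queued chars
  }, 0);
}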
Flags: needinfo?(xyuan)
Jan, does that also happen if you don't do that in a tight loop? You're not going back to the event loop in your forEach()...
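For example, a sketch of the same test that yields back to the event loop between keystrokes (reusing the sendKey helper from comment 0), so each key gets its own turn instead of the whole burst being queued at once:

var keys = 'Jan is awesome'.split('');
(function sendNext(i) {
  if (i >= keys.length) return;
  sendKey(keys[i].charCodeAt(0));
  setTimeout(function () { sendNext(i + 1); }, 0); // yield before the next key
})(0);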
Jan, would you mind investigating this further?
Component: Gaia::Keyboard → DOM
Flags: needinfo?(janjongboom)
Product: Firefox OS → Core
I don't know much about the IPC message implementation, but if we want to send multiple displayable characters at once, using |replaceSurroundingText| instead avoids sending multiple individual keystrokes.
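A rough sketch of that idea, assuming the keyboard app holds an active InputContext (navigator.mozInputMethod.inputcontext); the exact return type (DOMRequest vs. Promise) depends on the Gecko version, so no result handling is shown:

var ctx = navigator.mozInputMethod.inputcontext;
if (ctx) {
  // (text, offset, length): 0/0 inserts the text at the cursor without deleting anything.
  ctx.replaceSurroundingText('Jan is awesome', 0, 0);
}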
Flags: needinfo?(xyuan)
(In reply to Jan Jongboom [:janjongboom] from comment #0)
> This is on v1.3t (no OOP).

No OOP, but you talk about IPC in comment 1? I'm missing something here. Anyhow, can we run the Gecko profiler (with native stacks) on Tarako?
(In reply to Olli Pettay [:smaug] from comment #5)
> (In reply to Jan Jongboom [:janjongboom] from comment #0)
> > This is on v1.3t (no OOP).
> No OOP but you talk about IPC in comment 1? I'm missing something here.

The keyboard app is not OOP, but the app that holds the input field is OOP.

> Anyhow, can we run gecko profiler (with native stack) on Tarako?

Yes, we can profile.
ni? Thinker, is this something your team can help look into? Thanks
Flags: needinfo?(tlee)
I think this is an invalid bug. B2G sends and handles keystrokes one by one. The sender here is sending keystrokes at a very fast pace, faster than the receiver's throughput, so all the tasks sit in the receiver's queues waiting to be processed. It is entirely expected that the receiver takes more time than the sender to process the keystrokes; the incremental increase in latency is queuing latency. The right way to read the log is to check whether the list of [Time(recv_key(n+1)) - Time(recv_key(n)) for n = 0..12] is stable.
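Since the sends in comment 0 all happen in one tight loop, the per-key latencies roughly approximate receive times relative to a common start, so the deltas can be read off directly:

var latencies = [68, 79, 104, 119, 140, 162, 178, 193, 216, 229, 259, 287, 307, 324];
var deltas = latencies.slice(1).map(function (t, i) { return t - latencies[i]; });
console.log(deltas); // 11, 25, 15, 21, 22, 16, 15, 23, 13, 30, 28, 20, 17

That per-key spacing stays in the 11-30 ms range rather than growing, which supports the queuing-latency reading.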
Flags: needinfo?(tlee)
triage: let's not block on this bug
blocking-b2g: 1.3T? → backlog
Well, the thing is that we have seen this clogging up the UI; it is a real problem. See also bug 857907.
Flags: needinfo?(janjongboom)
If we know we're about to send many key events to the child process, we should send them in one batch and let the child process deal with them one by one.
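A sketch of that shape (onKeyBatch and handleSingleKey are placeholder names, not the actual forms.js handlers): the parent sends one message carrying the whole batch, and the child replays it one key per event-loop turn so it never blocks on the full burst at once.

function onKeyBatch(charCodes) {
  (function replay(i) {
    if (i >= charCodes.length) return;
    handleSingleKey(charCodes[i]); // placeholder for dispatching one key event
    setTimeout(function () { replay(i + 1); }, 0);
  })(0);
}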
The kernel should already handle this with socket buffering if keystrokes are sent very fast. The Linux kernel doesn't context-switch immediately on I/O (IPC); it keeps the current thread/process running until its time slice is up, except for threads with real-time priority. The problem here is that the handler is slower than the sender. Sending events in a batch doesn't make the handler faster, and the kernel already does the buffering for us. Native widgets, e.g. a text field, could process keystrokes in a batch, but for a JS handler I don't see how batching would improve things.
So we should speed up the handler then. (But sending fewer IPC messages would also be good.) Is there a minimal test case for this, something one could use for profiling?
Priority: -- → P3
Whiteboard: [c=progress p= s= u=]
See Also: → 921299
Whiteboard: [c=progress p= s= u=] → [c=progress p= s= u=tarako]
See Bug 921299. It looks like the "clogging up of UI" is primarily caused by the synchronous call to RequestNativeKeyBinding. Now that the reflow issues with the keyboard are fixed, that's the next big pain point to be resolved.
blocking-b2g: backlog → ---
Bulk priority change, per :mdaly
Priority: P3 → P5
Component: DOM → DOM: Core & HTML
Severity: normal → S3