[messages] More perf improvements


Status

defect
RESOLVED WONTFIX
Opened: 5 years ago
Last modified: 2 years ago

People

(Reporter: julienw, Unassigned)

Tracking

({meta})

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [p(2.2S5)=1][p(2.2S4)=1])

Attachments

(3 attachments, 1 obsolete attachment)

Reporter

Description

5 years ago
In this meta bug, I want to investigate possible improvements we can make solely in Gaia::SMS.
Reporter

Comment 1

5 years ago
I've made a simple script to automatically insert time/timeEnd calls at some places. You can find it here: https://github.com/julienw/gaia-juju-tools/blob/master/perf-instrument
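For illustration, this is a minimal sketch of the kind of instrumentation such a script injects: wrap a method so entry and exit are bracketed by console.time/console.timeEnd. The `Settings`/`init` names below are stand-ins, not the actual Gaia code.

```javascript
// Wrap obj[methodName] so each call is timed with a "file: method" label.
function instrument(obj, name, methodName) {
  const original = obj[methodName];
  obj[methodName] = function (...args) {
    const label = `${name}: ${methodName}`;
    console.time(label);
    try {
      return original.apply(this, args);
    } finally {
      console.timeEnd(label); // logs the elapsed time for this label
    }
  };
}

// Hypothetical usage:
const Settings = {
  init() { /* expensive startup work */ return 'ready'; }
};
instrument(Settings, 'settings.js', 'init');
Settings.init();
```

The real script rewrites the source files instead of patching objects at runtime, but the measured span is the same idea.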
Reporter

Comment 2

5 years ago
Posted file One run on Buri (obsolete)
Lines starting with a file name (like "filename:") show how much time merely executing that script takes.
Lines starting with "settings.js: init:" show how much time the "init" method takes.
Reporter

Updated

5 years ago
Blocks: 1074783
Reporter

Comment 3

5 years ago
So, what stands out?

These files take too long merely to execute:
* notify.js takes more than 50ms
* possibly utils.js (more than 1ms)
* possibly app.js (more than 1ms)

These files have a long init that's executed at startup:
* settings.js: 26ms
* time_headers.js: 37.28ms
* thread_list_ui.js: nearly 35ms -- of this, instantiating StickyHeader might be a culprit
* possibly activity_handler.js: nearly 9ms


Note that the script did not instrument shared libraries; I think we should measure those too.
Reporter

Comment 4

5 years ago
Posted file Another run on Buri
Attachment #8509466 - Attachment is obsolete: true
Reporter

Comment 5

5 years ago
Here is the code I pushed to measure the timings.
Reporter

Comment 6

5 years ago
I fixed Compose and ThreadUI because the simple script couldn't properly figure out where to put the timeEnd calls.
I also instrumented shared.

What do we see then?

These additional files take time to execute:
* gaia-header takes 353ms
* l10n takes 10ms
* date_time_helper takes 40ms
* fb_reader_utils takes 7ms

This file takes time to init:
* thread_ui.js takes 80ms
Reporter

Comment 7

5 years ago
Note: gaia-header in Settings takes only 6ms.
Reporter

Updated

5 years ago
Depends on: 1087981
Reporter

Updated

5 years ago
Depends on: 1089145
Reporter

Updated

5 years ago
Depends on: 1089154
Reporter

Updated

5 years ago
Depends on: 1086529
Reporter

Updated

5 years ago
Depends on: 1109754
Comment 8

5 years ago
We should consider making this a feature blocker if it improves launch time.
feature-b2g: --- → 2.2?
feature-b2g: 2.2? → 2.2+
Reporter

Updated

4 years ago
Whiteboard: [p=1]
Reporter

Comment 9

4 years ago
p=1 to investigate more
Reporter

Updated

4 years ago
Depends on: 1118214
Reporter

Updated

4 years ago
Depends on: 1118215
Comment 10

4 years ago
Hey Bevis,

Currently we're trying to squeeze more performance out of SMS app and we played a bit with "maxReadAheadEntries" thing introduced in patch for bug 1057915.

As you probably remember, we dispatch the "moz-app-visually-complete" event once we have loaded and rendered 9 threads (per the comment in the code, this number was taken from the number of threads visible on a Peak). A Flame can show up to 8 threads (8 for the reference workload, where we have only one date group/header, and 6 when every thread on the first page sits inside its own date group). And we obviously spend much more time retrieving the last, 9th thread (~200 ms on my Flame with an engineering debug build), probably because of the fact that we read ahead the next chunk of threads.

So it would be nice to have more insight from you on that to understand how we can tune this :) 

* What if we increase the "maxReadAheadEntries" default value to 9 or 10 -- will we see a noticeable increase in memory pressure (e.g. for a Flame with 319 MB)?

* What is the right way to tune this setting/preference? Some kind of per-device/operator config? Or could the SMS app calculate an appropriate value on first launch and save it to mozSettings (if that's possible at all, of course)?

Thanks!
Flags: needinfo?(btseng)
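The "dispatch visually-complete after the first N rendered threads" logic described above can be sketched roughly as follows. `VISIBLE_THREADS = 9` comes from the comment; `makeThreadRenderer` and the callback are illustrative stand-ins, not the real Gaia API (the app would dispatch the "moz-app-visually-complete" event at that point).

```javascript
// Number of threads assumed visible on the first screen (from the comment).
const VISIBLE_THREADS = 9;

// Returns a render function that invokes onVisuallyComplete exactly once,
// as soon as the first VISIBLE_THREADS threads have been rendered.
function makeThreadRenderer(onVisuallyComplete) {
  let rendered = 0;
  let fired = false;
  return function renderThread(thread) {
    rendered += 1; // the real code would build and append the DOM node here
    if (!fired && rendered >= VISIBLE_THREADS) {
      fired = true;
      onVisuallyComplete(); // i.e. dispatch 'moz-app-visually-complete'
    }
  };
}
```

The cost discussed in this comment is that retrieving the 9th thread can itself be slow when the backend reads ahead a whole next chunk at that point.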
Comment 11

4 years ago
(In reply to Oleg Zasypkin [:azasypkin] from comment #10)
> Hey Bevis,
> 
> Currently we're trying to squeeze more performance out of SMS app and we
> played a bit with "maxReadAheadEntries" thing introduced in patch for bug
> 1057915.
> 
> As you probably remember we dispatch "moz-app-visually-complete" event once
> we loaded and rendered 9 threads (per comment in code this number was taken
> from number of thread visible on Peak), Flame can have up to 8 threads (8
> for reference workload where we have only one date group/header, and 6 when
> every thread on first page inside its own date group). And obviously we
> spent much more time to retrieve last 9th thread (~200 ms for my Flame &
> engineering debug build) probably because of the fact that we read ahead
> next chunk of threads.
> 
> So it would be nice to have more insight from you on that to understand how
> we can tune this :) 
> 
> * What if we increase "maxReadAheadEntries" default value to 9 or 10, will
> we have a noticeable increase in a memory pressure (eg. for Flame 319mb)?
It should be fine, because the properties of a message thread are simple, so it shouldn't increase memory usage too much IMHO.
In addition, I expect that devices with a larger resolution also have more memory.

> * What is the right way to tune this setting/preference? Kind of per
> device/operator config? Or SMS app can calculate appropriate value on the
> first launch and save it to mozSettings (if possible at all of course)? 

The only thing I can imagine is to ensure that the scroll rate in Gaia is <= the retrieval rate from the DB (items per second).

> Thanks!
Flags: needinfo?(btseng)
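A back-of-envelope version of that rule of thumb: if fetching a chunk of k items takes t milliseconds, the retrieval rate is k / (t/1000) items per second, and it must be at least the scroll rate. The function below and all its numbers are illustrative, not measured values or real Gaia code.

```javascript
// Given an assumed scroll rate (items/s), chunk fetch latency (ms) and
// chunk size (items), compute the DB retrieval rate and how many items
// the user can scroll past while one chunk is still in flight.
function minReadAhead(scrollRateItemsPerSec, chunkFetchMs, chunkSize) {
  const retrievalRate = chunkSize / (chunkFetchMs / 1000); // items per second
  const scrolledDuringFetch = scrollRateItemsPerSec * (chunkFetchMs / 1000);
  return {
    retrievalRate,
    keepsUp: retrievalRate >= scrollRateItemsPerSec,
    readAheadSuggestion: Math.ceil(scrolledDuringFetch)
  };
}
```

For example, with a hypothetical 200 ms fetch of an 8-thread chunk and a user scrolling past 10 threads per second, retrieval (40 items/s) keeps up with scrolling.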
Comment 12

4 years ago
Removing the feature-b2g flag from the meta bug.
feature-b2g: 2.2+ → ---
Reporter

Comment 13

4 years ago
One more point to investigate: the read-ahead setting and async scripts.
Whiteboard: [p=1] → [p(2.2S5)=1][p(2.2S4)=1]
Comment 14

4 years ago
Posted file Link to github
Hi Julien, it's a test patch that includes some small refinements in the renderThread path:
- Remove some unnecessary class/innerHTML setup.
- Refine the list/container insertion logic. In the original logic, a new node is compared against the existing nodes' timestamps from the newest to the oldest. That is good when inserting a single new thread node, but really bad when inserting threads in renderThread, because there the timestamp of each new node is always the oldest in the list, which means we always traverse the whole list to find the correct position. I changed the comparison to start from the oldest node instead, which should reduce the checking needed for insertion. The overall time is reduced by ~50ms on my Flame.
Attachment #8564061 - Flags: feedback?(felash)
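The asymmetry described in comment 14 can be illustrated with a plain array standing in for the DOM list. `insertSortedFromOldest` is a hypothetical name: scanning from the oldest end makes the batch-render case (each new item is the oldest so far) terminate immediately, while scanning from the newest end would walk the whole list every time.

```javascript
// list holds timestamps sorted newest-first (descending), mirroring a
// thread list with the most recent thread on top.
// Returns the index where `timestamp` was inserted.
function insertSortedFromOldest(list, timestamp) {
  let i = list.length;
  // Walk backward from the oldest entry until we find one at least as new.
  while (i > 0 && list[i - 1] < timestamp) {
    i -= 1;
  }
  list.splice(i, 0, timestamp);
  return i;
}
```

When renderThread appends ever-older threads, the while loop exits on its first check, so each insertion is O(1) instead of O(n).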
Reporter

Comment 15

4 years ago
(In reply to Steve Chung [:steveck](OOO 2/18 - 2/23) from comment #14)
> Created attachment 8564061 [details] [review]
> Link to github
> 
> Hi Julien, it's a test patch that including some small refinement in
> renderThread path:
> - Remove some unnecessary class innerHTML setup
> - Refine the list/container insertion logic. In the original logic, the new
> node will check the nodes' timestamp from the latest to the oldest. It is
> good when inserting a new thread node, but really bad for inserting the
> threads in renderThread(Because the timestamp of node is always the oldest
> in the list, which mean it always travse all the list for finding the
> correct position). I change the checking from the oldest first, and it
> should reduce the checking for insertion. The overall time reduced ~50ms on
> my flame.

Is it 50ms for the first panel or 50ms for the whole thread list?

Because we're inserting only 9 threads, I doubt that the insertion logic change alone saves 50ms... Can you try the two changes separately?

Also, maybe we should use a data model in JavaScript instead of relying on the DOM to find the correct location to insert a thread...
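A sketch of that data-model suggestion, under the assumption that the model keeps thread timestamps in a sorted array: a binary search finds the insertion index in O(log n) without touching any DOM nodes. `insertionIndex` is an illustrative name, not actual Gaia code.

```javascript
// timestamps is sorted newest-first (descending).
// Returns the index at which `ts` should be inserted to keep that order.
function insertionIndex(timestamps, ts) {
  let lo = 0;
  let hi = timestamps.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (timestamps[mid] > ts) {
      lo = mid + 1; // ts is older than mid, look in the later half
    } else {
      hi = mid; // ts is at least as new as mid, look in the earlier half
    }
  }
  return lo;
}
```

The DOM node would then be inserted at the position the model reports, e.g. with Node.insertBefore on the child at that index.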
Reporter

Updated

4 years ago
Attachment #8564061 - Flags: feedback?(felash)
Reporter

Updated

4 years ago
Depends on: 1136170
Reporter

Comment 16

2 years ago
Mass closing of Gaia::SMS bugs. End of an era :(
Status: NEW → RESOLVED
Last Resolved: 2 years ago
Resolution: --- → WONTFIX