Closed Bug 865747 Opened 12 years ago Closed 7 years ago

Contacts main view takes too long for seeking

Categories

(Firefox OS Graveyard :: Gaia::Contacts, defect, P2)

defect

Tracking

(blocking-b2g:-, b2g18+)

RESOLVED WONTFIX
blocking-b2g -
Tracking Status
b2g18 + ---

People

(Reporter: vingtetun, Assigned: kgrandon)

References

Details

(Keywords: perf, Whiteboard: c= p=5 ,)

Attachments

(2 files)

One of the main use cases for contacts is opening the app and seeking for one of your contacts. It takes way too long with the heavy workload. Blocking, since this is the main use case for this app.
Attached patch pocSplinter Review
Here is a little poc. It does not synchronize contacts correctly, but it makes seeking much (way much) faster with 2000 contacts. I feel like someone should take over this code and run with it.
Attachment #741915 - Flags: feedback?(alberto.pastor)
Hi there,

I already did a poc for caching (already synchronising with contacts changes), but after the last improvements in the backend I didn't feel it was needed anymore. On the frontend side, we had a performance review in Madrid, and the only small point to be fixed was the "renderOrg" method, which is obviously too expensive for what it does.

Could you please upload a video or give more details about STR (number of contacts, contact to be accessed, and times)? I'm not seeing those problems with my 500-contact dogfooding agenda.

Regarding the code, where can I find cache.js and visibility_monitor? It is important to know how long we spend retrieving the cache. On the other hand, are we sure retrieving the whole cache is not more expensive for showing the first chunk than doing it with cursors?

Thanks for the poc! Let me reproduce your use case and see how we can solve it.

Regards
Attached patch cache.jsSplinter Review
(In reply to Alberto Pastor [:albertopq] from comment #2)
> I already did a poc for caching (already synchronising with contacts
> changes), but after the last improvements in the backend I didn't feel it
> was needed anymore. On the frontend side, we had a performance review in
> Madrid, and the only small point to be fixed was the "renderOrg" method,
> which is obviously too expensive for what it does.

The backend returns all the information for a contact even if you only need some of it. A local cache frees you of this limitation, since you can build a view that fits exactly what you need to display. Also, the Contacts API converts all contacts from raw data to Contact objects in the child process, which costs more than 2 seconds for 2000 contacts.

So if you already have some code for synchronizing, I would recommend combining it with caching + visibility_monitor.js. That would let the contacts app scale to any number of contacts. (A fancier cache algorithm could be applied one day if this simple map proves not to be enough, or to consume too much memory. The simplest would be a second cache that keeps 6 contacts for each letter. That would make it incredibly fast to seek, and when the user releases their finger the first cache can be accessed to return the target letter.)

> Could you please upload a video or give more details about STR (number of
> contacts, contact to be accessed, and times)? I'm not seeing those problems
> with my 500-contact dogfooding agenda.

I'm using the x-heavy reference workload and it takes seconds for me to be able to seek to the end of my contacts list. I don't have what is necessary here to take a video, but this is obvious when you use the right workload!

STR:
- load the x-heavy workload
- start the contacts app and then try to seek to 'V' because you want to call me!

Actual results:
- The list of contacts for 'V' is not displayed immediately. I have to wait several seconds.

Expected results:
- The list of contacts for 'V' is displayed immediately.

> Regarding the code, where can I find cache.js and visibility_monitor? It is
> important to know how long we spend retrieving the cache. On the other hand,
> are we sure retrieving the whole cache is not more expensive for showing the
> first chunk than doing it with cursors?

visibility_monitor.js is already in shared; it is what the gallery app uses to display thousands of images quickly without busting memory.

It would be trivial to implement 2 caches: one for showing the first chunk plus the first 6 contacts of every letter for instant seeking, and a second that contains more information for when the user has chosen a letter or starts to scroll.

I forgot to add cache.js to the patch, sigh. Let's do it.

Also, using visibility_monitor.js will let us free tons of memory. It would probably be very instructive to run b2g-procrank with and without the proposed implementation.

I would also like to apply this to the SMS main list and the call log. Those are much simpler than contacts because there is no seeking, and it would make them render very fast for any number of rows.
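The two-tier idea described above (a tiny per-letter cache for instant seeking, with a fuller cache behind it) can be sketched roughly as follows. This is an illustrative sketch only, not the actual cache.js code from the patch; buildSeekCache and the contact shape are assumptions:

```javascript
// Illustrative sketch of the "6 contacts per letter" seek cache described
// above. Contacts are assumed to be objects with a givenName string; the
// real cache would store whatever fields the list view needs to render.
function buildSeekCache(contacts, perLetter = 6) {
  const byLetter = new Map();
  for (const contact of contacts) {
    const letter = (contact.givenName || '#').charAt(0).toUpperCase();
    if (!byLetter.has(letter)) {
      byLetter.set(letter, []);
    }
    const group = byLetter.get(letter);
    if (group.length < perLetter) {
      group.push(contact);  // keep only the first few for instant seeking
    }
  }
  return byLetter;
}

// Seeking to a letter is then a single Map lookup instead of a DB query.
const seekCache = buildSeekCache([
  { givenName: 'Alberto' },
  { givenName: 'Vivien' },
  { givenName: 'Victor' },
]);
// seekCache.get('V') holds the first (up to 6) 'V' contacts
```

Once the user settles on a letter, the fuller second cache would be consulted for the remaining rows of that group.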
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #3)
> Created attachment 743582 [details] [diff] [review]
> cache.js
> visibility_monitor.js is already in shared; it is what the gallery app uses
> to display thousands of images quickly without busting memory.

The ImageLoader component, I believe, does the same work as the visibility monitor, at least for images and FB information. In any case, I would like to see Vivien's proposal in the form of a PR, even if that PR does not merge with current code. Otherwise it is very difficult, at least for me, to read the attached .diff format.
(In reply to Vivien Nicolas (:vingtetun) (:21) from comment #3)
> So if you already have some code for synchronizing, I would recommend
> combining it with caching + visibility_monitor.js. That would let the
> contacts app scale to any number of contacts.

I love the idea of visibility_monitor.js; it reminds me of the ViewHolder pattern in Android, and I totally agree that it will save tons of memory.

> It would be trivial to implement 2 caches: one for showing the first chunk
> plus the first 6 contacts of every letter for instant seeking, and a second
> that contains more information for when the user has chosen a letter or
> starts to scroll.

If I understood correctly, you want one cache for quick display and a second one with all the information. But I guess that in this second cache you want to index by letter and seek by letter, right? If not, and we need to fetch this second cache for all the contacts, that may take a while as well (of course less than fetching it from the backend); should we mix this with using cursors?

One thing that I don't like is that we need to care about cache synchronization; hopefully it will just be "take it or regenerate it". But with things like Facebook synchronization that could happen frequently (photos, company name...).

> I forgot to add cache.js to the patch, sigh. Let's do it.
>
> Also, using visibility_monitor.js will let us free tons of memory. It would
> probably be very instructive to run b2g-procrank with and without the
> proposed implementation.

Totally agree.

> I would also like to apply this to the SMS main list and the call log. Those
> are much simpler than contacts because there is no seeking, and it would
> make them render very fast for any number of rows.

Why don't we sit down together one day to discuss this in person (or via Vidyo)?
(In reply to Francisco Jordano [:arcturus] from comment #5)
> If I understood correctly, you want one cache for quick display and a second
> one with all the information. But I guess that in this second cache you want
> to index by letter and seek by letter, right? If not, and we need to fetch
> this second cache for all the contacts, that may take a while as well (of
> course less than fetching it from the backend); should we mix this with
> using cursors?

The ideal solution would be what you suggest, though I wonder how long it takes to fetch 2000 contacts from a database. It would obviously be way faster than the current API, since it requires fetching only a specific set of information and does not require any conversion to a Contact object.

Combining them with cursors could be nice too. In fact, it would be awesome if we had an index by first letter in the cache!

> One thing that I don't like is that we need to care about cache
> synchronization; hopefully it will just be "take it or regenerate it". But
> with things like Facebook synchronization that could happen frequently
> (photos, company name...).

I agree that synchronization is an issue. But in the middle term I have heard some plans about having the API return what has changed since a specific 'changeset'. That would make synchronization much easier and would totally validate the local cache.

Jonas, any opinions?

> Why don't we sit down together one day to discuss this in person (or via
> Vidyo)?

Sure. I will reach you on IRC tomorrow if possible (and follow up on Vidyo if needed :)). Today is not really possible, since it seems my mood is buggy and you would r- me if we discussed :)
Flags: needinfo?(jonas)
Comment on attachment 743582 [details] [diff] [review]
cache.js

(The use of JSON.stringify/parse should probably be eliminated, since IndexedDB's use of structured clone is more featureful and efficient. Unless the goal is just to get toJSON called to get something that can be structured-cloned at all...)

Anywho, in the e-mail app we just landed caching changes yesterday, and we're cramming that data in a (not pretty!) cookie cache, at least for now. I'd only consider it if it's the only way to get your numbers down. See saveCookieCache/getCookieCache: https://github.com/mozilla-b2g/gaia-email-libs-and-more/blob/master/data/lib/mailapi/mailapi.js#L21
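The point about structured clone being more featureful than a JSON round-trip can be shown with a small generic sketch (not the cache.js code): types like Date and Map survive structured cloning, which is what IndexedDB uses internally when storing values, but are flattened by JSON.stringify/parse.

```javascript
// Sketch: JSON round-trip vs structured clone for a cached record.
const record = {
  name: 'Vivien',
  updated: new Date(0),
  tags: new Map([['group', 'friends']]),
};

// JSON round-trip: the Date becomes a plain string, the Map becomes {}.
const viaJson = JSON.parse(JSON.stringify(record));
// typeof viaJson.updated === 'string'; viaJson.tags has no entries

// Structured clone: types survive intact, and it skips the string detour.
const viaClone = structuredClone(record);
// viaClone.updated instanceof Date; viaClone.tags.get('group') === 'friends'
```

So storing the raw object in IndexedDB avoids both the stringify/parse CPU cost and the type loss.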
If we decide that a cache is needed, why do we actually want to cache the contact info instead of the already-generated HTML?

Would a System Message when the app is closed and a contact changes (in order to re-build the cache) help?

Let's talk tomorrow about all of this, as I'm not sure requesting the cache + the real contacts when needed + painting is going to be that fast. On the other hand, is 2000 contacts the average number we think we are going to have? I think we should only block on numbers we expect to be the target. For higher numbers, just tracking it should be enough.
(In reply to Andrew Sutherland (:asuth) from comment #7)
> (The use of JSON.stringify/parse should probably be eliminated, since
> IndexedDB's use of structured clone is more featureful and efficient.

That makes sense. There is no good reason for the code to use the JSON.* methods; I was just hacking.

> Anywho, in the e-mail app we just landed caching changes yesterday, and
> we're cramming that data in a (not pretty!) cookie cache, at least for now.
> I'd only consider it if it's the only way to get your numbers down. See
> saveCookieCache/getCookieCache:
> https://github.com/mozilla-b2g/gaia-email-libs-and-more/blob/master/data/lib/mailapi/mailapi.js#L21

I'm not dogmatic about using IndexedDB vs the rest of the world. But it could be fun to experiment with the cursor idea for indexes over letters.
(In reply to Alberto Pastor [:albertopq] from comment #8)
> If we decide that a cache is needed, why do we actually want to cache the
> contact info instead of the already-generated HTML?

I would say it is easier to maintain a cache of information than a cache of generated HTML. You would also end up with an out-of-date cache every time the app's CSS/HTML structure changes.

> Would a System Message when the app is closed and a contact changes (in
> order to re-build the cache) help?

I wish Jonas had a better solution for us, but in the meantime that sounds like an interesting idea to make the sync code much easier.

> Let's talk tomorrow about all of this, as I'm not sure requesting the cache
> + the real contacts when needed + painting is going to be that fast.

It is, if you try the poc. Seeking is instant.

> On the other hand, is 2000 contacts the average number we think we are going
> to have? I think we should only block on numbers we expect to be the target.

Remember that we can import contacts from many places... My girlfriend has ~750 contacts in her phone and there is no built-in import feature. But that's right, I would be happy to have precise numbers for how many contacts we expect. In the meantime (because I bet it would take months to get a real value that is not a guess; I wish I'm wrong here) it would be good to write this code and see the impact on memory too. Less memory == less killed == less cold restart.
What's the metric we'll use to measure success, and what target are we trying to hit here?
Whiteboard: u=user c=performance
Also, is this a regression from v1.0.1?
Not easy to provide metrics; it's about usability. I just tried with and without the patch. The video is made with 2000 contacts; Vivien's patched version is on the right, current master is on the left: http://bit.ly/1122AcS

The patched version is usable; the other is painful. I recommend leo+.
It's not a regression from v1.0.1. The video shows the win here, but it could still be useful to set a performance target as a time measurement. Product team, can you weigh in on whether this would be desired for v1.1 blocking? Otherwise we can track it and reason about an uplift nomination once the patch has had a chance to be dogfooded on master.
Flags: needinfo?(ffos-product)
The POC is a marked improvement. I'll defer to Sandip for the performance target, but given the focus on performance (across all partners) I would block once we have the target specified.
Flags: needinfo?(ffos-product) → needinfo?(skamat)
Anyone actively working this right now? I can take it on, but it will take me some time to spin up.
Assignee: nobody → bkelly
Status: NEW → ASSIGNED
Contacts app performance has been a big issue for most partners (via Qcom). I would like to get this in for 1.1. I would normally ask for measurements in such cases, but the video shows painfully slow behavior that we would obviously need to fix. What are the risks?
Flags: needinfo?(skamat)
Flags: needinfo?(jonas)
We're tracking this and would take an approval (based upon risk), but need to better understand if this should be a blocker. If it is a blocker, we need a target.
blocking-b2g: leo? → -
tracking-b2g18: --- → +
Whiteboard: u=user c=performance → u=user c=
Our competitive target is 1500ms for 1000 contacts. This is what Qcom has been benchmarking us against for a while now. Since above is with 2000 contacts, can we get measurements for 1000 contacts?
I can try a 1000 contact load on v1-train later today. Is the 1500ms target for a particular operation? For example, initial load, jumping from A->X, etc? Thanks!
That benchmark is for initial app launch (full load).
Here is a movie on v1-train using 1000 contacts: https://www.dropbox.com/s/puisiapxhxh6ubr/20130510_b2g_v1train_1000_contacts_scroll.mov

The scrolling is not fully functional until close to 20 seconds after launching contacts. Note, however, once the contact list has been fully loaded the app functions quickly. The slowdown does not occur again until the app is killed and relaunched.

I've started working from Vivien's poc in this branch: https://github.com/wanderview/gaia/tree/bug-865747-poc

So far I've addressed the review feedback from asuth in comment 7.
We were talking about adding a "generation" property to the contacts API which would allow the contacts app to see if any changes had been made since last time the contact list was changed. This might help knowing when the local cache should be invalidated.
(In reply to Jonas Sicking (:sicking) from comment #23)
> We were talking about adding a "generation" property to the contacts API
> which would allow the contacts app to see if any changes had been made since
> last time the contact list was changed.

FWIW, that already landed in bug 864556. The API is:

  navigator.mozContacts.getRevision().onsuccess = function(e) {
    dump("revision: " + e.target.result);
  };
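A minimal sketch of how such a revision number could gate a local cache; createRevisionedCache and the rebuild callback are hypothetical stand-ins for illustration, not the actual Gaia code:

```javascript
// Hypothetical revision-gated cache: remember the revision the cache was
// built against, and rebuild only when the current revision differs.
function createRevisionedCache(rebuild) {
  let cachedRevision = null;
  let cachedData = null;

  return {
    // On device, `currentRevision` would come from mozContacts.getRevision().
    get(currentRevision) {
      if (cachedData === null || cachedRevision !== currentRevision) {
        cachedData = rebuild();  // e.g. re-query contacts and re-index
        cachedRevision = currentRevision;
      }
      return cachedData;
    },
  };
}

// Usage: rebuild runs only when the revision changes.
let rebuilds = 0;
const cache = createRevisionedCache(() => ++rebuilds);
cache.get(1);  // first access: rebuild
cache.get(1);  // same revision: cache hit
cache.get(2);  // revision bumped: rebuild again
```

This sidesteps per-change synchronization entirely: the cache is either valid as a whole or regenerated.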
I did some rough profiling of the backend on Friday and over the weekend. I wanted to see if there was any hope of avoiding the app caching approach. It would be nice not to duplicate data on a device with constrained resources if we can help it.

The tl;dr version is that the ContactsAPI does not currently seem to meet our goal of loading 1000 contacts in 1500ms. It currently takes ~5300ms to load, and even with aggressive tuning still takes ~3800ms. This is pretty far from our goal, so I will dig into Vivien's caching approach now. (Sorry if everyone already knew this, but it was helpful for me to understand the system dynamics.)

Longer details for those interested:

Measuring at the cursor callback from the ContactsAPI, I found that it took ~5300ms to load 1001 contacts (see http://bit.ly/17l0LZO). In all of these tests the ContactsAPI is accessing its query cache. Under the hood this broke down as follows:

1) ~2000ms spent calling splice() 52 times in ContactDB.jsm sendChunk() to retrieve the next chunk of 20 contacts to return. See: http://mzl.la/17kY3DC

2) ~1250ms spent in IPC sending the chunk from ContactService.jsm to ContactManager.js.

3) ~2050ms spent passing the contacts from ContactManager.js to contacts_list.js. This includes IPC costs to ask ContactService.jsm to send the next chunk.

It's not 100% clear to me why the splice() in (1) is so costly. I assume that the splice() is actually operating on some kind of fancy IDB proxy object that does extra work when you first access a value.

I tried playing with CHUNK_SIZE and CONTACTS_SENDMORE_MINIMUM. Increasing CHUNK_SIZE to 1001 in order to return all contacts in one trip did reduce the overall load time by ~1500ms, but the total was still ~3800ms. This savings came mainly from (3) and partly from (1). Interestingly, the IPC time increased. (Of course, this approach also increases the delay before showing the first contact above the fold.)

Changes to CONTACTS_SENDMORE_MINIMUM did not have any appreciable effect. Increasing the value does increase process parallelism, but I think in this case we are just paying more context-switching costs with little benefit. With the query cache it does not seem that the CPU is waiting on external I/O; the time is mostly spent copying within memory, so reducing context switches increases overall performance. CONTACTS_SENDMORE_MINIMUM is already quite low, though, so this is already the case.

Full results are in a Google doc here: http://bit.ly/125yjoH

I also tried hacking in an API modification to return all contacts in a chunk to contacts_list.js as an array. This did not seem to have much impact in this measurement, since it really just saved some function calls with no copying, etc. The app code could possibly be more intelligent with an array of contacts, however, since it might be able to update larger parts of the DOM with fewer reflows.

So overall, it does not seem possible to meet our goal of loading 1000 contacts in 1500ms with the ContactsAPI alone at the moment. The IPC costs alone come close to our total time budget. Perhaps if there were some kind of "getSummary()" API that returned just given and family names, it could return faster. Therefore I will dig into the app and Vivien's caching approach now.
Another idea that might be worth investigating: Add a "starts with" filter to ContactsAPI. This would allow the app to load the first 6 contacts in each letter group at start. This might be fast enough to allow letter group scrolling "immediately" without caching. Of course, the remaining contacts in each group would still need to be loaded to allow normal scrolling after the letter jump.
(In reply to Ben Kelly [:bkelly] from comment #25)
> The tl;dr version is that the ContactsAPI does not currently seem to meet
> our goal of loading 1000 contacts in 1500ms. It currently takes ~5300ms to
> load, and even with aggressive tuning still takes ~3800ms.
>
> 1) ~2000ms spent calling splice() 52 times in ContactDB.jsm sendChunk() to
> retrieve the next chunk of 20 contacts to return.
>
> 2) ~1250ms spent in IPC sending the chunk from ContactService.jsm to
> ContactManager.js.
>
> 3) ~2050ms spent passing the contacts from ContactManager.js to
> contacts_list.js.
>
> It's not 100% clear to me why the splice() in (1) is so costly.

Excellent investigation, Ben. Now that you've got a clear understanding of where the time's being spent, please provide a brief prioritized list of possible changes, ranging from ideal (refactoring, etc.) to band-aid. Based on your findings, it looks like we need to solve the performance issues in items #1 and #3 for the biggest gain.

For #1, the slow splice() in ContactDispatcher.sendNow(): sendChunk():
+ Have you looked into not using splice, but instead moving the chunk of contacts into a new array and sending that to aCallback()?
+ aContacts appears to be the array of cached contacts, and the splice() call looks like it's only being used as a convenient way to create chunks.
+ aContacts.splice() may be spending extra time deleting the removed elements from, and resetting indices on, the large aContacts[].
+ It may be less work to create a new chunk array and move the elements from aContacts[]'s chunkStart index to CHUNK_SIZE into it, simultaneously setting those indices in aContacts[] to null.

For #2, ContactService.jsm => ContactManager.js:
+ Is there a way to share the cache array between them without passing it? If so, maybe instead of sending chunks we could send chunk start and end indices, and ContactManager could take care of creating the actual chunk...
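A rough sketch of the non-destructive chunking suggested above, assuming aContacts is a plain in-memory array (the real sendChunk in ContactDB.jsm differs): slice() copies a window of references without mutating the source, whereas splice() removes elements and shifts every remaining index on each call.

```javascript
// Sketch: read-only chunking with slice() instead of destructive splice().
// slice() copies chunkSize references without touching the source array,
// so the large cached array is never mutated or re-indexed between chunks.
const CHUNK_SIZE = 20;

function* chunked(aContacts, chunkSize = CHUNK_SIZE) {
  for (let start = 0; start < aContacts.length; start += chunkSize) {
    yield aContacts.slice(start, start + chunkSize);
  }
}

// Usage: the source array survives intact after all chunks are consumed.
const contacts = Array.from({ length: 45 }, (_, i) => ({ id: i }));
const chunks = [...chunked(contacts)];
// 3 chunks of sizes 20, 20, 5; contacts.length is still 45
```

Whether this beats splice() in practice would need measuring against the IDB proxy-object cost Ben describes, but it avoids the repeated re-indexing work at least.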
(In reply to Ben Kelly [:bkelly] from comment #25)
> I did some rough profiling of the backend on Friday and over the weekend. I
> wanted to see if there was any hope of avoiding the app caching approach.

The leo goal is displaying the contacts for the start screen in 1500ms. There is no defined number for loading all contacts. The problem is responsiveness afterwards: if you create too much workload, you will starve the main process. Our scrolling logic is implemented in JS, and once your work takes too long you get noticeable pauses during scrolling. So the goal, afaik, is 1.5 sec for the first contacts and then 50fps for scrolling. More aggressive contacts loading will hurt the 50fps for scrolling.

So the leo test is: start the contacts app, and once you see the first contacts, start scrolling. If you see pauses, you don't have 50fps :)

A huge problem with contacts is localized sorting and IndexedDB. IDB doesn't support localized sorting, so we have to fake it. Our approach is to store a sorted list of contacts for common queries in the parent and iterate over them. This makes it very complicated, but there is no other way until IDB supports localized sorting. Reuben did a lot of benchmarking with contacts and he can tell you why certain constants are what they are.
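The localized-sorting problem above (IDB key order is binary, so a locale-sorted list must be maintained outside the DB) can be illustrated with Intl.Collator. This is a generic sketch, not the actual ContactDB.jsm logic:

```javascript
// Sketch: build a locale-sorted index of contact names outside the database,
// since IndexedDB key ordering is code-point based, not locale-aware.
function buildSortedIndex(names, locale) {
  const collator = new Intl.Collator(locale);
  // Sort a copy; the real code would keep record ids alongside the keys so
  // the DB rows themselves never need re-sorting.
  return [...names].sort(collator.compare);
}

// With a collator, 'ärger' sorts with the 'a' entries; a plain code-point
// sort (what IDB key order gives you) would push it after 'zeta'.
const sorted = buildSortedIndex(['zeta', 'ärger', 'beta'], 'de');
// sorted: ['ärger', 'beta', 'zeta']
```

Keeping such a pre-sorted key list in the parent is essentially the "sorting array" approach described in the comments below.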
(In reply to Ben Kelly [:bkelly] from comment #25)
> 1) ~2000ms spent calling splice() 52 times in ContactDB.jsm sendChunk() to retrieve the next chunk of 20 contacts to return. See: http://mzl.la/17kY3DC

If you see the splice call in sendChunk, you are either measuring the wrong use case or we have a bug in our code :) That path is only used the first time we start the contacts app, when we have to sort all existing contacts. In the common case we have a cached "sorting array" and we use the sendChunk function without the splice.

> 2) ~1250ms spent in IPC sending the chunk from ContactService.jsm to ContactManager.js.

This is the IPC cost, and the time here depends on the overall workload on the phone.

> 3) About ~2050ms spent passing the contacts from ContactManager.js to contacts_list.js. This includes IPC costs to ask ContactService.jsm to send the next chunk.

Here we have to create the interface objects. We already use a "faster" way of creating interface objects, but our XPCOM environment takes some time to create secure objects we can hand out to content.

> I tried playing with the CHUNK_SIZE and CONTACTS_SENDMORE_MINIMUM. Increasing CHUNK_SIZE to 1001 in order to return all contacts in one trip did reduce overall time to load by ~1500ms, but the total was still ~3800ms. This savings came mainly from (3) and partly from (1). Interestingly, the IPC time increased. (Of course, this approach also increases the delay before the first contact above the fold is shown.)

Sending all contacts at once was our old approach, but it doesn't scale at all. That's why we had to introduce cursors.

> Changes to CONTACTS_SENDMORE_MINIMUM did not have any appreciable effect. Increasing the value does increase process parallelism, but I think in this case we are just paying more context-switching costs with little benefit. With the query cache it does not seem that the CPU is waiting on external I/O; the time is mostly spent copying within memory. Therefore reducing context switches increases overall performance. CONTACTS_SENDMORE_MINIMUM is already quite low, though, so this is already the case.

CONTACTS_SENDMORE_MINIMUM shouldn't affect the overall time. It just needs to be small enough that the child never has to wait for contacts from the parent. If we increase it we create bigger workloads that hurt responsiveness.
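As a rough illustration of the cursor mechanism discussed above — a hedged sketch only, not the actual ContactDB.jsm/ContactManager.js code; the constant names mirror the comments but the logic here is hypothetical:

```javascript
// Hypothetical sketch of chunked cursor delivery, assuming a parent-side
// cache of sorted contact IDs. CHUNK_SIZE and SENDMORE_MINIMUM echo the
// constants discussed above; the wiring is illustrative only.
const CHUNK_SIZE = 20;
const SENDMORE_MINIMUM = 5;

function makeCursor(sortedIds, sendToChild) {
  let pos = 0;
  return {
    // Parent side: send the next chunk; slice() avoids mutating the cache.
    sendChunk() {
      const chunk = sortedIds.slice(pos, pos + CHUNK_SIZE);
      pos += chunk.length;
      if (chunk.length) sendToChild(chunk);
      return chunk.length;
    },
    done() { return pos >= sortedIds.length; }
  };
}

// Child side: request more once the unconsumed buffer drops below the
// minimum, so rendering never stalls waiting on the next IPC round trip.
function makeConsumer(requestMore) {
  const buffer = [];
  return {
    receive(chunk) { buffer.push(...chunk); },
    consumeOne() {
      const id = buffer.shift();
      if (buffer.length < SENDMORE_MINIMUM) requestMore();
      return id;
    }
  };
}
```

The point of SENDMORE_MINIMUM in this shape is exactly Gregor's: it only needs to be large enough that `consumeOne()` never finds an empty buffer; raising it further just front-loads work onto the busy main process.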
Hi, I agree with Gregor; in order to have a quick response time when showing the first contacts we needed to go for the cursor solution.

I've also been playing with the POC, and we will need the 'generation' property that Jonas mentions, as well as to listen to the 'oncontactchange' API (even though that will eventually trigger a new 'generation'), since we can have two open applications working with the contacts (this is already the case with contacts within the dialer, and the contacts app). So far I checked that removing a contact leaves the list in a dirty state.

With that said, and not being a great fan of caching data, I must admit it is the perfect solution to meet partner requirements (by the way, I've seen requirements on first contacts displayed, but not on the whole list rendered). The tricky parts avoided in this POC could be a bit of a hell:

- Facebook sync: Right now we have a Facebook contacts synchronization that introduces changes that WON'T go through the Contacts API; that data is stored directly in a private DB, so we will need to keep the cache in sync not just with the Contacts API but with this database as well.
- Other sources of importing (not sync): these shouldn't be a problem, since we could regenerate the cache even on the fly.

I understand this is out of scope for the currently planned versions, but for future ones we should consider other kinds of synchronization and how they will affect this cache. I wouldn't like to do an incredible and amazing job only to drop it later because we modify contacts so much that the cache becomes unusable. But again, we should (me first, and the other contacts peers) pay more attention to this solution, as it seems to be working with our current requirements.

Thanks! F.
(In reply to Gregor Wagner [:gwagner] from comment #28)
> The leo goal is displaying the contacts for the start screen in 1500ms. There is no defined number for loading all contacts.
>
> The problem is responsiveness afterwards. If you create too much workload you will starve the main process. Our scrolling logic is implemented in JS and once your work takes too long you get noticeable pauses during scrolling. So the goal afaik is 1.5 sec for first contacts and then you need 50fps for scrolling. So more aggressive contacts loading will hurt the 50fps for scrolling.
> So the leo testing is starting the contacts app and once you see the first contacts start scrolling. If you see pauses you don't have 50fps :)

And jumping to arbitrary letter groups should also operate at 50fps? Currently I don't see a way to do this with the current ContactsAPI besides a cache like the one Vivien's POC proposes. As far as I can tell there is no way to prioritize loading the start of each letter group, so we basically have to load the entire contact list in order to quickly jump to W, X, etc.

So I think we mostly agree that the quickest path forward is an app-side contacts cache. Longer term, though, it seems we might be able to implement fast jumping without a cache if the ContactsAPI could be changed to:

a) Support queries that filter on the start of a name. This would let the contacts app ask for the first N contacts in each letter group at start.
b) Support returning a summary of the contact that only includes its ID, name, etc.

I guess my question is: is it worth investigating this approach now? From the sounds of it we need a solution ASAP, so the cache would be the higher priority.
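To make idea (a) concrete: given name-sorted contact summaries, taking just the first N entries of each letter group gives every jump target something to render before the full list has loaded. A minimal sketch — the helper name and the shape of the summaries are hypothetical, not part of ContactsAPI:

```javascript
// Hypothetical helper: from a name-sorted list of contact summaries,
// keep only the first N entries of each letter group. This is enough
// to render something under every alphabet jump target immediately;
// the rest of each group can stream in afterwards.
function firstPerGroup(sortedContacts, n, groupOf) {
  const counts = new Map();
  const head = [];
  for (const contact of sortedContacts) {
    const group = groupOf(contact.name);
    const seen = counts.get(group) || 0;
    if (seen < n) {
      head.push(contact);
      counts.set(group, seen + 1);
    }
  }
  return head;
}
```

In the app this would be driven by per-group queries against the API rather than a pre-sorted array, but the prioritization is the same: N contacts per group first, everything else second.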
(In reply to Gregor Wagner [:gwagner] from comment #29)
> If you see the splice call in sendChunk you measure the wrong use-case or we have a bug in our code :) This is only used the first time we start the contacts app and we have to sort all existing contacts. The common use-case is that we have a cached "sorting-array" and we use the sendChunk function without the splice.

You are of course correct. I misread my output and it's hitting the other branch. So (1) is ~2000ms to load contacts using store.get(). Sorry for my confusion.
(In reply to Ben Kelly [:bkelly] from comment #26) > Add a "starts with" filter to ContactsAPI. This would allow the app to load > the first 6 contacts in each letter group at start. This might be fast > enough to allow letter group scrolling "immediately" without caching. Of > course, the remaining contacts in each group would still need to be loaded > to allow normal scrolling after the letter jump. So after speaking with mlee and gwagner on irc I'm going to investigate this idea further before switching to caching. I realize many people would like to go straight there, but since the bug is not currently blocking I'd like to see if this ContactsAPI change might be a reasonable long term solution.
(In reply to Ben Kelly [:bkelly] from comment #32)
> You are of course correct. I misread my output and it's hitting the other branch. So (1) is ~2000ms to load contacts using store.get(). Sorry for my confusion.

Looks like this could still be ~2x faster using .slice() instead of .splice(). See: http://jsperf.com/splice-vs-new-null
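For reference, the difference is that splice() mutates the array (shifting every remaining element down) while slice() just copies the requested range. A minimal sketch of chunking with each — illustrative only, not the actual ContactDB.jsm code:

```javascript
// slice() copies the chunk and leaves the cached sorted array intact,
// so the same cache can serve the next cursor without re-sorting.
function chunkWithSlice(cache, pos, chunkSize) {
  return cache.slice(pos, pos + chunkSize); // O(chunkSize), no mutation
}

// splice() removes the chunk from the front, shifting everything after
// it down — O(n) per call, and the cache is consumed as a side effect.
function chunkWithSplice(cache, chunkSize) {
  return cache.splice(0, chunkSize);
}
```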
As a first step, I looked into optimizing the loading of contacts from IDB in the cached query path. See bug 872741 for a proof-of-concept which improves the ContactsDB load time by about 10%.
Depends on: 872741
Whiteboard: u=user c= → c= p=5
Step two is a proof-of-concept enhancement to ContactsAPI to support only retrieving the fields needed for our list. See bug 873873. The combination of patches from bug 872741 and bug 873873 provides a ~27% improvement in load times for 1000 contacts in the reference-workload-heavy case. Next, I'll investigate adding a "starts with" filter to ContactsAPI. This will allow us to then modify the contacts app to prioritize the loading of contacts at the start of each letter group. Again, the goal here is to see if we can enhance the API enough to allow us to avoid implementing a cache in every app.
Blocks: 873873
No longer blocks: 873873
Depends on: 873873
From talking with gwagner, a major obstacle to speeding up the ContactsAPI backend is support for locale-based sorting, so I am also going to include that in my investigation. See bug 871846. (Adding it as a dependency for cross-linking, although it's not strictly necessary if we go with app caching.) Also, note that the coming DataStore API is expected to provide a more standard app-caching approach. If contacts app caching is implemented, it should probably integrate with this new API.
Depends on: 871846
(In reply to Ben Kelly [:bkelly] from comment #36)
> Next, I'll investigate adding a "starts with" filter to ContactsAPI. This will allow us to then modify the contacts app to prioritize the loading of contacts at the start of each letter group.

Unfortunately, it looks like there are some thorny API issues to resolve before we can pursue this approach. While ContactsAPI does support a "starts with" behavior via the "contains" filterOp (see bug 874462), l10n makes it difficult to take advantage of this in the app.

The app list has groups A to Z regardless of locale. When a contact is loaded, its name is normalized in order to find the correct group to put the contact in. For example, a name starting with an umlaut will end up under the letter "U". This means multiple first letters map to each group, and the ContactsAPI currently does not allow filtering on multiple criteria like that. If we performed separate searches for each possible letter, we would have to re-implement sorting in the app and also pay for more round trips.

We could propose some kind of "search on normalized value" operation for ContactsAPI, but that seems problematic. For example, see some of the fallout from doing this for the tel field in bug 874501. There may be a higher-level API issue that needs to be discussed regarding data normalization and where/how it should be performed.

For these reasons I chose to punt on this idea for now.
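To illustrate the normalization problem described above — a hedged sketch; the real grouping logic lives in the contacts app and differs in detail:

```javascript
// Hypothetical sketch of app-side group normalization: strip diacritics
// via Unicode decomposition so e.g. "Über" files under group "U". Because
// many distinct first characters collapse onto one group, a single
// "starts with" filter in the API cannot express one group's membership.
function groupFor(name) {
  if (!name) return '#';
  const first = name[0]
    .normalize('NFD')                 // decompose, e.g. "Ü" -> "U" + U+0308
    .replace(/[\u0300-\u036f]/g, '')  // drop combining diacritical marks
    .toUpperCase();
  return first >= 'A' && first <= 'Z' ? first : '#';
}
```

Inverting this mapping (all first characters that normalize to "U") is exactly what a per-group API query would need, which is why separate searches per possible letter get expensive.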
No longer depends on: 874462
tl;dr: Incremental backend improvements are inadequate. Recommend waiting for bug 871445 to fully implement an app-side cache with DataStore, and for bug 865750 for the visibility_monitor improvements. Integrating the visibility_monitor without the cache may be a reasonable interim solution if DataStore is delayed.

To recap the problem: currently the app must load contacts from gecko and render them to the DOM before you can jump to the letter group for those contacts. If you attempt to jump before the group has loaded, you will see the behavior shown in the videos above. The baseline time for the contacts to fully load and render on mozilla-central and gaia/master is 12000ms.

The proof-of-concept provided by Vivien seems to provide two main improvements:

- an app-side cache of the contacts data
- visibility_monitor integration that reduces DOM construction costs at load

With this POC I measure the full load time to be 1200ms.

I also hacked together an adaptation of Vivien's POC that uses visibility_monitor without an app-side cache. This was quite crude since we do not yet have bug 865750. The full load time improved to 7700ms. I then added in my contact summary backend change from bug 873873, which reduced the full load time to 7000ms. Finally, adding the IndexedDB inList() support from bug 872741 reduced the load time to 6100ms. While this is about a 50% improvement over the baseline, it's still only an incremental improvement that doesn't solve the full problem.

Based on this I agree an app-side cache is necessary. (Sorry for taking the long road to join you all here!) From what I understand a new standard caching mechanism, DataStore, is currently being implemented; see bug 871445. It seems prudent to wait for this to materialize to avoid duplicating effort on cache sync mechanisms, etc. Also, as I mentioned above, to better integrate the visibility_monitor we need bug 865750, which is also being actively worked on.
If DataStore is viewed as a longer-term change, it seems like it would be possible to integrate the visibility_monitor first. It provides about 40% of the overall load improvement of the original POC and might be a reasonable interim solution.
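The core of any visibility-driven approach, whether visibility_monitor placeholders or a fully cursor-based list, is deciding which rows need real DOM for a given scroll position. With fixed-height rows that is pure arithmetic; a hypothetical helper (not the bug 865750 code) might look like:

```javascript
// Hypothetical helper: compute which fixed-height rows are visible
// (plus a prerender margin) for a given scroll position. A cursor-based
// list only materializes DOM for this range, keeping DOM size O(screen)
// rather than O(contacts) — even placeholder nodes have reflow and
// style-calculation costs once the list grows into the thousands.
function visibleRange(scrollTop, viewportHeight, rowHeight, total, margin) {
  const first = Math.max(0, Math.floor((scrollTop - margin) / rowHeight));
  const last = Math.min(
    total - 1,
    Math.ceil((scrollTop + viewportHeight + margin) / rowHeight) - 1
  );
  return { first, last };
}
```

On scroll, rows leaving the range can be released and rows entering it rendered, which is roughly the trade-off between the interim visibility_monitor solution and the later cursor-based rewrite discussed below.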
Depends on: 871445, 865750
No longer depends on: 871846, 872741, 873873
I'm happy to work on this once bug 865750 or bug 871445 lands, but unassigning for now since it's blocked. I will also be out next week.
Assignee: bkelly → nobody
Status: ASSIGNED → NEW
Keywords: perf
Whiteboard: c= p=5 → c= p=5 ,
Priority: -- → P2
Blocks: 896835
No longer blocks: 893145
Blocks: 914913
A concern about the progress of this bug was raised over in bug 914913 comment 44. I wanted to provide an update on where things currently stand, but it seemed a bit off topic over there, so I'm going to respond here and cross-link.

(In reply to Henrik Skupin (:whimboo) from comment #44)
> It's not that bad as the time needed for the initial search for a person. On my Unagi this takes about 15s! Sadly bug 865747 hasn't gotten that much traction. Not sure if it will, but I thought it's worth mentioning here, given that you state 450ms as too slow.

I agree the total load time is frustratingly slow as well. We clearly haven't made the progress we would like, but it's not for lack of attention.

First, we have been actively working to fix bug 865747. Mainly, we landed bug 865750 and bug 879299 to use the visibility monitor in contacts. This helped a bit by lowering the full load time from around 12 seconds for 1000 contacts on a buri to about 8 seconds. Clearly not adequate, but some improvement. If we can fix bug 918179 this should improve a bit more, although still probably not enough.

In hindsight I think the visibility monitor is probably not the most scalable solution. Due to the way the monitor works, it really wants some DOM built over the entire list to perform its visibility calculations on. This scales poorly because a large DOM, even one of just placeholder elements, can trigger reflows and drive up the cost of style calculations.

On the client side, I think Kevin's cursor-based approach of only maintaining DOM for the elements actually on screen is probably the best, most scalable approach. See bug 909935. Unfortunately, this is essentially a rewrite of large parts of the contacts app. Since the DOM is used for multiple purposes in contacts, it will touch more parts of the app than you would expect.
Even if we can make the client take zero time during load, we still need to wait for the contacts data to come out of the gecko API layer. As shown in bug 888666, this is currently slower than we would like, taking 3.5 seconds to load 1000 contacts on a buri. Currently, features required by the app (name normalization, etc.) that are not directly supported by the API force the app to load all of the contacts instead of streaming from an API cursor. Kevin has bugs filed against those issues at bug 909454, bug 909948, and bug 912776. Another factor to consider is that many of these changes would require modifying the webapi.

Long term, we think the DataStore API may address most of the API issues by allowing the app to work directly with IDB. See bug 871445.

To try to move things forward we met as a group during our Oslo work week. Notes here: https://etherpad.mozilla.org/qwiresWpj9

Our plan coming out of that meeting is to:

1) Continue making incremental changes to the contacts API where appropriate.
2) Trial-run DataStore with FB contacts. See bug 871445 and bug 918827.
3) Pursue fixing bug 871846, which we believe is the root problem slowing down the contacts app. It will be needed even if apps are using IDB directly.
4) Continue to prototype and investigate app improvements such as the cursor-based solution.
Ben - great response. I think your four points are probably the best course of action for the time being. A cache may also be a viable approach; I was actually trying to do something in between what we have now and a full streaming-based list, which is similar to this. As I've been looking into the initial contacts rendering a lot over the last few weeks, I'd like to take this bug if no one objects.
Assignee: nobody → kgrandon
Status: NEW → ASSIGNED
Blocks: 894751
No longer blocks: 914913
Firefox OS is not being worked on
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX