Closed Bug 997420 Opened 10 years ago Closed 7 years ago

[Messages] Store a deleted threads list in an indexedDB to improve message deletion performance

Categories

(Firefox OS Graveyard :: Gaia::SMS, defect, P2)

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: drs, Unassigned)

References

Details

(Keywords: perf, Whiteboard: [c=progress p= s= u=])

Blocks: 893838
Currently, deleting messages and threads is fairly slow. Deleting a single message, measured from confirming the deletion to being able to scroll around in the thread(s) list again, takes about 3-5 seconds depending on the device. Deleting an entire reference workload can take a minute or two.

We have investigated ways to reduce this time, such as cleaning up the CSS of the app (bug 996776), cleaning up StickyHeader and other minor things (bug 994818), and doing profiling to identify other possible sources of this slowness (bug 893838). None of them has given us any serious benefit, other than the profiling. It turns out that most of our time is spent on the main B2G thread doing the deletion of the messages, plus the IPC overhead associated with it. Suffice it to say, once we wander in here, we're going to be spending a lot of engineering time, probably for little benefit.

(In reply to Rick Waldron [:rwaldron] from bug 893838 comment 26)
> Corey and I just discussed the possibility of storing a "deleted threads"
> list in IndexedDB (along with a last updated timestamp) which will allow us
> to prevent threads from re-rendering when the app is closed and re-opened.
> If a previously deleted thread re-appears and hasn't been updated since the
> delete flag was set, don't render it and queue a task to delete it (again)
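
Roughly, I imagine the extra store and the render-time check looking something like this. This is a sketch only: the database/store names and the openDeletedThreadsDb/markThreadDeleted/shouldRenderThread helpers are made up for illustration, not existing Gaia code.

// Sketch: a tiny IndexedDB store that remembers which threads the user deleted.
const DB_NAME = 'sms-deleted-threads'; // hypothetical name
const STORE = 'deletedThreads';

function openDeletedThreadsDb() {
  return new Promise(function(resolve, reject) {
    var req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = function() {
      // Keyed by threadId; each record remembers when the user deleted it.
      req.result.createObjectStore(STORE, { keyPath: 'threadId' });
    };
    req.onsuccess = function() { resolve(req.result); };
    req.onerror = function() { reject(req.error); };
  });
}

function markThreadDeleted(db, threadId) {
  return new Promise(function(resolve, reject) {
    var tx = db.transaction(STORE, 'readwrite');
    tx.objectStore(STORE).put({ threadId: threadId, deletedAt: Date.now() });
    tx.oncomplete = function() { resolve(); };
    tx.onerror = function() { reject(tx.error); };
  });
}

// At render time: skip a thread that is flagged as deleted and hasn't been
// updated since the flag was set, and queue the real deletion again.
function shouldRenderThread(thread, deletedRecord, requeueDeletion) {
  if (deletedRecord && thread.timestamp <= deletedRecord.deletedAt) {
    requeueDeletion(thread.id);
    return false;
  }
  return true;
}

The thread list would still be rebuilt from the messaging cursor as usual; the extra store only filters out entries the user already deleted and re-queues their real deletion.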

In my opinion, this is pretty much our best idea at this point. The benefit is that deleting the small and medium reference workloads would probably take 5-10 seconds instead of a minute or two, at least as perceived by the user. Deleting a single message should take about a second. The drawbacks are the following:

* The phone will still be doing work in the background for a minute or two, and I believe it will be perceptibly slower during this time.
* This will add complexity, since we'll be wrestling with two databases instead of one.
* If the SMS app is killed, the thread deletion will stop, and we'll have to start it again when the user next opens it. In this case, it'll be slower than normal the next time it's used, and if any other app reads the database, some of the deleted messages will still be there.

ni? Omega to consider UX for this. I'm of the opinion that we shouldn't provide any visual indication to the user. I believe the only benefit the user gets from being notified of this work is that the weaker performance becomes more understandable. But I also think that people don't delete their entire message database very often, and if they're only deleting a single message or thread, they should barely notice the background task.
Flags: needinfo?(ofeng)
I don't think we should focus on the "delete the entire database" case, but rather on "delete 1 to 10 threads" and "delete 1 to 10 messages in 1 thread".
When deleting messages, the UI should reflect the result immediately. Doing the real database deletion in the background is acceptable, and it's also fine if that deletion is paused because the Messages app is killed.
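
For reference, that flow could look roughly like the sketch below. It assumes the pending deletions live in the extra IndexedDB store from comment 0; getPendingDeletions, getMessageIdsForThread, markDone, markThreadDeleted and the ui helper are hypothetical, while navigator.mozMobileMessage.delete() is the actual B2G messaging API.

// Optimistic UI removal + resumable background deletion -- illustrative only.
function deleteThreadsOptimistically(threadIds, ui, store) {
  // 1. Reflect the deletion in the UI immediately.
  threadIds.forEach(function(id) { ui.removeThreadFromList(id); });

  // 2. Persist the intent so a kill/restart can resume the work later.
  return Promise.all(threadIds.map(function(id) {
    return store.markThreadDeleted(id);
  }))
  // 3. Do the slow backend deletion in the background.
  .then(function() { return drainDeletionQueue(store); });
}

function drainDeletionQueue(store) {
  return store.getPendingDeletions().then(function(pending) {
    // Process pending threads one after another to keep the load low.
    return pending.reduce(function(chain, entry) {
      return chain
        .then(function() { return store.getMessageIdsForThread(entry.threadId); })
        .then(function(ids) { return Promise.all(ids.map(deleteMessage)); })
        .then(function() { return store.markDone(entry.threadId); });
    }, Promise.resolve());
  });
}

function deleteMessage(id) {
  return new Promise(function(resolve, reject) {
    var req = navigator.mozMobileMessage.delete(id);
    req.onsuccess = function() { resolve(); };
    req.onerror = function() { reject(req.error); };
  });
}

// Running drainDeletionQueue(store) once at startup covers the case where a
// previous deletion was interrupted by the app being killed.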
Flags: needinfo?(ofeng)
Keywords: perf
Priority: -- → P2
Whiteboard: [c=progres p= s= u=]
Whiteboard: [c=progres p= s= u=] → [c=progress p= s= u=]
The observation is based on a Flame device running version 2.2.

Time taken to delete 500 messages:
Iteration 1 (ms): 5440
Iteration 2 (ms): 5550
Iteration 3 (ms): 5750
Thanks.

We'll likely not do anything more here, but we'll definitely work on a DB-based version of the SMS app in the coming months in the context of the NGA. So maybe this bug will be resolved as part of that work.
Mass closing of Gaia::SMS bugs. End of an era :(
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX