Closed Bug 823582 Opened 12 years ago Closed 3 years ago

Cache empty search for awesomescreen

Categories

(Firefox for Android Graveyard :: General, defect)

Platform: ARM / Android
Type: defect
Priority: Not set
Severity: normal
Tracking

(Not tracked)

RESOLVED INCOMPLETE

People

(Reporter: snorp, Unassigned)

Details

Attachments

(1 file)

Right now it takes some time for the initial results to show up when the awesomescreen first appears. We should cache those results so we don't actually have to perform any query, which should make things a bit snappier.
See bug 790277 and bug 721104 where we've played with this before.
(In reply to James Willcox (:snorp) (jwillcox@mozilla.com) from comment #0)
> Right now it takes some time for the initial results to show up when the
> awesomescreen first appears. We should cache those results so we don't
> actually have to perform any query, which should make things a bit snappier.

Do you still feel like things could be more snappy? We have been making other changes, even recently in Nightly, that affect the initial Awesomebar speed.
(In reply to Mark Finkle (:mfinkle) from comment #2)
> (In reply to James Willcox (:snorp) (jwillcox@mozilla.com) from comment #0)
> > Right now it takes some time for the initial results to show up when the
> > awesomescreen first appears. We should cache those results so we don't
> > actually have to perform any query, which should make things a bit snappier.
> 
> Do you still feel like things could be more snappy? We have been making
> other changes, even recently in Nightly, that affect the initial Awesomebar
> speed.

I still think it would be worth caching the top sites query results on disk, with some expiration policy, to avoid running the empty query every time on startup and when the awesome screen opens. That would especially improve performance on lower-end phones.
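The disk-cache-with-expiration idea could look roughly like the sketch below. This is a hedged illustration, not Fennec code: the class and its names are hypothetical, and the cached payload is treated as an opaque string.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch: keep the serialized top-sites result on disk and
// only re-run the expensive SQL query once the cache is older than maxAge.
public final class TopSitesDiskCache {
    private final Path cacheFile;
    private final Duration maxAge;

    public TopSitesDiskCache(Path cacheFile, Duration maxAge) {
        this.cacheFile = cacheFile;
        this.maxAge = maxAge;
    }

    // Returns the cached payload, or null if the cache is missing or expired.
    public String readIfFresh() throws IOException {
        if (!Files.exists(cacheFile)) {
            return null;
        }
        Instant modified = Files.getLastModifiedTime(cacheFile).toInstant();
        if (Instant.now().isAfter(modified.plus(maxAge))) {
            return null; // expired: caller should re-run the real query
        }
        return new String(Files.readAllBytes(cacheFile));
    }

    // Called after the real query runs, to refresh the cache.
    public void write(String payload) throws IOException {
        Files.write(cacheFile, payload.getBytes());
    }
}
```

The expiration check here is just the file's mtime; a real implementation would also need explicit invalidation when the user pins or removes a site, as discussed later in this bug.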
Caching in memory is fast but hogs memory. Caching on disk is kinda like reading from the DB anyway.

Could we do a hybrid approach? Right now we use a complicated SQL query to pull from the DB. Once we do this, could we create a much simpler SQL query containing the known IDs of the rows?

I mean initial SQL is complicated, but subsequent SQL is not:

SELECT <whatever the join is> WHERE history.id IN (<list of id's we already have>)
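The "simpler follow-up query" idea above amounts to building a parameterized `IN` clause from the cached row IDs. A minimal sketch, with illustrative names (the real table/column names in Fennec's DB layer may differ):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical helper: once the complex query has produced a result set,
// later refreshes can re-select exactly those rows by id.
public final class CachedIdsQuery {
    // Builds a parameterized selection like "history.id IN (?, ?, ?)".
    public static String buildSelection(List<Long> cachedIds) {
        String placeholders = cachedIds.stream()
                .map(id -> "?")
                .collect(Collectors.joining(", "));
        return "history.id IN (" + placeholders + ")";
    }

    // One argument string per placeholder, in the same order.
    public static String[] buildSelectionArgs(List<Long> cachedIds) {
        return cachedIds.stream().map(String::valueOf).toArray(String[]::new);
    }
}
```

Using `?` placeholders rather than inlining the IDs keeps the statement cacheable by SQLite and avoids any injection concerns, at the cost of SQLite's limit on the number of bound variables for very long ID lists.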
(In reply to Mark Finkle (:mfinkle) from comment #4)
> Caching in memory is fast but hogs memory. Caching on disk is kinda like
> reading from the DB anyway.

Well, my assumption here is that reading a text file from disk is definitely not the same as running a rather complex query on a DB.

> Could we do a hybrid approach? Right now we use a complicated SQL query to
> pull from DB. Once we do this, could we create a much simpler SQL query
> containing the known IDs of the rows?
> 
> I mean initial SQL is complicated, but subsequent SQL is not:
> 
> SELECT <whatever the join is> WHERE history.id IN (<list of id's we already
> have>)

Interesting idea. However:

1. This doesn't improve our first-run performance, though (which I think is our worst case right now).
2. We'd still need a table join to fetch bookmarks *and* history entries while still handling duplicates. So, the query would not be that simple.
Attached patch WIP (Splinter Review)
I was playing with an approach to this the other day and thought I'd save my place. This caches the query to disk (and in an in-memory MatrixCursor, not sure we want that memory hit...).

Subsequent queries always return the MatrixCursor (and then update it in the background). That means this query is always one run behind, which, with a large db, shouldn't be bad. With a very small db it's noticeable...

I'm sure this could be optimized a lot, but it was a simple starting point to see if we could improve things without much work.
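The pattern the WIP patch describes, returning the cached snapshot immediately and refreshing it in the background so the next caller sees newer data, can be sketched generically as below. This is a hedged illustration only: the names are hypothetical, and the real patch caches an Android MatrixCursor rather than a plain value.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Hypothetical "stale-while-revalidate" cache: get() returns the last
// snapshot (possibly one run behind, as the patch notes) and kicks off
// a background refresh so subsequent calls observe fresher data.
public final class StaleWhileRevalidateCache<T> {
    private final Supplier<T> query; // the expensive DB query
    private final ExecutorService background = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true); // don't keep the process alive just for refreshes
        return t;
    });
    private volatile T snapshot;

    public StaleWhileRevalidateCache(Supplier<T> query, T initial) {
        this.query = query;
        this.snapshot = initial;
    }

    public T get() {
        T current = snapshot;
        background.execute(() -> snapshot = query.get());
        return current;
    }
}
```

This also makes the "one run behind" trade-off explicit: correctness depends on callers tolerating slightly stale results, which is why the small-db case (where staleness is visible) feels worse.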
(In reply to Lucas Rocha (:lucasr) from comment #5)
> (In reply to Mark Finkle (:mfinkle) from comment #4)
> > Caching in memory is fast but hogs memory. Caching on disk is kinda like
> > reading from the DB anyway.
> 
> Well, my assumption here is that reading a text file from disk is definitely
> not the same as running a rather complex query on a DB.

This is probably a good assumption. I'd also like to know if reading from a simple file uses less power than reading via SQLite, which does spike the power.
The 'caching' here has two parts:
1. Caching across the multiple times we display about:home during a single browsing session
2. Caching across startups, i.e. avoiding the query altogether on startup

For 1, I think the best approach is to move the TopSites loader to be bound to BrowserApp instead of the fragment. This way we can just let the Loader control the life cycle of the Cursor for us (plus, auto-refreshing with no extra code). This would mean we don't re-run the query every time we display the top sites panel.

As for 2, I think we need to be very careful with how and when we expire the cached state on disk to avoid unexpected behaviour especially when interacting with pinned sites (and the upcoming 'tiles').

I think 1 can potentially give us some great responsiveness and power saving improvements. Yes, this would involve increasing our memory footprint a bit, but we need to know how much memory this cursor would take.
We have completed the launch of our new Firefox on Android. Development of the new versions uses GitHub for issue tracking. If the bug report still reproduces in a current version of [Firefox on Android nightly](https://play.google.com/store/apps/details?id=org.mozilla.fenix), an issue can be reported at the [Fenix GitHub project](https://github.com/mozilla-mobile/fenix/). If you want to discuss your report, please use [Mozilla's chat](https://wiki.mozilla.org/Matrix#Connect_to_Matrix) server https://chat.mozilla.org and join the [#fenix](https://chat.mozilla.org/#/room/#fenix:mozilla.org) channel.
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → INCOMPLETE
Product: Firefox for Android → Firefox for Android Graveyard