Bug 616265 — Short-term efficiency improvements for large downloads on mobile
Status: CLOSED (RESOLVED FIXED)
Opened 14 years ago · Closed 14 years ago
Product: Firefox :: Sync (defect)
People: Reporter: rnewman, Assignee: rnewman
Whiteboard: [has patch][has reviews]
Attachments: 2 files, 5 obsolete files
patch, 13.81 KB — mconnor: review+
patch, 16.54 KB — mconnor: review+
(Forked from Bug 615284.)
First syncs on Fennec can be problematic: too much memory is consumed, and too many GET requests are needed. OOM kills the process, and the first sync never completes.
(It completes on devices with lots of memory, but takes a long time.)
There are long-term improvements that we'd like to do for this; I'm using Bug 615284 to track these.
We're also waiting for Places work to land, which will reduce memory usage and allow us to back out some of these changes.
This bug is for the following short-term work:
* Bump batch size: reduce number of requests needed for a sync.
* Cap the number of records fetched per sync (to 5,000 for now).
* Reduce the scope of a Places batch to that of a mobile batched fetch, which will reduce memory usage.
Updated • 14 years ago
tracking-fennec: --- → 2.0b3+
Assignee — Comment 1 • 14 years ago
13:32:11 <@mconnor> <sdwilsh> mconnor: so, you could force a flush by getting a handle to nsPlacesDBFlush, QIing it to nsINavBookmarkObserver, and then calling onEndUpdateBatch, and then again onBeginUpdateBatch
blocking2.0: --- → ?
tracking-fennec: 2.0b3+ → ?
Updated • 14 years ago
blocking2.0: ? → beta8+
tracking-fennec: ? → 2.0b3+
Assignee — Comment 2 • 14 years ago
First cut at this. I verified that batching takes place, and doesn't error on either 3.6.12 XULRunner or Minefield xpcshell-tests, but I don't know if it does any good yet.
Thoughts welcome.
Attachment #494859 - Flags: review?(mconnor)
Comment 3 • 14 years ago
Comment on attachment 494859 [details] [diff] [review]
Add batching to History engine.
Some drive by feedback :)
>+var historyURL = "resource://gre/components/nsPlacesDBFlush.js";
>+var places = {};
>+var loader = Components.classes["@mozilla.org/moz/jssubscript-loader;1"]
>+ .getService(Components.interfaces.mozIJSSubScriptLoader);
>+loader.loadSubScript(historyURL, places);
>+
>+
...
>- _sync: Utils.batchSync("History", SyncEngine),
>+ _flusher: null,
>+ _flushCount: 0,
>+ _prepareFlusher: function _prepareFlusher() {
>+ // Apparently this is supposed to be QIed as an nsINavBookmarkObserver,
>+ // but I don't see a way to get hold of the right class object to use
>+ // createInstance. This way appears to work...
>+ let flush = new places.nsPlacesDBFlush();
Yuck :) You want
flush = Components.classesByID[Components.ID("c1751cfc-e8f1-4ade-b0bb-f74edfb8ef6a")]
.getService(Ci.nsINavBookmarkObserver);
(By QIing sdwilsh meant flush = flush.QueryInterface(...), but not necessary here with the getService call.)
>diff --git a/services/sync/tests/unit/test_syncengine_sync.js b/services/sync/tests/unit/test_syncengine_sync.js
I think these tests should go into a separate test_history_engine.js file.
Assignee — Comment 4 • 14 years ago
Philipp filled in a gap in my knowledge, so here's a better patch.
Attachment #494859 - Attachment is obsolete: true
Attachment #494871 - Flags: review?(mconnor)
Attachment #494859 - Flags: review?(mconnor)
Assignee — Comment 5 • 14 years ago
… and with tests appropriately rejiggered. 90 passes!
Attachment #494871 - Attachment is obsolete: true
Attachment #494882 - Flags: review?(mconnor)
Attachment #494871 - Flags: review?(mconnor)
Assignee — Comment 6 • 14 years ago
Re bumping the batch size:
The batch size is currently 50, with a note that it's limited because of URL length restrictions. GUIDs can be up to 64 chars long.
Philipp's bookmark GUID work will drop that to 12 chars, which should allow us to go north of 200.
For mconnor's 17,000 history items, that would drop us from 340 GETs to 85: a far more pleasing number. (Adding a cap of 5,000 records would put the max number of fetches at 25.)
The payload for each request would, of course, be four times the size...
mconnor, any thoughts on this?
The patch is trivial, so nothing attached.
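A quick sketch of the arithmetic in this comment (the helper is hypothetical, not from any patch in this bug):

```javascript
// Hypothetical helper: how many GETs a first sync needs, given the
// record count, the per-request batch size, and an optional cap on
// total records fetched (Infinity means no cap).
function getsPerSync(records, batchSize, downloadLimit = Infinity) {
  const fetched = Math.min(records, downloadLimit);
  return Math.ceil(fetched / batchSize);
}

// mconnor's 17,000 history items:
getsPerSync(17000, 50);          // 340 GETs at the current batch size
getsPerSync(17000, 200);         // 85 GETs once short GUIDs allow batches of 200
getsPerSync(17000, 200, 5000);   // 25 GETs with the 5,000-record cap
```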
Assignee — Comment 7 • 14 years ago
Attachment #494882 - Attachment is obsolete: true
Attachment #494887 - Flags: review?(mconnor)
Attachment #494882 - Flags: review?(mconnor)
Assignee — Comment 8 • 14 years ago
Add a downloadLimit property to engines. History is set to 5000.
Also lift the magic constant for history upload into constants.js.
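A minimal sketch of the shape this comment describes; the names are guesses, not the actual fx-sync code:

```javascript
// Would live in constants.js per the comment above (name is hypothetical).
const MAX_HISTORY_DOWNLOAD = 5000;

// Base engine: no cap by default, so desktop engines fetch everything.
const SyncEngine = {
  downloadLimit: Infinity,
};

// The history engine overrides the property with its 5,000-record cap.
const HistoryEngine = Object.create(SyncEngine);
HistoryEngine.downloadLimit = MAX_HISTORY_DOWNLOAD;
```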
Attachment #494897 - Flags: review?(philipp)
Assignee — Updated • 14 years ago
Attachment #494897 - Flags: review?(philipp) → review?(mconnor)
Assignee — Comment 9 • 14 years ago
More sophisticated handling (e.g., server-side sort), applied only to mobile devices -- we don't want the hit of sorting for every user.
Test.
Attachment #494897 - Attachment is obsolete: true
Attachment #494921 - Flags: review?(mconnor)
Attachment #494897 - Flags: review?(mconnor)
Comment 10 • 14 years ago
Comment on attachment 494887 [details] [diff] [review]
Enable test_history_engine everywhere.
Please make sure to file a bug on not doing this extra work once temp tables are gone. We already have the _hasTempTables getter we could use, but that's always true for all current versions.
Attachment #494887 - Flags: review?(mconnor) → review+
Assignee — Comment 11 • 14 years ago
(In reply to comment #10)
> Please make sure to file a bug on not doing this extra work once temp tables
> are gone. We already have the _hasTempTables getter we could use, but that's
> always true for all current versions.
Bug 616556.
Re evaluating this change: I plan to migrate my 30,000-history profile to storage version 4 this afternoon, then try three different Fennec first syncs (control, then one for each patch). That should give us a good idea of how much improvement we can expect, if any.
I will report back.
Assignee — Comment 12 • 14 years ago
Try build kicked off with batching patch, based on current sync-central repo:
http://tbpl.mozilla.org/?tree=MozillaTry&rev=62f9120345ff
Comment 13 • 14 years ago
Comment on attachment 494921 [details] [diff] [review]
Add download limit for history, lift magic number. v2
>+ // Sort and limit so that on mobile we only get the last X records.
>+ guidColl.limit = fetchLimit;
>+ guidColl.sort = guidSort;
You can just set this directly to "index" here, since this is a mobile-only path (unless handled.length == Infinity), and save the previous step. We really don't need guidSort except in this specific case.
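A hypothetical sketch of the reviewer's suggestion, with the limit-and-sort applied only when a finite cap exists (i.e., the mobile path) and the sort set directly to "index"; the function name and collection shape are illustrative, not the patch's code:

```javascript
// Only cap and sort the GUID collection when a finite download limit
// applies; desktop (fetchLimit === Infinity) leaves both unset.
function applyDownloadLimit(guidColl, fetchLimit) {
  if (fetchLimit < Infinity) {
    // Server-side sort plus limit: fetch only the newest records.
    guidColl.limit = fetchLimit;
    guidColl.sort = "index";
  }
  return guidColl;
}
```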
Attachment #494921 - Flags: review?(mconnor) → review+
Updated • 14 years ago
Whiteboard: [has patch][has reviews]
Assignee — Comment 14 • 14 years ago
dougt pointed me to this method for soft-limiting the sqlite heap:
http://people.mozilla.org/~dougt/sqlite_soft_limit_heap
We'd need a JS equivalent, of course, but it's a possibility for the future.
Assignee — Comment 15 • 14 years ago
My current running sync has mapped 218,103,808 bytes. Memory in use is no lower than in the Fennec build that preceded it. (See Bug 615167.)
Sync is still running.
It looks rather like flushing the places cache does almost nothing for our memory usage.
I forgot (again) to turn logging on before starting this sync, so I'll do another to verify that flushes are occurring… but if they are, we're going to need to look elsewhere for the memory hungry culprit.
Comment 16 • 14 years ago
When you rerun the test, please make sure you use a clean profile, not just Reset Sync. It makes a difference whether Sync has been used on a profile before and then reset or whether it's a brand new profile, mostly because there are no existing places annotations to reuse so those have to be created. This could make a difference.
Assignee — Comment 17 • 14 years ago
(In reply to comment #16)
> When you rerun the test, please make sure you use a clean profile, not just
> Reset Sync. It makes a difference whether Sync has been used on a profile
> before and then reset or whether it's a brand new profile, mostly because there
> are no existing places annotations to reuse so those have to be created. This
> could make a difference.
I actually used the Android app manager to wipe all data and uninstall. Clean as a whistle. (Had to type everything in again, too, which makes me beg for J-PAKE.)
Comment 18 • 14 years ago
We should get these landed in fx-sync, and punt the batching entirely, since temp tables are going away post b8.
Assignee — Comment 19 • 14 years ago
(In reply to comment #18)
> We should get these landed in fx-sync, and punt the batching entirely, since
> temp tables are going away post b8.
Do you mean "punt the flushing entirely" -- i.e., I should land the history limit?
Or "punt the batch size increase" -- i.e., I should land these two changes?
Or something else?
Comment 20 • 14 years ago
punt the flushing entirely. We should still do the history limit, IMO, but let's actually leave the batch size until after b8. Split that to a new bug?
Assignee — Comment 21 • 14 years ago
I reworked the approved patch to remove flushing from the test (and elsewhere, of course).
Now just includes the download limit and some tidying.
Let me know and I'll drop this into hg.
Attachment #494921 - Attachment is obsolete: true
Attachment #495709 - Flags: review?(mconnor)
Updated • 14 years ago
Attachment #495709 - Flags: review?(mconnor) → review+
Assignee — Comment 22 • 14 years ago
Landed history limit:
http://hg.mozilla.org/services/fx-sync/rev/095b98e852f8
Status: ASSIGNED → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Assignee — Comment 23 • 14 years ago
Bug 617210 filed to bump download limit.
Updated • 6 years ago
Component: Firefox Sync: Backend → Sync
Product: Cloud Services → Firefox