[meta] Incremental two-phase bookmark sync for Android Sync

Status: NEW
Assignee: Unassigned
Product: Android Background Services
Component: Android Sync
Priority: P3
Severity: normal
Reported: 5 years ago
Last modified: 3 months ago

People

(Reporter: rnewman, Unassigned, Mentored)

Tracking

(Depends on: 2 bugs, Blocks: 5 bugs, {meta})

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] 2)

(Reporter)

Description

5 years ago
See Bug 769476 Comment 18, Bug 812348, Bug 730142.

Sync has an intrinsic design issue: records are applied as they are downloaded. That's fine when records are independent (like history items), but it's bad if records are interrelated (like a bookmark tree).

This design issue is the direct cause of the pages and pages of fragile reparenting and repositioning edge-case code that we maintain.

If a download is incomplete for any reason -- missing server data, interrupted connection, Android termination, client error, or a hard limit being hit -- then the client's local store can be left in an inconsistent state.

The client has a choice as it syncs: "fix up" structure *in the canonical local store* as it proceeds, or allow itself to end up in an inconsistent state. Neither option is correct.

We should implement a scheme which incrementally maintains a local structured replica of server state, only syncing locally when that replica is determined to be up-to-date and consistent. As an optimization, subtrees of the bookmark data can be synced when they are 'complete', even if the full tree has not yet been downloaded.
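As a rough illustration only (all names are hypothetical, not a proposed schema), such a replica could start as a single durable SQLite table plus a completeness rule for the subtree optimization:

// Hypothetical DDL for a durable server-side replica ("buffer") of bookmark
// records. Table and column names are illustrative, not an actual Android Sync schema.
public class BookmarksBufferSchema {
  public static final String CREATE_BUFFER_TABLE =
      "CREATE TABLE IF NOT EXISTS bookmarks_buffer (" +
      "  guid            TEXT PRIMARY KEY NOT NULL," +  // record GUID from the server
      "  parent_guid     TEXT," +                       // structural link to the parent folder
      "  server_modified INTEGER NOT NULL," +           // server timestamp, for incremental fetches
      "  is_deleted      INTEGER NOT NULL DEFAULT 0," + // tombstones are buffered, not applied eagerly
      "  record_type     TEXT NOT NULL," +              // bookmark, folder, separator, query, ...
      "  payload         TEXT NOT NULL" +               // decrypted record JSON
      ")";

  // Completeness rule for the subtree optimization: a folder's subtree is
  // 'complete' only when every child GUID it lists is itself present in the
  // buffer (recursively); only complete subtrees are candidates for early
  // application to the local store.
}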

It's expensive -- both in terms of device storage and developer time -- but it will finally allow us to handle more data than we can process in a single five-minute Android sync, to incrementally process individual folders, and also to perform more accurate reconciling by holding an approximate server changelog on disk.

Syncing will be multi-phase: download and store server changes, then (directly or indirectly) compose and reconcile "delta streams" from both server and client. The output of this reconciling is a pair of consistent, reconciled changesets that can be applied to each repository.
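To make the shape concrete, here is a minimal sketch of that reconcile step; every name below is hypothetical rather than an existing Android Sync class:

import java.util.List;
import java.util.Map;

// An illustrative shape for the reconcile step. Inputs are the two delta
// streams; the output is a pair of consistent changesets, one per repository.
public interface DeltaReconciler {
  final class Change {
    public String guid;
    public Map<String, Object> newFields; // the fields this change sets (structure, title, URL, ...)
  }

  final class Changesets {
    public List<Change> applyToLocal;     // what the local store must do to converge
    public List<Change> uploadToServer;   // what the server must receive to converge
  }

  // Both inputs must already be complete and internally consistent; the output
  // pair is then applied as a unit to each repository.
  Changesets reconcile(List<Change> serverDeltas, List<Change> localDeltas);
}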

This would all be much easier if we took a dependency on Clojure's persistent data structures; that's a natural design fit for this problem.

Note for future me or future followers in my footsteps: this is a very thorny problem. There are incremental kinda-fixes that will mitigate chunks of the current known issues (e.g., use straight-up download batching, disable upload until no remaining batches exist), but these will introduce consistency issues, have blind spots, and are otherwise Hacky As Hell®. I think it's time to do this right.
(Reporter)

Updated

5 years ago
Blocks: 745408
Whiteboard: [sync:bookmarks]
(Reporter)

Updated

5 years ago
Blocks: 655722
(Reporter)

Updated

5 years ago
Blocks: 655728
(Reporter)

Updated

5 years ago
Blocks: 647605
(Reporter)

Comment 1

5 years ago
Additionally relevant: syncing bookmarks is really, really inefficient. Bug 655722 is one reason why.

"Instant sync" means that for some users, *every time they bookmark a page* their browser hangs for ten seconds while Sync hammers the places DB, perhaps triggering GCs and being generally awful.

Even if we don't do it "instantly", we're causing harmful stalls during browsing.

In the course of addressing the correctness issues here, we have an opportunity to be much more efficient about persisting state, which will reduce the amount of work we need to do to construct structured representations in memory.
Whiteboard: [sync:bookmarks] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale]
(Assignee)

Updated

4 years ago
Component: Android Sync → Android Sync
Product: Mozilla Services → Android Background Services
(Reporter)

Comment 2

4 years ago
I think I just found a user who experienced a different variant of this underlying issue.

The symptom: bookmarks in a different order on the phone than on the desktop, but both stable (no change in two weeks).

Here's what I wrote:

---
• You set up your two clients close together in time.
• Your desktop had uploaded only some of its bookmarks. (With 3,000 that's very likely.)
• Your phone downloaded and applied those. It corrected them as best it could to achieve consistency — gaps in numeric ordering due to a partial write, for example, would be fixed by compacting (a sketch of that compaction follows this list).
• Your phone uploaded those corrected records and finished its first sync.
• Your desktop finished uploading, and fast-forwarded its last-sync timestamp accordingly.
• And so on.
---
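For illustration only, "compacting" might look like this hypothetical helper (not the actual Android Sync code):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A partial write can leave a folder's children with sparse sort indices
// (e.g., 0, 1, 5, 9). Compacting rewrites them as 0, 1, 2, 3, preserving
// the existing relative order.
public class PositionCompactor {
  public static Map<String, Integer> compact(Map<String, Integer> guidToPosition) {
    List<Map.Entry<String, Integer>> entries = new ArrayList<>(guidToPosition.entrySet());
    entries.sort((a, b) -> Integer.compare(a.getValue(), b.getValue()));
    Map<String, Integer> compacted = new LinkedHashMap<>();
    int next = 0;
    for (Map.Entry<String, Integer> entry : entries) {
      compacted.put(entry.getKey(), next++);
    }
    return compacted;
  }
}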

This is a special case of the generic "you're screwed" problem, with Sync's time-gap flaw actually saving his desktop bookmarks from propagated corruption.
(Reporter)

Updated

4 years ago
Duplicate of this bug: 961503
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?]
Priority: P1 → --
(Reporter)

Updated

3 years ago
Blocks: 1085128

Updated

3 years ago
Duplicate of this bug: 1125603

Updated

a year ago
Blocks: 1288099
Hey Richard,

This bug sounds like the beginnings of what you implemented on iOS for the bookmark syncing engine (buffer/mirror/local tables). We don't have this two-phase mechanism for history/logins/tabs/etc., though. Is this something you'd like to see implemented on iOS as well, or is it not an issue since problems are mostly related to bookmarks?
Flags: needinfo?(rnewman)
(Reporter)

Comment 6

a year ago
Speaking to iOS:

Tabs isn't necessary — each record stands alone, and we don't do multiple fetches to grab the records, so either the GET succeeds or it fails. And the records are very independent.

History is fairly robust from the perspective of individual records, so redownload is safe.

We already do a kind of two-phase application of logins. You'll see we don't batch-download, so we might see another client's partial writes, but we will never get a partial read. We don't upload anything until incoming records are successfully applied, and we don't apply until the download is successful. If we ever batch-download logins, or start to worry about interrupting a write (not a concern when all deployed clients use batch uploads!) then we should buffer them before application.
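Roughly, that ordering constraint looks like this (hypothetical names, not the real iOS or Android code):

import java.util.List;

// A sketch of the ordering constraint described above: nothing is applied
// until the download succeeds, and nothing is uploaded until the incoming
// records were applied successfully.
public class TwoPhaseLoginsSync {
  public interface RemoteClient {
    List<String> fetchAllSince(long timestamp) throws Exception; // fails loudly on a partial read
    void uploadAll(List<String> outgoing) throws Exception;
  }
  public interface LocalStore {
    long lastSyncTimestamp();
    void applyAll(List<String> incoming) throws Exception;
    List<String> pendingOutgoing();
    void advanceLastSyncTimestamp(long timestamp);
  }

  public void sync(RemoteClient remote, LocalStore local, long now) throws Exception {
    // 1. Download completely, or fail the whole sync.
    List<String> incoming = remote.fetchAllSince(local.lastSyncTimestamp());
    // 2. Apply only after a successful download.
    local.applyAll(incoming);
    // 3. Upload only after a successful application.
    remote.uploadAll(local.pendingOutgoing());
    local.advanceLastSyncTimestamp(now);
  }
}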

Desktop and Android stream incoming records, download in batches, and otherwise leave themselves vulnerable to interruption, so they would probably benefit from two-phase application of logins and form data as well as bookmarks.


Speaking more broadly: independent records are safer to partially apply. There can be cross-record inconsistencies -- you might see *some* history deletions, or *some* password changes -- but there's nothing structural to break, and the client should recover with re-download. "Eventual kinda consistency" is the name of the game.

Bookmarks are inherently structural, and so benefit the most from two-phase application.
Flags: needinfo?(rnewman)

Updated

a year ago
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.1]

Updated

a year ago
Assignee: nobody → dlim
(Reporter)

Comment 7

a year ago
Part 1: buffer incoming records, then apply contents of buffer if we think we have enough time. Similar to Bug 1201108.

This alone will allow us to remove (or significantly raise) the existing 5000 bookmark limit, and will remove some interruption-based errors: the meat of syncing will occur only when we're confident that we've downloaded everything.
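A minimal sketch of that shape, with hypothetical names throughout:

import java.util.List;

// Part 1 sketch: persist incoming records into a durable buffer as they stream
// in, and only touch the local store once the download is complete and there
// appears to be enough time left to apply.
public class BufferedBookmarkDownload {
  public interface Buffer {
    void store(List<String> records);  // durable across syncs
    boolean isDownloadComplete();      // e.g., the final batch has been fetched
    List<String> drain();
  }

  public void onBatchDownloaded(Buffer buffer, List<String> batch) {
    buffer.store(batch); // cheap; safe to interrupt at any point
  }

  public void maybeApply(Buffer buffer, long remainingBudgetMillis, long estimatedApplyMillis) {
    if (!buffer.isDownloadComplete()) {
      return; // keep accumulating across syncs rather than applying a partial tree
    }
    if (remainingBudgetMillis < estimatedApplyMillis) {
      return; // try again next sync; the buffer persists
    }
    applyToLocalStore(buffer.drain());
  }

  private void applyToLocalStore(List<String> records) {
    // The "meat" of syncing happens here, purely locally.
  }
}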


Part 2: add some amount of consistency checking to the buffer before application.

When we've downloaded records but not yet applied them, we have an opportunity to verify that incoming (and outgoing!) data is complete and safe to apply. This is still not a true multi-phase application, but adding heuristic checks (like on iOS[1]) will buy us yet more safety.
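For example, the checks could start as simple structural invariants over the buffer (a hypothetical sketch; the iOS checks in [1] are more involved):

import java.util.Map;
import java.util.Set;

// Heuristic consistency check over buffered bookmark records: parentOf maps
// each record GUID to the parent it claims, childrenOf maps each folder GUID
// to the children it lists. Simplified: ignores records that already exist locally.
public class BufferConsistencyChecker {
  public static boolean looksSafeToApply(Map<String, String> parentOf,
                                         Map<String, Set<String>> childrenOf) {
    // Every claimed parent must be present in the buffer as a folder.
    for (String parent : parentOf.values()) {
      if (!childrenOf.containsKey(parent)) {
        return false; // orphaned record: its folder never arrived
      }
    }
    // Every child a folder lists must be present and must agree on its parent.
    for (Map.Entry<String, Set<String>> folder : childrenOf.entrySet()) {
      for (String child : folder.getValue()) {
        if (!folder.getKey().equals(parentOf.get(child))) {
          return false; // structure disagrees with itself; don't apply
        }
      }
    }
    return true;
  }
}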


Part 3: structured record application, similar to the current state of iOS. This is true multi-phase application: we completely compute a final merged state from the incoming records and local state, using that to atomically apply changes to remote, buffer, and local.

This is the hardest step, and so while it buys the most safety and confidence (and makes the storage layer itself very simple), it's also the end of the journey, if we ever get there.
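The eventual shape might look something like the following (all names hypothetical): compute a complete merged tree off to the side from the buffer, the local tree, and the mirror of the last agreed-upon state, and mutate nothing until that merge has fully succeeded.

import java.util.List;
import java.util.Map;

// Part 3 sketch: structured, multi-phase record application.
public interface StructuredBookmarkApplier {
  final class Node {
    public String guid;
    public String parentGuid;
    public List<String> childGuids;
    public Map<String, String> fields; // title, URL, etc.
  }

  final class MergedState {
    public Map<String, Node> mergedTree;  // the single agreed-upon result
    public List<Node> uploadToServer;     // deltas to bring the server up to it
    public List<Node> applyToLocal;       // deltas to bring the local store up to it
  }

  MergedState merge(Map<String, Node> serverBuffer,
                    Map<String, Node> localTree,
                    Map<String, Node> mirror);

  // Apply atomically: upload to the server, promote the buffer into the mirror,
  // and rewrite local -- or do none of the three if any step fails.
  void applyAtomically(MergedState merged) throws Exception;
}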


Let me know if you'd like to discuss more background about this bug, or otherwise have questions.


[1] <https://github.com/mozilla/firefox-ios/blob/master/Storage/SQL/SQLiteBookmarksSyncing.swift#L679>
Mentor: rnewman@mozilla.com
Status: NEW → ASSIGNED
Hardware: ARM → All

Comment 8

a year ago
(In reply to Richard Newman [:rnewman] from comment #7)

Thanks for the outline, Richard!

How far do you think we can get with just Part 1 as a starting point? It seems to me that, from the "source of problems" point of view, network partitioning is probably the top culprit. Moving the meat of syncing to be purely local is probably the biggest win here, and it should significantly reduce the number of problems in this scope (but certainly won't eliminate them). For bonus points, it's also the simplest (and the very first) step.

Updated

a year ago
Flags: needinfo?(rnewman)

Comment 9

a year ago
Thinking more about this, and chatting with nalexander, I think a good, tractable first step would be Part 1 by itself. If I say this quickly: we'll be adding a new "buffer repository" for storing encrypted records as we receive them from the remote source, and modifying the repository synchronization flow to be something like:

remote -> buffer, buffer -> local, local -> remote.

I think with some upfront thinking this should be doable in a way that benefits all data types, not just bookmarks.
Another consideration is doing the above work in a way that doesn't hinder future efforts around Parts 2/3 outlined in Comment 7.
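Sketched against hypothetical repository sessions (the existing Android Sync session abstractions differ in detail), the flow would be:

// Three-phase flow: remote -> buffer, buffer -> local, local -> remote.
public class ThreePhaseFlow {
  public interface Session {
    void fetchInto(Session sink) throws Exception; // stream records from this repository into sink
  }

  public void synchronize(Session remote, Session buffer, Session local) throws Exception {
    remote.fetchInto(buffer);  // 1. remote -> buffer: network-bound, cheap to retry
    buffer.fetchInto(local);   // 2. buffer -> local: purely local; the "meat" of syncing
    local.fetchInto(remote);   // 3. local -> remote: upload only after local application succeeds
  }
}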
(Reporter)

Comment 10

a year ago
Network interruption is likely to be a big source of failures, but the codebase simply doesn't have enough telemetry to know for sure. We do know that there's a lot of server-side corruption from desktop Sync inadequacies, so I would be hesitant to say that it's okay to lose steam after part 1.

I'd turn this into a meta bug and file three parts, then work and measure. Buffering is necessary regardless of how much further we go. 

I'm not sure that I would buffer encrypted records. It's certainly simplest, but it makes further work more difficult. Having the buffer be fully unpacked requires a little thinking (what do you do if an incoming record is malformed?) but it makes validation and future record application tractable.
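For instance, buffering unpacked records could handle malformed input along these lines (hypothetical names only; one possible answer, not a recommendation):

import org.json.JSONException;
import org.json.JSONObject;

// Decrypt-then-validate-then-buffer: a malformed record poisons this sync's
// buffer so we never apply from an untrustworthy snapshot.
public class UnpackedBufferWriter {
  public interface Buffer {
    void insert(String guid, JSONObject cleartext);
    void markIncomplete(String reason); // the buffer must not be applied from until refilled
  }

  public void onRecordDownloaded(Buffer buffer, String guid, String decryptedPayload) {
    try {
      JSONObject cleartext = new JSONObject(decryptedPayload);
      if (!cleartext.has("id")) {
        buffer.markIncomplete("record without an id: " + guid);
        return;
      }
      buffer.insert(guid, cleartext);
    } catch (JSONException e) {
      buffer.markIncomplete("malformed JSON in record " + guid);
    }
  }
}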
Flags: needinfo?(rnewman)
(Reporter)

Comment 11

a year ago
That's not to say that buffering encrypted records has no value; it does. But you'd still have to do some bookkeeping for when to drop stuff from the buffer, and you might be pushing yourself into a migration later. Think about it.

Updated

a year ago
Depends on: 1291821

Updated

a year ago
Summary: Incremental two-phase bookmark sync → [meta]Incremental two-phase bookmark sync

Updated

a year ago
Depends on: 1291822

Updated

a year ago
Depends on: 1291823
(Reporter)

Updated

a year ago
Keywords: meta
Summary: [meta]Incremental two-phase bookmark sync → [meta] Incremental two-phase bookmark sync for Android Sync
(Reporter)

Updated

a year ago
No longer depends on: 730142
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.1] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.2]

Updated

a year ago
Assignee: dlim → nobody
Status: ASSIGNED → NEW
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS s1.2] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS backlog]

Updated

11 months ago
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS backlog] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS]

Updated

11 months ago
Priority: -- → P3

Updated

10 months ago
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS] → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS] 2

Updated

10 months ago
Priority: P3 → P2

Updated

10 months ago
Assignee: nobody → gkruglov

Updated

10 months ago
Priority: P2 → P1
See Also: → bug 1305563

Updated

10 months ago
Depends on: 730142

Updated

10 months ago
Assignee: gkruglov → nobody

Updated

10 months ago
Whiteboard: [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] [MobileAS] 2 → [sync:bookmarks][sync:perf][sync:rigor][sync:scale][qa?] 2

Updated

5 months ago
Mentor: gkruglov@mozilla.com

Updated

5 months ago
Depends on: 1343726

Comment 12

5 months ago
Removing a P flag from a meta bug.
Priority: P1 → --

Updated

3 months ago
Priority: -- → P3