Bug 1515790 (Open) · Opened 5 years ago · Updated 2 years ago

Consider grouping bookmark structure changes (moves, new children) into batches

Categories

(Firefox :: Sync, enhancement, P3)

People

(Reporter: lina, Unassigned)

References

(Blocks 1 open bug)

Details

Currently, we upload bookmark records in no particular order. This is OK for small changes, like renaming a bookmark, moving a bookmark between two folders, or even first syncs with small libraries. These changes will always fit into a single batch, so other clients will download and apply them atomically. (Exception: if a single record exceeds the maximum size, we won't be able to sync it at all, as in bug 1321021. And, since we don't allow skipping records, we won't be able to sync any of your bookmarks. ¯\_(ツ)_/¯).
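To make that exception concrete, here's a minimal sketch of the failure mode, assuming a hypothetical record shape and `MAX_RECORD_BYTES` limit (neither taken from the real client): one oversized record blocks the entire bookmarks upload, because records can't be skipped.

```typescript
// Hypothetical record shape and size limit, for illustration only.
interface BookmarkRecord {
  id: string;
  parentid: string;
  payloadBytes: number; // serialized size of the record's payload
}

const MAX_RECORD_BYTES = 256 * 1024; // assumed limit, not the real value

function assertUploadable(records: BookmarkRecord[]): void {
  for (const record of records) {
    if (record.payloadBytes > MAX_RECORD_BYTES) {
      // Records can't be skipped, so one oversized record means we can't
      // sync any bookmarks at all (the bug 1321021 failure mode).
      throw new Error(`record ${record.id} exceeds the maximum record size`);
    }
  }
}
```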

However, for large first syncs, we may need to split the records into multiple batches. This invites consistency issues for bookmarks, since another client might sync before we've finished uploading everything. It's the same problem that batch uploads were created to solve, just less likely, because the inconsistent window now only exists between batches instead of after every record. This also makes bug 1515784 riskier to do: we can't tell whether another client failed to upload a missing record entirely, or was merely interrupted and will upload it on its next sync.
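As a sketch of why multiple batches open this window, here's the order-agnostic chunking described above, reusing the hypothetical `BookmarkRecord` shape from the previous sketch, with assumed batch limits (the real server advertises its own): records are packed purely by size and count, so nothing keeps a folder and its children in the same batch.

```typescript
// Assumed batch limits, for illustration only.
const MAX_BATCH_RECORDS = 100;
const MAX_BATCH_BYTES = 1024 * 1024;

// Chunk records into batches in whatever order they arrive. A folder and its
// children can straddle a batch boundary, so a client that syncs between
// commits sees a partial tree.
function chunkIntoBatches(records: BookmarkRecord[]): BookmarkRecord[][] {
  const batches: BookmarkRecord[][] = [];
  let current: BookmarkRecord[] = [];
  let currentBytes = 0;
  for (const record of records) {
    const fits =
      current.length < MAX_BATCH_RECORDS &&
      currentBytes + record.payloadBytes <= MAX_BATCH_BYTES;
    if (!fits && current.length > 0) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(record);
    currentBytes += record.payloadBytes;
  }
  if (current.length > 0) {
    batches.push(current);
  }
  return batches;
}
```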

Further, if we end up with a record we can't sync in a later batch, we'll leave the server in an inconsistent state. That batch will fail, but (IIUC) everything we've committed so far will stay.

We could try to group hierarchical changes into batches, as https://github.com/mozilla-mobile/firefox-ios/blob/0894c97640e86847adce1d74959d2c43d6c7f018/Sync/Synchronizers/Bookmarks/BookmarksSynchronizer.swift#L424-L430 suggests. For a new folder, that means uploading the folder and all its children in the same batch. This also isn't a panacea: what happens if a _folder_ exceeds the batch size (like unfiled on a first sync), or if we have grandchildren that won't fit? But it might be an improvement.
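Here's a sketch of that grouping, again using the hypothetical shape and limits above and assuming the upload set forms a forest (no cycles): each record whose parent is also in the upload travels with that parent, and whole subtrees are packed into batches together. As the caveats above note, a group that's bigger than one batch on its own still can't be kept atomic; this shrinks the window rather than closing it.

```typescript
// Collect each record's descendants within this upload, so a new folder and
// its children, grandchildren, etc. travel together.
function groupSubtrees(records: BookmarkRecord[]): BookmarkRecord[][] {
  const ids = new Set(records.map(r => r.id));
  const childrenOf = new Map<string, BookmarkRecord[]>();
  const roots: BookmarkRecord[] = [];
  for (const record of records) {
    if (ids.has(record.parentid)) {
      const siblings = childrenOf.get(record.parentid) ?? [];
      siblings.push(record);
      childrenOf.set(record.parentid, siblings);
    } else {
      // The parent isn't part of this upload, so this record starts a group.
      roots.push(record);
    }
  }
  const collect = (record: BookmarkRecord): BookmarkRecord[] => [
    record,
    ...(childrenOf.get(record.id) ?? []).flatMap(collect),
  ];
  return roots.map(collect);
}

// Pack whole groups into batches. A single group that exceeds the limits by
// itself (unfiled on a first sync, say) still overflows a batch here.
function chunkGroupsIntoBatches(groups: BookmarkRecord[][]): BookmarkRecord[][] {
  const batches: BookmarkRecord[][] = [];
  let current: BookmarkRecord[] = [];
  let currentBytes = 0;
  for (const group of groups) {
    const groupBytes = group.reduce((sum, r) => sum + r.payloadBytes, 0);
    const fits =
      current.length + group.length <= MAX_BATCH_RECORDS &&
      currentBytes + groupBytes <= MAX_BATCH_BYTES;
    if (!fits && current.length > 0) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(...group);
    currentBytes += groupBytes;
  }
  if (current.length > 0) {
    batches.push(current);
  }
  return batches;
}
```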

Either way, I'm not sure how much of an issue this is in practice. We're still seeing structure issues in telemetry, though those might come from existing users, not new ones who have been using batch uploads from the beginning. Collecting telemetry for the number of batches per collection per sync seems like a good place to start, especially if we can compare first syncs against subsequent ones.
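A sketch of that measurement, assuming a hypothetical event shape and submission hook rather than any real telemetry API: record the batch count per collection per sync, tagged with whether it was a first sync, so the two populations can be compared.

```typescript
// Hypothetical event shape and submission hook, for illustration only.
interface BatchCountEvent {
  collection: string; // e.g. "bookmarks"
  firstSync: boolean; // lets us compare first syncs against regular ones
  batchCount: number;
}

function recordBatchCount(
  collection: string,
  firstSync: boolean,
  batches: BookmarkRecord[][],
  submit: (event: BatchCountEvent) => void,
): void {
  submit({ collection, firstSync, batchCount: batches.length });
}
```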

Another option may be to increase the batch size on the server.

Priority: -- → P3
Blocks: 1544656
Severity: normal → S3