Bug 1289287 (Closed): Opened 8 years ago, Closed 8 years ago

Send tabs to devices sends the same tabs multiple times

Categories

(Firefox :: Sync, defect, P1)

Tracking

VERIFIED FIXED
Firefox 51
Tracking Status
firefox51 --- verified

People

(Reporter: eoger, Assigned: eoger)

References

Details

(Whiteboard: [send-tab])

Attachments

(1 file)

1. Client A sends a tab to Client B
2. Client B displays the tab then syncs
3. Client A sends another tab
4. Client B displays 3 tabs: the tab received in step 1 (shown twice), plus the one sent in step 3.

Video/Steps to reproduce:
http://recordit.co/LC4kvDwwvv

I believe that this is caused by the merging strategy here:
https://dxr.mozilla.org/mozilla-central/source/services/sync/modules/engines/clients.js#563

A solution could be to keep an array of commands to send that we flush after every Sync.
Yeah, that's borked.

We put outgoing commands in our copy of the remote record, which is insane. If you queue up a new command, then the blank incoming record doesn't hit the conditional on 555, so we merge the two.

Regardless we should be persisting some or all outgoing commands per-client, and dropping them as soon as the record is uploaded -- otherwise you'll send a tab, quit Firefox, and it'll just get lost. That's bad.

Another short-term fix is to remove sent commands from our local copy of a remote client record immediately after upload, and always follow the same download-merge-upload approach that other platforms take, using XIUS (X-If-Unmodified-Since) to make the operation safe.
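To make the idea concrete, here is a minimal sketch of per-client persistence (the method names and the "commands" file name are illustrative, not from any actual patch):

```javascript
// Hypothetical sketch: queue outgoing commands per client id, persist them
// so a queued tab survives a restart, and drop them only once the
// corresponding record has actually been uploaded.
_addOutgoingCommand(clientId, command) {
  if (!this._outgoingCommands) {
    this._outgoingCommands = {};
  }
  let queue = this._outgoingCommands[clientId] || [];
  queue.push(command);
  this._outgoingCommands[clientId] = queue;
  this._saveOutgoingCommands();
},

_saveOutgoingCommands() {
  Utils.jsonSave("commands", this, this._outgoingCommands, error => {
    if (error) {
      this._log.error("Failed to save outgoing commands", error);
    }
  });
},

_onUploadSucceeded() {
  // Safe to clear only now that the server has the records.
  this._outgoingCommands = {};
  this._saveOutgoingCommands();
},
```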
Priority: -- → P2
Priority: P2 → P1
Richard, does the approach taken in this patch look OK to you?
I will add persistence using Util.jsonSave if I can continue on this way.
Flags: needinfo?(rnewman)
Attachment #8777156 - Flags: review+
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review65796

::: services/sync/modules/engines/clients.js:238
(Diff revision 1)
>    },
>  
>    _uploadOutgoing() {
>      this._clearedCommands = null;
>      SyncEngine.prototype._uploadOutgoing.call(this);
> +    this._outgoingCommands = null;

I agree with this general approach.

It's possible for the upload to take an arbitrary amount of time, and we spin the event loop within it. So one could send a tab during uploadOutgoing. That tab would be lost.

The correct approach is a two-stage thing: only discard the commands that were uploaded, which you can do by slurping the queue before the upload and putting it back on failure, or by tagging commands with a generation number. You might have a different approach in mind when you think about persistence.
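A rough sketch of the slurp-and-restore variant (purely illustrative; the patch as landed persists the queue to disk instead):

```javascript
_uploadOutgoing() {
  // Take ownership of everything queued so far. Commands added while the
  // upload spins the event loop go into a fresh queue and are not lost.
  let inFlight = this._outgoingCommands || {};
  this._outgoingCommands = {};
  try {
    SyncEngine.prototype._uploadOutgoing.call(this);
  } catch (ex) {
    // Upload failed: put the in-flight commands back so the next Sync
    // retries them alongside anything queued in the meantime.
    for (let id of Object.keys(inFlight)) {
      let queue = this._outgoingCommands[id] || [];
      this._outgoingCommands[id] = queue.concat(inFlight[id]);
    }
    throw ex;
  }
},
```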

::: services/sync/modules/engines/clients.js:346
(Diff revision 1)
>      this._tracker.addChangedID(this.localID);
>    },
>  
> +  _addOutgoingCommand(clientId, command) {
> +    if (!this._outgoingCommands) {
> +      this._outgoingCommands = new Map();

Much as I love `Map`, it makes this code way more verbose than using `{}`.

::: services/sync/modules/engines/clients.js:380
(Diff revision 1)
> -    // It must be a dupe. Skip.
> -    else {
> -      return;
> -    }
>  
>      this._log.trace("Client " + clientId + " got a new action: " + [command, args]);

This trace log should move inside the method to avoid incorrect logging.
Assignee: nobody → eoger
Status: NEW → ASSIGNED
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/68782/diff/1-2/
Attachment #8777156 - Flags: review+
Thank you Richard, I added persistence and a two-stage upload. It was a bit more complicated than expected since the clients engine wipes everything on startup.
Flags: needinfo?(rnewman)
Attachment #8777156 - Flags: review?(rnewman)
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review67428

::: services/sync/modules/engines/clients.js:59
(Diff revision 2)
>  this.ClientEngine = function ClientEngine(service) {
>    SyncEngine.call(this, "Clients", service);
>  
>    // Reset the client on every startup so that we fetch recent clients
>    this._resetClient();
> +  this._loadOutgoingCommands();

Strictly speaking, this is wrong. There's no guarantee that the load will have completed by the time this engine is synced -- `jsonLoad` takes a callback.
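One way to close that gap - roughly what later revisions of this patch do - is to spin until the load completes, using the Async.makeSpinningCallback helper Sync already has:

```javascript
_loadOutgoingCommands() {
  // Block until the JSON file has been read so the first Sync can't race
  // the asynchronous jsonLoad callback.
  let cb = Async.makeSpinningCallback();
  Utils.jsonLoad("outgoingCommands/" + this.name, this, commands => cb(null, commands));
  this._outgoingCommands = cb.wait() || {};
},
```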

::: services/sync/modules/engines/clients.js:195
(Diff revision 2)
>        this.lastRecordUpload = Date.now() / 1000;
>      }
> +
> +    // We delete every client on startup, which means on the first
> +    // sync the store will be empty therefore we will not upload any command.
> +    // This is why we pick commands depending if the clientId is in the store yet.

This comment is confusingly worded, and wouldn't apply if you moved this to `_uploadOutgoing`.

::: services/sync/modules/engines/clients.js:197
(Diff revision 2)
> +
> +    // We delete every client on startup, which means on the first
> +    // sync the store will be empty therefore we will not upload any command.
> +    // This is why we pick commands depending if the clientId is in the store yet.
> +    this._outgoingCommandsUpload = {};
> +    const storeIds = Object.keys(this._store.getAllIDs());

This is pretty confusing.

Why aren't you doing this in `_uploadOutgoing`?

::: services/sync/modules/engines/clients.js:199
(Diff revision 2)
> +    // sync the store will be empty therefore we will not upload any command.
> +    // This is why we pick commands depending if the clientId is in the store yet.
> +    this._outgoingCommandsUpload = {};
> +    const storeIds = Object.keys(this._store.getAllIDs());
> +    const uploadedIds = [];
> +    for (var id in this._outgoingCommands) {

`let` not `var`, throughout.

::: services/sync/modules/engines/clients.js:285
(Diff revision 2)
>      SyncEngine.prototype._syncFinish.call(this);
>    },
>  
> +  _syncCleanup() {
> +    // Restore failed outgoing commands so we upload them next time
> +    for (let [id, when] in Iterator(this._modified)) {

This syntax is being removed; see Bug 1293002.

Use

```
for (let [id, when] of Object.entries(this._modified)) {
```

::: services/sync/modules/engines/clients.js:375
(Diff revision 2)
> +  _loadOutgoingCommands() {
> +    // Initialize to an empty object if there's no file.
> +    this._outgoingCommands = {};
> +    Utils.jsonLoad("outgoingCommands/" + this.name, this, function(outgoingCommands) {
> +      if (outgoingCommands) {
> +        this._outgoingCommands = outgoingCommands;

Imagine it takes 20 minutes to load the file.

::: services/sync/modules/engines/clients.js:388
(Diff revision 2)
> +        this._log.error("Failed to save JSON outgoing commands", error);
> +      }
> +    }
> +    Utils.namedTimer(function () {
> +      Utils.jsonSave("outgoingCommands/" + this.name, this, this._outgoingCommands, cb);
> +    }, 1000, this, "_lazyOutgoingCommandsSave");

Why wait a second?
Attachment #8777156 - Flags: review?(rnewman) → review-
Comments addressed.

We remove the _clearCommands hackery we added in bug 1262021 by leveraging the new commands/ file we introduced in the previous patch.
The file saving/loading operations are synchronous, which makes the code simpler to reason about. Mark said it's okay for now.
Attachment #8777156 - Flags: feedback?(markh)
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

Let's get Mark's eyes on this first.
Attachment #8777156 - Flags: review?(rnewman)
Attachment #8777156 - Flags: review?(markh)
Attachment #8777156 - Flags: feedback?(markh)
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review68598

This is heading in the right direction but still needs some work. In particular, we need to ensure the tests are covering all the edge-cases we care about - eg, multiple downloads, each with new commands, before we can succeed with uploading - and when that final successful upload is after a restart (which we'd simulate in tests by tearing down the engine and creating a new one) - and in both directions, eg:

* incoming record for a different device that already has commands queued from a different device, and we want to add new commands to it - however, before we've successfully uploaded a record with our commands we've actually managed to download multiple times, each with different existing commands removed - ie:

** We are device A and want to write a command to device B. On download of B, we find existing commands queued for B by (say) device C.
** We fail to upload the new device B for some reason and the browser restarts.
** We now download device B and find the record no longer has the commands from device C (as device B successfully applied them and re-wrote with the commands cleared).
Our command must successfully still end up on device B.

and in reverse:
** We are device A, download our record and find 3 commands from device B.
** We process the commands, but fail to upload our record.
** Browser restarts, we download and find our record still has the commands from device B we've already processed, but also some new commands from device C.
We must only process the commands from device C.

See my comments below about the unrelated test changes - they make it difficult for me to see exactly what cases you have changed the test to handle, but a quick scan implies these aren't covered (and a quick read of the patch implies they probably will not work)

::: services/sync/modules/engines/clients.js:236
(Diff revision 3)
>        this._incomingClients = null;
>      }
>    },
>  
>    _uploadOutgoing() {
> -    this._clearedCommands = null;
> +    this._clearRemoteCommands();

I don't see how this will work correctly if the upload fails. IIUC, this destroys the persisted values and relies on this._outgoingCommands having the commands we want to send - so if this fails it seems we will lose them. We need to ensure we do the right thing both when the upload fails (eg, we need to handle that we successfully downloaded multiple times, each with new commands, before we succeed with upload), and when we restart the browser before successfully uploading.

I was thinking that this._outgoingCommands should die, and the object returned by _getChanges() should be the canonical store of outgoing commands for remote records, and of already-processed commands for incoming records.

thus, createRecord looks something like the following pseudo-code:

    // record is a new blank record we are preparing for upload.
    let record = new ClientsRec(collection, id);

    // _remoteClients is an unmodified version of the record we
    // previously read from the server.
    let remoteRecord = this._remoteClients[id];

    changes = this._getCommands()
    if (id == this.engine.localID) {
      // so anything in changes[id] are incoming commands that we
      // have previously applied. IOW, changes[id] should be a subset
      // of remoteRecord.commands.
      record.changes = remoteRecord[changes] - changes[id]; // logical "subtraction"
    } else {
      // the remote record - anything in changes[id] are outgoing commands for
      // the device.
      record.changes = remoteRecord[changes] + changes[id]; // logical addition.
    }

Note we haven't persisted anything here. If we fail to upload we do nothing - the next upload should do the exact same thing and come up with the exact same output records (unless there was an intermediate successful fetch that changed them).

Then, on successful upload for each of the records, we can wipe changes[] for that ID and update this._remoteClients with the record we just uploaded, as it is now identical to the version on the server, and _remoteClients should *always* be an accurate reflection of what we know is on the server.

Then, processIncomingCommands() looks at _remoteClients[localid], and ignores any command in changes[localid] as they correspond to commands we've previously executed but have failed to upload our record with them cleared. On execution, we add the command to changes[localid] and persist changes out.

Adding a new command for another client is even simpler - it just adds the command to changes[remoteid] and persists it (after de-duping etc).

In both these cases, we later end up in createRecord above, which is where we reconcile the records with the commands and clear them on successful upload.
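A loose sketch of that confirm-then-clear step (the _justUploaded stash and _persistChanges helper are hypothetical names, not part of the patch):

```javascript
_onRecordsWritten(succeeded, failed) {
  // `changes` is the per-client command store described above (the object
  // returned by _getChanges()); `this._justUploaded` is a hypothetical stash
  // of the exact records built in createRecord for this POST.
  let changes = this._getChanges();
  for (let id of succeeded) {
    // The server now holds exactly what we uploaded, so the pending changes
    // for this id can be dropped and our cached remote record refreshed.
    delete changes[id];
    this._store._remoteClients[id] = this._justUploaded[id];
  }
  // Anything in `failed` keeps its changes and is retried next Sync.
  this._persistChanges(changes);
},
```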

::: services/sync/modules/engines/clients.js:273
(Diff revision 3)
> +      if (this._outgoingCommands[id]) {
> +        this._addRemoteCommands(id, this._outgoingCommands[id]);
> +      }
> +    }
> +    this._outgoingCommands = null;
> +    SyncEngine.prototype._syncFinish.call(this);

I think you mean _syncCleanup here :)

::: services/sync/modules/engines/clients.js:645
(Diff revision 3)
>        // record.formfactor = "";        // Bug 1100722
>      } else {
>        record.cleartext = this._remoteClients[id];
> +
> +      // Pack up the commands we have to upload
> +      if (this.engine._outgoingCommands && this.engine._outgoingCommands[id]) {

it looks like this is still modifying this._remoteClients[id] via a reference? This is wrong if we stick with the rule that _remoteClients[] is always a copy of the record as stored in the server (ie, that we previously read or wrote)

::: services/sync/modules/engines/clients.js:648
(Diff revision 3)
> +
> +      // Pack up the commands we have to upload
> +      if (this.engine._outgoingCommands && this.engine._outgoingCommands[id]) {
> +        const recordCommands = record.cleartext.commands || [];
> +        const newCommands = this.engine._outgoingCommands[id]
> +                              .filter(command => !hasDupeCommand(recordCommands, command));

this indentation looks odd.

::: services/sync/tests/unit/test_clients_engine.js:34
(Diff revision 3)
>      rec.IV = payload.IV;
>  
>      let cleartext = rec.decrypt(Service.collectionKeys.keyForCollection("clients"));
>  
>      _("Payload is " + JSON.stringify(cleartext));
> -    do_check_eq(Services.appinfo.version, cleartext.version);
> +    equal(Services.appinfo.version, cleartext.version);

While I'm generally supportive of modernizing code (or, eg, fixing whitespace etc), I generally prefer that there be a stand-alone patch (possibly in the same bug) for that which doesn't change any semantics. The way this patch is written it's quite difficult to work out what test semantics have actually changed.
Attachment #8777156 - Flags: review?(markh)
> Then, on successful upload for each of the records, we can wipe changes[] for that ID and update 
> this._remoteClients with the record we just uploaded, as it is now identical to the version on the
> server, and _remoteClients should *always* be an accurate reflection of what we know is on the server.

Although as Richard mentions above, a slight (hah!) complication here is that we may have queued a new command between preparing the records for upload and the upload actually succeeding. If we make the format of changes.json a little richer we could keep a flag, or counter, or something else to handle this case - and we will want tests that cover it too.
Thanks for the thorough feedback Mark. I added an explanation of the engine (and what I did in this patch) at the top of the clients.js file.
Sorry about the mass change of the test functions; it happened because the patch was rebased against another patch that changed them.
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review69670

This is heading in the right direction, although the whole "generation" thing seems wrong (as we discussed, a flag would work), but as you said, even a flag seems hacky and error prone.

While I suspect you will curse me for this, here's an idea that I think is "fairly simple" (hah) to implement and offers close to transactional capabilities without generations or counters, and should be fairly efficient with IO. It doesn't really change the logic for "merging" records, just the storage.

Let me know what you think!

Roughly: We use 2 files - commands.json and commands-syncing.json, and use the file system to help us. The "steady state" is that neither of these files exist.

* At sync upload time, we attempt a rename of commands.json to commands-syncing.json, and *ignore errors*.
* We load commands-syncing.json (gracefully handling the file not existing) and stash the contents in, say, this._currentlySyncingCommands (or a better name!) that lives for the duration of the upload process.
* We use this._currentlySyncingCommands to build the outgoing records we care about - ie, we read commands-syncing.json exactly once during a Sync.
* Immediately after successful upload, we delete commands-syncing.json from disk (and clear this._currentlySyncingCommands)
* Any time we need to "save" a command for future syncs, we load commands.json, update it, and write it back out. This is the only time we actually read commands.json, and we never bother keeping the file contents in memory. Indeed, a future optimization might be to define this file as having one JSON object per line, so a new command could just append. I don't think this is worthwhile at this stage though (and that would prevent us de-duping until actual Sync time)
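As a rough sketch of that file dance, assuming the Utils.jsonMove/jsonRemove helpers added later in this patch alongside the existing jsonLoad (names follow the later revisions, but this is not the final code):

```javascript
_prepareCommandsForUpload() {
  // Rename commands.json -> commands-syncing.json, ignoring errors: if a
  // commands-syncing.json is left over from a crash we just pick it up.
  let cb = Async.makeSpinningCallback();
  Utils.jsonMove("commands", "commands-syncing", this).catch(() => {}).then(cb);
  cb.wait();

  cb = Async.makeSpinningCallback();
  Utils.jsonLoad("commands-syncing", this, commands => cb(null, commands));
  return cb.wait() || {};
},

_deleteUploadedCommands() {
  // Only after a successful POST; the steady state is neither file existing.
  this._currentlySyncingCommands = null;
  let cb = Async.makeSpinningCallback();
  Utils.jsonRemove("commands-syncing", this).then(cb);
  cb.wait();
},
```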

Some scenarios:

Normal Syncs when there's no commands:
======================================

* We attempt the rename that fails due to file-not-found.
* We attempt to open commands-syncing.json, find nothing there, meaning no outgoing records (although we still do the "periodically upload our record" thing, even when no command changes exist)
* Done!

We want to send a command to another device:
============================================

* We load commands.json (which may or may not exist), update JSON and re-write it. The score is bumped.
* Sync starts, we rename and process as above.
* Let's say during a Sync we want to send another command:
    + We load commands.json - which does *not* exist due to the rename we did during Sync.
    + We write a new commands.json with this second command, and bump the score.
    + Sync completes, commands-syncing.json is deleted, and the scheduler sees the incremented score, so the next Sync starts immediately.
    + Next Sync starts, process repeats (now existing commands.json is successfully renamed, etc)
* Done

POST of records all fail:
=========================

* We have 2 choices here - either treat it as a crash during a Sync (see below), or treat it the same as a 200 response with every ID listed as failing. While the former will be less IO, the latter probably gives a better result as we can correctly de-dupe commands waiting to be Synced (as our duping logic will only consider the contents of commands.json, not commands-syncing.json)

We crash during a Sync:
=======================

* The crash leaves commands-syncing.json on disk.
* Upon first Sync, the move of commands.json to commands-syncing.json fails due to commands-syncing.json existing - we ignore that.
* We open commands-syncing.json - this is the version of the file from the previously crashed session.
* We reapply the commands and sync as normal, delete commands-syncing.json.
* At the end of the Sync, we may or may not have commands.json, but no longer have commands-syncing.json.
* Next Sync works as normal.

Issues:
=======

* This obviously isn't truly transactional, but so long as we delete commands-syncing.json as soon as the POST completes, the window is as small as we can make it.
* We need to handle the case of individual records failing in the POST - this just means that we can copy the failed commands for that ID from our in-memory this._currentlySyncingCommands, and re-add them back to commands.json (ie, treat those failing records as though they were normal commands that appeared while we are syncing.) We expect this to be *extremely* rare.
* We need to be careful re Sync loops, particularly in the individual records failing case:
    * Stuff added to commands.json will *usually* cause a score increment, meaning that if commands are added while we are Syncing, we should Sync again immediately.
    * Except when we are adding stuff to commands.json due to individual commands failing during the Sync - this should *not* increment the score as the next Sync is almost certainly going to fail those same IDs.

::: services/sync/modules/engines/clients.js:355
(Diff revision 4)
> +    this._markCommandsWithGenerationId(this._generationId);
>      SyncEngine.prototype._uploadOutgoing.call(this);
>    },
>  
>    _onRecordsWritten(succeeded, failed) {
> +    // Reconcile the status of the local records with what we think is now on the

s/what we think is now on/what we just wrote to/ just to make it a little clearer that we aren't guessing :)

::: services/sync/modules/engines/clients.js:510
(Diff revision 4)
> -      return;
> -    }
> -
>      this._log.trace("Client " + clientId + " got a new action: " + [command, args]);
> +    this._addClientCommand(clientId, action);
>      this._tracker.addChangedID(clientId);

It looks like we don't actually need to addChangedID here, as next Sync we seem guaranteed to add it? (I don't really care that we do it twice, but I'm just making sure I understand things correctly)

::: services/sync/modules/engines/clients.js:687
(Diff revision 4)
>      } else {
>        this._updateRemoteRecord(record);
>      }
>    },
>  
>    _updateLocalRecord(record) {

given both of these are only 1 line, I'd be inclined to roll them up directly into update()
Attachment #8777156 - Flags: review?(markh)
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review70504

You are right - I did like it a lot :) I think this is looking awesome, but I think the _onRecordsWritten() logic in particular could do with a little tightening up. I'm confident the next version will be perfect :)

::: services/sync/modules/engines/clients.js:13
(Diff revision 5)
> + * - We use 2 files - commands.json and commands-syncing.json.
> + *
> + * - At sync upload time, we attempt a rename of commands.json to
> + *   commands-syncing.json, and ignore errors (helps for crash during sync!).
> + * - We load commands-syncing.json and stash the contents in
> + *   _currentlySyncingCommands whuch lives for the duration of the upload process.

s/whuch/which/

::: services/sync/modules/engines/clients.js:231
(Diff revision 5)
> +      cb();
> +    });
> +    cb.wait();
> +  },
> +
> +  _markCommandsForUpload() {

Unless it's important for testing, I'd be inclined to roll these 2 functions into, say, _prepareCommandsForUpload()

::: services/sync/modules/engines/clients.js:243
(Diff revision 5)
> +    let cb = Async.makeSpinningCallback();
> +    Utils.jsonLoad("commands-syncing", this, commands => cb(null, commands));
> +    this._currentlySyncingCommands = cb.wait() || {};
> +  },
> +
> +  _deleteCommandsToUpload() {

nit: this would read better as something like _deleteUploadedCommands()

::: services/sync/modules/engines/clients.js:266
(Diff revision 5)
> +    allCommands[clientId] = clientCommands.concat(command);
> +    this._saveCommands(allCommands);
> +  },
> +
>    _syncStartup: function _syncStartup() {
> +    this._markCommandsForUpload();

I think these could be called in _uploadOutgoing()? If so, it might mean you could maybe have _prepareCommandsForUpload return the commands to load and assign it here - ie, _uploadOutgoing could maybe start with this._currentlySyncingCommands = this._prepareCommandsForUpload() - it keeps the "prepare" function somewhat stateless.

::: services/sync/modules/engines/clients.js:338
(Diff revision 5)
> +      if (!commandChanges) {
> +        continue;
> +      }
> +      if (id == this.localID) {
> +        if (this.localCommands) {
> +          this.localCommands = this.localCommands.filter(command => !hasDupeCommand(commandChanges, command));

It's not clear to me that the localCommands handling here is correct. Isn't it true that this.localCommands should always be empty at this point (as IIUC, this.localCommands are the commands we received from the server in the download, and given we must have executed them, at upload time I'd expect it to be empty)

It looks like this should be written something like:

```javascript
for (let id of succeeded) {
  if (id == this.localID) {
    this.localCommands = [];
  } else {
    const commandChanges = this._currentlySyncingCommands[id];
    const clientRecord = this._store._remoteClients[id];
    if (!commandChanges || !clientRecord) {
      // should be impossible, else we wouldn't have been writing it.
      log.warn("No command changes for a client we uploaded");
      continue;
   }
   // fixup the client record, so our copy of _remoteClients matches what we uploaded.
```

And that last comment is important - we must be confident _remoteClients does match - it might also be possible to store the record we actually created somewhere and just switch _remoteClients[id] to that (and thus probably don't need to refer to _currentlySyncingCommands here at all)

::: services/sync/modules/engines/clients.js:434
(Diff revision 5)
>    },
>  
>    _wipeClient: function _wipeClient() {
>      SyncEngine.prototype._resetClient.call(this);
>      delete this.localCommands;
>      this._store.wipe();

I think this should probably wipe both json files, discarding any queued commands. A complication is that this is called directly in the client constructor, but I'm pretty sure we could replace that with ```this.resetLastSync();``` - the intent of that was just to ensure we read each record on startup.

::: services/sync/modules/engines/clients.js:668
(Diff revision 5)
>      this.update(record)
>    },
>  
>    update: function update(record) {
>      if (record.id == this.engine.localID) {
>        this._updateLocalRecord(record);

I still think it makes this easier to understand if we roll the 2 x 1-line functions below directly into here (it made sense to have them split before when they were both largish, but 1-line helper functions with exactly 1 caller just add noise IMO)

But if your tastes prefer it this way, I'm OK with that.

::: services/sync/modules/engines/clients.js:704
(Diff revision 5)
>        record.type = this.engine.localType;
> -      record.commands = this.engine.localCommands;
>        record.version = Services.appinfo.version;
>        record.protocols = SUPPORTED_PROTOCOL_VERSIONS;
>  
> +      // Substract the commands we cleared

s/cleared/we recorded that we've already executed/ might make this a little clearer.

::: services/sync/modules/util.js:416
(Diff revision 5)
>  
> +  /**
> +   * Move a json file in the profile directory.
> +   *
> +   * @param aFrom
> +   *        Current path to the JSON file saved on disk, .json will be appended

these docs look a little misleading - I think they should clarify aFrom and aTo are expected to be paths relative to {profiledir}/weave.

They should also indicate the noOverwrite behaviour (ie, that this will fail if the target already exists).

Even though it's not consistent with the other JSON functions here, I'd be fine if you chose to implement this as a promise, but I'll leave that to you.
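For illustration only, the requested doc comment might read something like:

```javascript
/**
 * Move a JSON file within the profile's weave directory. ".json" is appended
 * to both names, and the move fails (noOverwrite) if the target already exists.
 *
 * @param aFrom
 *        Current path of the JSON file, relative to {profiledir}/weave.
 * @param aTo
 *        New path for the JSON file, relative to {profiledir}/weave.
 * @param that
 *        Object to use for logging.
 *
 * @return Promise<undefined> resolved on success, rejected on failure
 *         (including when the target already exists).
 */
```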

::: services/sync/tests/unit/test_clients_engine.js:380
(Diff revision 5)
>  
> -  let command = newRecord.commands[0];
> +  let command = clientCommands[0];
>    equal(command.command, action);
>    equal(command.args.length, 2);
> -  equal(command.args, args);
> +  deepEqual(command.args, args);
>  

this test is no longer testing they end up in the outgoing record. I think it would be nice if we could jump through a few hoops and end up calling store.createRecord(), after which the added commands should be in the outgoing record.

::: services/sync/tests/unit/test_clients_engine.js:430
(Diff revision 5)
>  
>        notEqual(engine._tracker, undefined);
>        notEqual(engine._tracker.changedIDs[remoteId], undefined);
>      } else {
>        _("Ensuring command is scrubbed: " + action);
>        equal(newRecord.commands, undefined);

I think you want to check clientCommands here - IIUC, nothing ends up in newRecord.commands at this time, valid or not.
Attachment #8777156 - Flags: review?(markh)
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review70504

> It's not clear to me that the localCommands handling here is correct. Isn't it true that this.localCommands should always be empty at this point (as IIUC, this.localCommands are the commands we received from the server in the download, and given we must have executed them, at upload time I'd expect it to be empty)

I think assigning an empty array to localCommands is wrong here, because we haven't processed any of the new commands we just received in _processIncoming yet (they are processed after the sync).
Comments addressed, thanks for the review Mark.
Comment on attachment 8777156 [details]
Bug 1289287 - Store Sync clients outgoing commands in a different collection until uploaded.

https://reviewboard.mozilla.org/r/68782/#review70970

A few minor comments below, but it looks great, thanks!

::: services/sync/modules/engines/clients.js:76
(Diff revision 6)
>  
>  
>  this.ClientEngine = function ClientEngine(service) {
>    SyncEngine.call(this, "Clients", service);
>  
> -  // Reset the client on every startup so that we fetch recent clients
> +  // Reset the last sync timestamp on every startup so that we fetch recent clients

nit: while this text already existed, I think we should s/recent/all/ here.

::: services/sync/modules/engines/clients.js:232
(Diff revision 6)
> +    });
> +    cb.wait();
> +  },
> +
> +  _prepareCommandsForUpload() {
> +    let cb = Async.makeSpinningCallback();

I think you should just use promiseSpinningly here:

Async.promiseSpinningly(() => {
  return Utils.jsonMove(...)....
}

::: services/sync/modules/engines/clients.js:242
(Diff revision 6)
> +    return cb.wait() || {};
> +  },
> +
> +  _deleteUploadedCommands() {
> +    delete this._currentlySyncingCommands;
> +    let cb = Async.makeSpinningCallback();

and here

::: services/sync/modules/engines/clients.js:337
(Diff revision 6)
> +          // should be impossible, else we wouldn't have been writing it.
> +          this._log.warn("No command/No record changes for a client we uploaded");
> +          continue;
> +        }
> +        // fixup the client record, so our copy of _remoteClients matches what we uploaded.
> +        if (this._store._remoteClients[id]) {

you've already assigned this to clientRecord and checked it's not falsey, so I don't think you need this if, and should use clientRecord in the block itself.

::: services/sync/modules/engines/clients.js:433
(Diff revision 6)
>    _wipeClient: function _wipeClient() {
>      SyncEngine.prototype._resetClient.call(this);
>      delete this.localCommands;
>      this._store.wipe();
> +    const logRemoveError = err => this._log.warn("Could not delete json file", err);
> +    Utils.jsonRemove("commands", this).catch(logRemoveError);

while highly unlikely to be a problem, these promises are running in the background so might complete after we've tried to add a new command or similar. IOW, I think we should promiseSpinningly here (and as above, can probably do them both in a single promiseSpinningly)

::: services/sync/modules/engines/clients.js:702
(Diff revision 6)
>  
> +      // Substract the commands we recorded that we've already executed
> +      if (commandsChanges && commandsChanges.length &&
> +          this.engine.localCommands && this.engine.localCommands.length) {
> +        const newCommands = this.engine.localCommands.filter(command => !hasDupeCommand(commandsChanges, command));
> +        if (newCommands.length) {

I'm surprised this check exists - I'd have thought we should update record.commands with an empty array if newCommands.length == 0?

::: services/sync/modules/util.js:415
(Diff revision 6)
>    }),
>  
> +  /**
> +   * Move a json file in the profile directory. Will fail if a file exists at the
> +   * destination.
> +   *

here and below, comment that it returns a promise<undefined> on success, or rejects on failure.

::: services/sync/modules/util.js:433
(Diff revision 6)
> +    let pathTo = OS.Path.join(OS.Constants.Path.profileDir, "weave",
> +                              ...(aTo + ".json").split("/"));
> +    if (that._log) {
> +      that._log.trace("Moving " + pathFrom + " to " + pathTo);
> +    }
> +    yield OS.File.move(pathFrom, pathTo, { noOverwrite: true });

you could just return the promise here instead of using Task.async (here and below). I don't really care too much, but a task function with a single yield seems overkill.
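For example, the quoted body could be simplified to something like this (a sketch against diff revision 6, with pathFrom assumed symmetric to the quoted pathTo; not the landed code):

```javascript
jsonMove(aFrom, aTo, that) {
  let pathFrom = OS.Path.join(OS.Constants.Path.profileDir, "weave",
                              ...(aFrom + ".json").split("/"));
  let pathTo = OS.Path.join(OS.Constants.Path.profileDir, "weave",
                            ...(aTo + ".json").split("/"));
  if (that._log) {
    that._log.trace("Moving " + pathFrom + " to " + pathTo);
  }
  // OS.File.move already returns a promise, so no Task.async wrapper is needed.
  return OS.File.move(pathFrom, pathTo, { noOverwrite: true });
},
```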
Attachment #8777156 - Flags: review?(markh) → review+
Thank you Mark!
Keywords: checkin-needed
Pushed by ryanvm@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/80e37d9a9623
Store Sync clients outgoing commands in a different collection until uploaded. r=markh
Keywords: checkin-needed
Sorry about this Wes,

Try is green for the fix:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=888e45a3a8a0
Flags: needinfo?(eoger)
Keywords: checkin-needed
Pushed by ryanvm@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/c982394c73fd
Store Sync clients outgoing commands in a different collection until uploaded. r=markh
Keywords: checkin-needed
https://hg.mozilla.org/mozilla-central/rev/c982394c73fd
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 51
Flags: qe-verify+
I've reproduced the initial issue on Nightly 50.0a1 (2016-07-20). 

Verified fixed on Windows 7 x64, Ubuntu 14.04 x64 and Mac 10.12 using Firefox 51 Beta 8 (buildID: 20161219140923) - with "services.sync.sendTabToDevice.enabled" pref set to true.
Status: RESOLVED → VERIFIED