Closed Bug 1253740 Opened 8 years ago Closed 7 years ago

[kinto] Implement storage.sync

Categories

(Toolkit :: General, defect, P2)


Tracking


RESOLVED FIXED
mozilla52
Tracking Status
firefox52 --- fixed

People

(Reporter: andy+bugzilla, Assigned: glasserc)

References

Details

(Whiteboard: [storage]triaged)

Attachments

(9 files, 12 obsolete files)

58 bytes, text/x-review-board-request
kmag
: review+
Details
58 bytes, text/x-review-board-request
markh
: review+
Details
58 bytes, text/x-review-board-request
Details
58 bytes, text/x-review-board-request
Details
58 bytes, text/x-review-board-request
Details
58 bytes, text/x-review-board-request
rnewman
: review+
Details
58 bytes, text/x-review-board-request
markh
: review+
Details
58 bytes, text/x-review-board-request
markh
: review+
Details
58 bytes, text/x-review-board-request
kmag
: review+
Details
Port over the relevant code from https://github.com/Kinto/kinto-chrome/ so that we can have chrome.storage.sync working in Firefox.
Blocks: 1220494
Summary: Port over the code from the prototype → [kinto] Port over the code from the prototype
Attached patch 0001-Bug-1253740-Implement-storage.sync.patch (obsolete) — — Splinter Review
Attachment #8742743 - Flags: review-
Attachment #8742743 - Flags: feedback?
Assignee: nobody → mbdejong
Hi Bob,

I've written the first part of the code, using git. Can you give me some feedback?
Flags: needinfo?(bob.silverberg)
Thanks for the patch, Michiel. It's a lot of code, but I will endeavour to give it an initial feedback review this week. Would it be possible for you to submit it to Mozreview? I think it would make the reviewing easier, especially with such a large patch.
Flags: needinfo?(mbdejong)
Michiel, I tried applying your patch locally, to see if I could run the tests, but it doesn't apply cleanly. Do you think you could rebase and provide a new patch?
Flags: needinfo?(bob.silverberg)
Oops, I didn't mean to clear my needinfo.
Flags: needinfo?(bob.silverberg)
Sorry for the late reply, I was on PTO. Thanks for taking the time to give feedback! OK, I'll rebase on master and submit through MozReview.
Flags: needinfo?(mbdejong)
Whiteboard: [storage] → [storage]triaged
Managed (finally) to rebase my PR on current gecko-dev master: https://github.com/michielbdejong/gecko-dev/compare/master...1253740-kinto-storage-sync?expand=1

Will now work on installing MozReview on my machine, learning how to use it, and adding this in there.

There are a few new tests for storing `function() {}`, `/regexp/` and `window` in storage.local, and they fail in storage.sync since Kinto.js does not support storing such things. I commented those tests out for now, will try to work out a reasonable way of handling that.
Assignee: mbdejong → eglassercamp
Finally managed to install and configure MozReview, locally and remotely; now just have to convince MozReview that my commit is not a merge commit, but that seems to be solvable, hopefully soon.
Hi Mike, ni-ing you since you helped me a lot before on irc. :) I'm running out of ideas of what else I can try to resolve this:

After rebasing on branches/default/tip, MozReview/git-cinnabar still thinks my commit is a merge:

      Michiels-Laptop:mozilla-central Michiel$ git mozreview push
      abort: error performing cinnabar push: Pushing merges is not supported yet
      Michiels-Laptop:mozilla-central Michiel$ git status
      On branch branches/default/tip
      Your branch is ahead of 'origin/branches/default/tip' by 1 commit.
        (use "git push" to publish your local commits)
      nothing to commit, working directory clean
      Michiels-Laptop:mozilla-central Michiel$ git log | head -15
      commit 5b1dfe02f402688d6f5bae77e81da6aa006e3a5d
      Author: Michiel de Jong <mbdejong@mozilla.com>
      Date:   Thu May 19 14:15:27 2016 +0200

          Bug 1253740 - Implement storage.sync, r?bsilverberg
          
          MozReview-Commit-ID: BsVg5u0bUNU

      commit 4124f4b6e3415fb1ef60b8e51d8db48e6bff4601
      Merge: c327e18 2f5b16f
      Author: Phil Ringnalda <philringnalda@gmail.com>
      Date:   Sun May 15 20:37:48 2016 -0700

          Merge f-t and m-i to m-c, a=merge

      Michiels-Laptop:mozilla-central Michiel$ 


Is it because I created this commit using `git cherry-pick`, maybe? I tried `git reset HEAD~ ; git add . ; git commit -am"Bug 1253740 - Implement storage.sync, r?bsilverberg"` to make sure my commit is not a cherry-pick, but still getting the same error.
Flags: needinfo?(mh+mozilla)
The commit you're based on is one from gecko-dev. If you want to push from a gecko-dev clone, you need to follow the instructions on https://github.com/glandium/git-cinnabar/wiki/Mozilla:-Using-a-git-clone-of-gecko%E2%80%90dev-to-push-to-mercurial
Flags: needinfo?(mh+mozilla)
Sorry that I (still) didn't manage to submit my commit to MozReview, I really tried (spent multiple days on it); in the meantime, here is an hg patch rebased on mozilla-central default tip from last Wednesday.

If you could give me some feedback then I can at least continue working on the code, and then maybe during the London all-hands I can find someone to help me get mozreview running, so I can submit the review request to you properly.
Attachment #8742743 - Attachment is obsolete: true
Attachment #8742743 - Flags: feedback?
Attachment #8754797 - Flags: feedback?(bob.silverberg)
Finally managed to push to mozreview by exporting my patch from git, importing it into mercurial, and using `hg push review`. Sorry that this took me more than a month! ;)
https://reviewboard.mozilla.org/r/54584/#review51180

My own comments

::: toolkit/components/extensions/ExtensionStorageSync.jsm:37
(Diff revision 1)
> +  var coll;
> +  if (!Kinto) {
> +    return Promise.reject(new Error('Not supported'));
> +  }
> +  return Promise.resolve().then(() => {
> +    //TODO: implement sync process

This will be done by @glasserc, based on https://github.com/michielbdejong/gecko-dev/commit/edab4c620ceab400c790915824549cfa76bed866

::: toolkit/components/extensions/ExtensionStorageSync.jsm:53
(Diff revision 1)
> +  });
> +}
> +
> +var md5 = function(str) {
> +  // In a deterministic way, make sure string is long enough:
> +  // str = '-----------------------------' + str;

Will delete these two lines, comment no longer applies (it's now using `'' + str`, not `'-----------------------------' + str`).

::: toolkit/components/extensions/ExtensionStorageSync.jsm:81
(Diff revision 1)
> +}
> +
> +function keyToId(key) {
> +  let md5Str = md5(key);
> +  const parts = [];
> +  [8,4,4,4,12].map(numChars => {

Admittedly this is a bit of an ugly way to generate a UUID-like string, but don't really know of a better way.
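The approach can be sketched as splitting the 32-character md5 hex digest into UUID-style 8-4-4-4-12 groups (a standalone illustrative version, not the patch's exact code):

```javascript
// Split a 32-character hex digest into 8-4-4-4-12 groups joined by dashes,
// producing a UUID-like string.
function hexToUuidLike(md5Str) {
  const parts = [];
  for (const numChars of [8, 4, 4, 4, 12]) {
    parts.push(md5Str.slice(0, numChars));
    md5Str = md5Str.slice(numChars);
  }
  return parts.join("-");
}

console.log(hexToUuidLike("d41d8cd98f00b204e9800998ecf8427e"));
// d41d8cd9-8f00-b204-e980-0998ecf8427e
```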

::: toolkit/components/extensions/ext-storage.js:15
(Diff revision 1)
>  Cu.import("resource://gre/modules/ExtensionUtils.jsm");
>  var {
>    EventManager,
>  } = ExtensionUtils;
>  
> +function wrapRejection(err) {

This is needed because the exception objects from the gecko layer don't survive being passed to the JavaScript runtime (they're not serializable, whereas `{ message: '...' }` is).
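A minimal sketch of that idea (the patch's actual `wrapRejection` may differ): convert the error into a plain object whose properties are enumerable and therefore serializable.

```javascript
// Error instances serialize poorly (their properties are non-enumerable);
// a plain { message } object survives JSON/structured-clone boundaries.
function toSerializableError(err) {
  return { message: err instanceof Error ? err.message : String(err) };
}

function wrapRejection(err) {
  return Promise.reject(toSerializableError(err));
}

console.log(JSON.stringify(new Error("boom")));
// {}
console.log(JSON.stringify(toSerializableError(new Error("boom"))));
// {"message":"boom"}
```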

::: toolkit/components/extensions/ext-storage.js:26
(Diff revision 1)
> +
>  extensions.registerSchemaAPI("storage", "storage", (extension, context) => {
>    return {
>      storage: {
>        local: {
> -        get: function(keys) {
> +        get: function(spec) {

Renaming `keys` to the more generic term `spec`, since it can be an array of keys, a single key, or an object whose keys are the keys you're after.
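The shapes `spec` can take could be normalized roughly like this (a hypothetical helper for illustration, not code from the patch):

```javascript
// Normalize a storage.get() argument: a single key, an array of keys, or an
// object whose keys are the wanted keys and whose values act as defaults.
function normalizeSpec(spec) {
  if (typeof spec === "string") {
    return { keys: [spec], defaults: {} };
  }
  if (Array.isArray(spec)) {
    return { keys: spec, defaults: {} };
  }
  return { keys: Object.keys(spec), defaults: spec };
}

console.log(normalizeSpec("a").keys);           // [ 'a' ]
console.log(normalizeSpec(["a", "b"]).keys);    // [ 'a', 'b' ]
console.log(normalizeSpec({ a: 1 }).defaults);  // { a: 1 }
```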

::: toolkit/components/extensions/ext-storage.js:44
(Diff revision 1)
> +
> +      sync: {
> +        get: function(spec) {
> +          return ExtensionStorageSync.get(extension.id, spec).catch(wrapRejection);
>          },
>          set: function(items) {

The diff is conflating `storage.local` and `storage.sync` here; sorry if it's not very readable.

::: toolkit/components/extensions/ext-storage.js:64
(Diff revision 1)
> +        let listenerSync = changes => {
> +          fire(changes, 'sync');
>          };
>  
> -        ExtensionStorage.addOnChangedListener(extension.id, listener);
> +          ExtensionStorage.addOnChangedListener(extension.id, listenerLocal);
> +          try {

Will get rid of this `try-catch` since it should never fail at runtime, just when `ExtensionStorageSync.jsm` has syntax errors and/or wasn't built.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:61
(Diff revision 1)
>  
>    /* eslint-disable dot-notation */
> -
> +  function runTests(areaName) {
> +    expectedAreaNameForEvents = areaName;
> +    let storage = browser.storage[areaName];
> -  // Set some data and then test getters.
> +    // Set some data and then test getters.

Ah, nice how this diff tool understands indents! :)

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:145
(Diff revision 1)
> -        bool: true,
> +          bool: true,
> -        undef: undefined,
> +          undef: undefined,
> -        obj: {},
> +          obj: {},
> -        arr: [1, 2],
> +          arr: [1, 2],
> -        date: new Date(0),
> +          date: new Date(0),
> -        regexp: /regexp/,
> +          // regexp: /regexp/,

Commented these out because the tests for them fail, and I don't really understand how these are serialized between the JavaScript runtime layer and the Gecko layer.

@glasserc, do you feel like trying to find a way to make these tests pass?

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:224
(Diff revision 1)
> +  runTests('local').then(() => {
> +    SpecialPowers.setBoolPref('extension.storage.sync.enabled', true);
> +    return runTests('sync');
> +  }).then(() => {
> +    return checkStorageSync();
> +  }).then(() => {

Loving this diff tool! :)

::: toolkit/components/extensions/test/xpcshell/test_locale_converter.js
(Diff revision 1)
>    let result = NetUtil.readInputStreamToString(resultStream, resultStream.available());
>  
>    equal(result, "Foo <localized-xxx> bar <localized-yyy> baz");
>  });
>  
> -

Oops, will undo this blank-line deletion.

::: toolkit/modules/Sqlite.jsm:211
(Diff revision 1)
>   * This object contains more methods than just `close`.  When
>   * OpenedConnection needs to use the methods in this object, it will
>   * dispatch its method calls here.
>   */
>  function ConnectionData(connection, identifier, options={}) {
> -  this._log = Log.repository.getLoggerWithMessagePrefix("Sqlite.Connection",
> +  // this._log = Log.repository.getLoggerWithMessagePrefix("Sqlite.Connection",

Oops, will undo this change, was using this for debugging.

::: toolkit/modules/Sqlite.jsm:908
(Diff revision 1)
>      throw new Error("Sqlite.jsm has been shutdown. Cannot open connection to: " + options.path);
>    }
>  
>    // Retains absolute paths and normalizes relative as relative to profile.
>    let path = OS.Path.join(OS.Constants.Path.profileDir, options.path);
> -
> +dump('\n\n\njoined path '+path+'\n\n\n');

Oops, will remove this as well.
Thanks Michiel!  I'm glad you got it all sorted. I will take a look at this today and provide some feedback.
Flags: needinfo?(bob.silverberg)
Comment on attachment 8754797 [details] [diff] [review]
0001-Bug-1253740-Implement-storage.sync-r-bsilverberg.patch

Now that this is up in MozReview, you may want to obsolete this patch.
Attachment #8754797 - Flags: feedback?(bob.silverberg)
Attachment #8754797 - Attachment is obsolete: true
https://reviewboard.mozilla.org/r/54584/#review52188

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:145
(Diff revision 1)
> -        bool: true,
> +          bool: true,
> -        undef: undefined,
> +          undef: undefined,
> -        obj: {},
> +          obj: {},
> -        arr: [1, 2],
> +          arr: [1, 2],
> -        date: new Date(0),
> +          date: new Date(0),
> -        regexp: /regexp/,
> +          // regexp: /regexp/,

I looked at these yesterday and I saw that:

1. uncommenting the function "func" and its associated test seemed to work OK, so I'm not sure why that's commented out.
2. regexps aren't serialized correctly; the "spec" says that they should come out stringified. We might re-use the "sanitize" function that ExtensionStorage.jsm defines to handle this correctly.
3. including window causes it to crash because it has a cyclic object. The "sanitize" function also takes care of that by refusing to serialize anything that it doesn't recognize.

Based on this, I guess the best thing to do would be for ExtensionStorage.jsm to export the sanitize function and to use it in ExtensionStorageSync. I'll try to do this.
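The behavior described in points 2 and 3 can be sketched as follows (an illustration of those points, not the actual `sanitize` implementation in `ExtensionStorage.jsm`):

```javascript
// Per the points above: regexps come out as their string representation,
// JSON-safe values round-trip unchanged, and cyclic objects (like window)
// are refused because JSON.stringify throws on them.
function sanitizeSketch(value) {
  if (value instanceof RegExp) {
    return value.toString();
  }
  return JSON.parse(JSON.stringify(value));
}

console.log(sanitizeSketch(/regexp/));        // /regexp/
console.log(sanitizeSketch({ arr: [1, 2] })); // { arr: [ 1, 2 ] }

const cyclic = {};
cyclic.self = cyclic;
try {
  sanitizeSketch(cyclic);
} catch (e) {
  console.log("refused cyclic object");
}
```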
Attachment #8755387 - Flags: review?(bob.silverberg)
Comment on attachment 8755387 [details]
MozReview Request: Bug 1253740 - Implement storage.sync, r?bsilverberg

https://reviewboard.mozilla.org/r/54586/#review52258

Nice work, Michiel. Many of my comments are just questions, but some are suggestions for changes. I have reviewed all of the code changes, but haven't looked at the tests yet. One overall question though is: could we not just replace all of the existing code for `storage.local` with the new code for `storage.sync`? That is, couldn't we use the implementation for `storage.sync` as the back end for `storage.local` which would save us from having to maintain both and reduce code duplication? I think it's worth considering, but it's possible (or maybe even likely) that I am missing some fundamental differences that would prevent us from making that work.

I look forward to your responses to my questions and will start to dig into the tests.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:27
(Diff revision 1)
> +
> +Cu.import("resource://gre/modules/Task.jsm");
> +
> +/* globals ExtensionStorageSync */
> +
> +var collPromise = {};

We tend to use `Map`s for these. See for example [1].

Also, this name is a bit unclear. Perhaps simply `collections` would be better as, afaict, this is going to contain all of the collections for all of the installed extensions.

[1] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/extensions/ExtensionStorage.jsm#64

::: toolkit/components/extensions/ExtensionStorageSync.jsm:30
(Diff revision 1)
> +/* globals ExtensionStorageSync */
> +
> +var collPromise = {};
> +
> +function openColl(extensionId) {
> +  var collectionId = md5(extensionId);

Why do we need to apply the `md5` function to the `extensionId` to generate a `collectionId`?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:51
(Diff revision 1)
> +  }).then(() => {
> +    return coll;
> +  });
> +}
> +
> +var md5 = function(str) {

It's too bad you need to reimplement this. I wonder if there is somewhere it could be imported from? Could you maybe use `computeHash` from `AppsUtils.jsm` [1]?

[1] https://dxr.mozilla.org/mozilla-central/source/dom/apps/AppsUtils.jsm#759

::: toolkit/components/extensions/ExtensionStorageSync.jsm:78
(Diff revision 1)
> +    hexrep[i * 2 + 1] = hexchars.charAt(digest.charCodeAt(i) & 15);
> +  }
> +  return hexrep.join('');
> +}
> +
> +function keyToId(key) {

Why is this needed? Is there a requirement that ids in Kinto collections be in that format?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:131
(Diff revision 1)
> +            newValue: record.data
> +          };
> +          return coll.update(record);
> +        }
> +
> +        return coll.get(record.id, { includeDeleted: true })

We tend to use arrow functions when chaining with `then`, so this would be more like:

```javascript
return coll.get(record.id, { includeDeleted: true }).then(old_record => {
  return updateItem(old_record.data);
}, reason => {
  if (reason.message.indexOf(" not found.") !== -1) {
    return createItem();
  }
  throw reason;
});
```

::: toolkit/components/extensions/ExtensionStorageSync.jsm:138
(Diff revision 1)
> +            return updateItem(old_record.data);
> +          }, function(reason) {
> +            if (reason.message.indexOf(" not found.") !== -1) {
> +              return createItem();
> +            }
> +            throw reason;

If we're dealing with promises here, should this return a rejected promise, rather than throwing?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:150
(Diff revision 1)
> +          id: keyToId(itemId),
> +          key: itemId,
> +          data: items[itemId]
> +        }));
> +      }
> +      return Promise.all(promises).then(results => {

I assume that `coll.create` and `coll.update` return promises, which is why this is being done?

Also, what happens if an error occurs with one of the keys? That would throw at line 138, but then would the listeners be notified of the changes that did work? Would we end up with some keys updated and other not, and if so is that the desired outcome? What about the keys that have not even been attempted yet when the error is thrown?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:178
(Diff revision 1)
> +            return;
> +          }
> +          throw err;
> +        });
> +      }
> +      return Promise.all(keys.map(removeItem))

Same questions here about throwing and what happens when some removals succeed but one fails?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:209
(Diff revision 1)
> +  get(extensionId, spec) {
> +    return this.getCollection(extensionId).then(coll => {
> +      let keys, records;
> +      if (spec === null) {
> +        records = {};
> +        return coll.list().then(function(res) {

Prefer arrow function as mentioned above.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:228
(Diff revision 1)
> +        keys = Object.keys(spec);
> +        records = spec;
> +      }
> +
> +      return Promise.all(keys.map(key => {
> +        return coll.get(keyToId(key)).then(function (res) {

Prefer arrow functions.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:233
(Diff revision 1)
> +        return coll.get(keyToId(key)).then(function (res) {
> +          if (res) {
> +            records[res.data.key] = res.data.data;
> +            return res.data;
> +          } else {
> +            return Promise.reject("boom");

When would this happen, and should we really return a rejected promise with "boom"?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:236
(Diff revision 1)
> +            return res.data;
> +          } else {
> +            return Promise.reject("boom");
> +          }
> +        }, function () {
> +          // XXX we just swallow the error and not set any key

The comment is good, but please remove the `XXX` and replace with a sentence with a capital letter and a period.

::: toolkit/components/extensions/ext-storage.js:15
(Diff revision 1)
>  Cu.import("resource://gre/modules/ExtensionUtils.jsm");
>  var {
>    EventManager,
>  } = ExtensionUtils;
>  
> +function wrapRejection(err) {

I saw your comment on this but I still don't understand. What is an example of when this would be needed?

::: toolkit/components/extensions/ext-storage.js:27
(Diff revision 1)
>  extensions.registerSchemaAPI("storage", "storage", (extension, context) => {
>    return {
>      storage: {
>        local: {
> -        get: function(keys) {
> -          return ExtensionStorage.get(extension.id, keys);
> +        get: function(spec) {
> +          return ExtensionStorage.get(extension.id, spec).catch(wrapRejection);

Why do the `local` methods need this `catch(wrapRejection)` added to them when they didn't need them before? It doesn't look like this patch makes any changes to `storage.local`.
https://reviewboard.mozilla.org/r/54586/#review52296

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:20
(Diff revision 1)
>  
>  function backgroundScript() {
> -  let storage = browser.storage.local;
> -  function check(prop, value) {
> +  function check(areaName, prop, value) {
> +    let storage = browser.storage[areaName];
>      return storage.get(null).then(data => {
>        browser.test.assertEq(value, data[prop], "null getter worked for " + prop);

I know it's not how the test is currently written, but we prefer to use template strings over concatenation, so this would become:

```javascript
browser.test.assertEq(value, data[prop], `null getter worked for ${prop}`);
```

I would also suggest adding `areaName` into the message now that each test is being run for each area.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:34
(Diff revision 1)
>        browser.test.assertEq(value, data[prop], "object getter worked for " + prop);
>      });
>    }
>  
>    let globalChanges = {};
> +  let expectedAreaNameForEvents = 'none yet';

Just `let expectedAreaNameForEvents;` is probably sufficient, and I think it would be fair to shorten it to just `expectedAreaName`.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:38
(Diff revision 1)
>    let globalChanges = {};
> +  let expectedAreaNameForEvents = 'none yet';
>  
> -  browser.storage.onChanged.addListener((changes, storage) => {
> -    browser.test.assertEq("local", storage, "storage is local");
> +  browser.storage.onChanged.addListener((changes, areaName) => {
> +    browser.test.assertEq(expectedAreaNameForEvents, areaName,
> +        `StorageChange area name`);

This is using backticks as if it is a template string, but it currently is not. Also the message should probably be something like "Expected area name received by listener".

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:42
(Diff revision 1)
> -    browser.test.assertEq("local", storage, "storage is local");
> +    browser.test.assertEq(expectedAreaNameForEvents, areaName,
> +        `StorageChange area name`);
>      Object.assign(globalChanges, changes);
>    });
>  
> -  function checkChanges(changes) {
> +  function checkChanges(areaName, changes, message) {

`areaName` appears to be unused.

I'm not sure that we need `message`. It does give a bit more context to the messages, but it's also a hassle to pass it into `checkChanges`. I could go either way on that one.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:191
(Diff revision 1)
> +      { method: 'get', args: ['foo'], result: {} },
> +      { method: 'set', args: [{ foo: 'bar' }], result: undefined },
> +      { method: 'remove', args: ['foo'], result: undefined },
> +      { method: 'clear', args: [], result: undefined },
> +    ];
> +    apiTests.forEach(testDef => {

I'm not sure we need these tests (for API being defined). We are exercising the API elsewhere, so if it's not defined we'll be seeing a bunch of other errors.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:195
(Diff revision 1)
> +    ];
> +    apiTests.forEach(testDef => {
> +      browser.test.assertEq('function', typeof browser.storage.sync[testDef.method],
> +          `storage.sync.${testDef.method} is defined`);
> +    });
> +    SpecialPowers.setBoolPref('extension.storage.sync.enabled', false);

Perhaps add a constant for `extension.storage.sync.enabled`.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:207
(Diff revision 1)
> +          resolve();
> +        }));
> +      }));
> +    });
> +    SpecialPowers.setBoolPref('extension.storage.sync.enabled', true);
> +    apiTests.forEach(testDef => {

The same goes for these tests. I think that the exercising of the API that we are doing is testing the same thing.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:220
(Diff revision 1)
> +    });
> +    return Promise.all(promises);
> +  }
> +
> +  runTests('local').then(() => {
> +    SpecialPowers.setBoolPref('extension.storage.sync.enabled', true);

We should add a `SpecialPowers.clearUserPref("extension.storage.sync.enabled");` at the end of the test too, and probably add a `SimpleTest.registerCleanupFunction(() => { SpecialPowers.clearUserPref("extension.storage.sync.enabled"); });` at the top in case things blow up in the middle.
https://reviewboard.mozilla.org/r/54584/#review52188

> I looked at these yesterday and I saw that:
> 
> 1. uncommenting the function "func" and its associated test seemed to work OK, so I'm not sure why that's commented out.
> 2. regexps aren't serialized correctly; the "spec" says that they should come out stringified. We might re-use the "sanitize" function that ExtensionStorage.jsm defines to handle this correctly.
> 3. including window causes it to crash because it has a cyclic object. The "sanitize" function also takes care of that by refusing to serialize anything that it doesn't recognize.
> 
> Based on this, I guess the best thing to do would be for ExtensionStorage.jsm to export the sanitize function and to use it in ExtensionStorageSync. I'll try to do this.

I agree. I was wondering why the sanitization that is currently in `ExtensionStorage.jsm` isn't also being done for `ExtensionStorageSync.jsm`. Although see also my comment about the possibility of just combining the two.
https://reviewboard.mozilla.org/r/54586/#review52258

You're right that we should probably get rid of the current storage.local code in favor of using the new Kinto-based code for both storage.local and storage.sync. The only thing that stopped me from doing that was that it would require data migration: we would need to add code that (e.g. at startup) checks whether any extension has stored data in storage.local and, if so, migrates it from the JSON-file format to the Kinto-based database and removes the JSON file. I guess not a big percentage of users will have extensions installed that have already saved valuable data, but for the ones that do, it would be nice to make the new code backwards-compatible through a one-time migration like that. :glasserc, what do you think?
That makes sense, Michiel. I want to check in with Kris to see what he thinks, in case we're missing something.

Kris, do you think it's fine to replace all of the current implementation of browser.storage.local with this new implementation using Kinto? How might we architect this migration from the old local data store to a new Kinto-based one?
Flags: needinfo?(kmaglione+bmo)
https://reviewboard.mozilla.org/r/54586/#review52258

As far as I can tell, storage.local is intended to *never* sync. You might use this for per-client settings that weren't meant to be shared -- maybe things like properties of the machine's hardware. If we use kinto as the backend for storage.local, we have to take care to *never* sync these collections. It's certainly a possible implementation strategy, but I'm not convinced that it will be much less duplication or complexity.

How many extensions use storage.local so far? Can we maybe get away with not doing a migration at all?

> Why is this needed? Is there a requirement that ids in Kinto collections be in that format?

Yes, at the present time, Kinto *only* accepts IDs that are UUIDs. I also don't like this restriction and asked about it, and now there's a [Github issue](https://github.com/Kinto/kinto/issues/655) discussing this.
https://reviewboard.mozilla.org/r/54586/#review52258

> Why do we need to apply the `md5` function to the `extensionId` to generate a `collectionId`?

Ah right, this is probably unnecessary. Kinto's constraints for collection names are probably permissive enough to allow us to use the extensionId directly as the collection name.

> I assume that `coll.create` and `coll.update` return promises, which is why this is being done?
> 
> Also, what happens if an error occurs with one of the keys? That would throw at line 138, but then would the listeners be notified of the changes that did work? Would we end up with some keys updated and other not, and if so is that the desired outcome? What about the keys that have not even been attempted yet when the error is thrown?

Right, well spotted, that's a bug. If that `Promise.all` call fails, we still want to call notifyListeners (the same applies to the other cases below).
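One way to sketch that fix (a hypothetical shape; `notifyListeners` and the `changes` accumulation here stand in for the patch's real code):

```javascript
// Notify listeners of whatever changes did land, even when one of the
// writes rejects; then re-throw so the caller still sees the failure.
function setAll(writes, changes, notifyListeners) {
  return Promise.all(writes).then(
    () => notifyListeners(changes),
    err => {
      notifyListeners(changes);
      throw err;
    }
  );
}
```

The point is simply that the rejection handler also notifies before re-throwing, so a partial failure doesn't silently hide the writes that succeeded.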

> When would this happen, and should we really return a rejected promise with "boom"?

Right, looking at https://github.com/Kinto/kinto.js/blob/master/src/collection.js#L528 the `if (res)` clause will always be true, and this else clause can be removed entirely.

> I saw your comment on this but I still don't understand. What is an example of when this would be needed?

I'm not entirely clear on what the underlying reason is, but the tests are an example: if you remove the call to this function, IIRC it was the `Please set extension.storage.sync.enabled to ...` test that started to fail (at least the last time I checked).

> Why do the `local` methods need this `catch(wrapRejection)` added to them when they didn't need them before? It doesn't look like this patch makes any changes to `storage.local`.

Oh, I just added it there for symmetry. Looking at it now, it's probably indeed unnecessary there, because ExtensionStorage.jsm never returns a rejected promise, so there's no need to call wrapRejection.
https://reviewboard.mozilla.org/r/54586/#review52258

It would definitely be less duplication and complexity. Kinto.js will sync a given collection only if and when you call `collection.sync()`; it's perfectly fine to have collections that never sync. storage.local and storage.sync would be entirely identical except for this one call.
Review commit: https://reviewboard.mozilla.org/r/55838/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/55838/
Attachment #8757379 - Flags: review?(bob.silverberg)
Attachment #8757380 - Flags: review?(bob.silverberg)
This is the function that storage.local uses to make sure we only serialize
things that will deserialize correctly. In particular, our behavior with
regexes now matches the "spec", which says they are serialized as their string
representations.

Uncomment the tests now that they're passing.

Review commit: https://reviewboard.mozilla.org/r/55840/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/55840/
I discussed this review with Michiel. Since he has his hands full with Connected Devices work, and since I'm going to have to come up to speed on this code anyhow, I will be taking over this review. Accordingly, I pushed a new review request to MozReview. This request amends Michiel's commit to take out the changes to Sqlite.jsm, then adds a commit to fix some of the tests. I'll be addressing the comments Bob raised, probably by adding commits. I'm not sure of the typical workflow using Mercurial and MozReview, so if I do something strange or wrong, please let me know.
(In reply to Bob Silverberg [:bsilverberg] from comment #24)
> Kris, do you think it's fine to replace all of the current implementation of
> browser.storage.local with this new implementation using Kinto? How might we
> architect this migration from the old local data store to a new Kinto-based
> one?

I like the idea in theory, but I'd need to look over the implementation before I can say for certain.

The migration should be easy enough. We'd just need to check for an existing JSON data store, and if it exists, migrate it to the Kinto data store, and then remove the file. But, again, I'd need to look over the implementation before I can say more.

(In reply to Ethan Glasser-Camp (:glasserc) from comment #25)
> How many extensions use storage.local so far? Can we maybe get away with not
> doing a migration at all?

No, we definitely need to migrate any existing data.
Flags: needinfo?(kmaglione+bmo)
https://reviewboard.mozilla.org/r/55838/#review52522

::: toolkit/components/extensions/ExtensionStorageSync.jsm:23
(Diff revision 1)
> +
> +Cu.import("resource://services-common/kinto-offline-client.js");
> +
> +Cu.import("resource://gre/modules/Preferences.jsm");
> +
> +Cu.import("resource://gre/modules/Task.jsm");

Please use `XPCOMUtils.defineLazyModuleGetter` for all of the above.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:38
(Diff revision 1)
> +  if (!Kinto) {
> +    return Promise.reject(new Error('Not supported'));
> +  }
> +  return Promise.resolve().then(() => {
> +    //TODO: implement sync process
> +    return Task.spawn(function* () {

This `Task.spawn` doesn't seem necessary.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:54
(Diff revision 1)
> +}
> +
> +var md5 = function(str) {
> +  // In a deterministic way, make sure string is long enough:
> +  // str = '-----------------------------' + str;
> +  str = '' + str;

Is there a purpose to this?

Also, please use double quotes for all strings. ESLint will not accept single quotes.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:58
(Diff revision 1)
> +  // str = '-----------------------------' + str;
> +  str = '' + str;
> +  // Adapted from toolkit/components/url-classifier/content/moz/cryptohasher.js:
> +  var hasher_ = Cc["@mozilla.org/security/hash;1"]
> +                   .createInstance(Ci.nsICryptoHash);
> +  hasher_.init(Ci.nsICryptoHash.MD5);

Is there a particular reason we need to use MD5 rather than a stronger hash function?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:67
(Diff revision 1)
> +  if (stream.available()) {
> +    hasher_.updateFromStream(stream, stream.available());
> +  }
> +
> +  var digest = hasher_.finish(false /* not b64 encoded */);
> +

return Array.from(digest, char => ("0" + char.charCodeAt(0).toString(16)).slice(-2)).join("")

Or something along those lines.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:81
(Diff revision 1)
> +}
> +
> +function keyToId(key) {
> +  let md5Str = md5(key);
> +  const parts = [];
> +  [8,4,4,4,12].map(numChars => {

Space after comma. And if you're using map,

    let parts = [8, 4, 4, 4, 12].map(numChars => {
      let part = md5Str.slice(0, numChars);
      md5Str = md5Str.slice(numChars);
      return part;
    });

::: toolkit/components/extensions/ExtensionStorageSync.jsm:92
(Diff revision 1)
> +
> +this.ExtensionStorageSync = {
> +  listeners: new Map(),
> +
> +  getCollection(extensionId) {
> +    if (Preferences.get(STORAGE_SYNC_ENABLED, false) !== true) {

Please use `XPCOMUtils.defineLazyPreferenceGetter` to cache this preference value.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:114
(Diff revision 1)
> +            newValue: record.data
> +          };
> +          return coll.create(record, {useRecordId: true});
> +        }
> +
> +        function updateItem(old_record) {

Please use camel case rather than underscores in variable names.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:120
(Diff revision 1)
> +          if (old_record._status === "deleted") {
> +            changes[record.key] = {
> +              oldValue: undefined,
> +              newValue: record.data
> +            };
> +            return coll.delete(old_record.id, { virtual: false }).then(() => {

No space inside curly brackets.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:131
(Diff revision 1)
> +            newValue: record.data
> +          };
> +          return coll.update(record);
> +        }
> +
> +        return coll.get(record.id, { includeDeleted: true })

No space inside curly brackets.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:132
(Diff revision 1)
> +          };
> +          return coll.update(record);
> +        }
> +
> +        return coll.get(record.id, { includeDeleted: true })
> +          .then(function(old_record) {

Camel case. And please use arrow functions rather than function expressions.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:135
(Diff revision 1)
> +
> +        return coll.get(record.id, { includeDeleted: true })
> +          .then(function(old_record) {
> +            return updateItem(old_record.data);
> +          }, function(reason) {
> +            if (reason.message.indexOf(" not found.") !== -1) {

`reason.message.includes(" not found.")`

::: toolkit/components/extensions/ExtensionStorageSync.jsm:143
(Diff revision 1)
> +            throw reason;
> +          });
> +      }
> +
> +      const promises = [];
> +      for(let itemId in items) {

Space before opening paren.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:170
(Diff revision 1)
> +          }
> +          changes[key] = {
> +            oldValue: record.data.data,
> +            newValue: undefined
> +          };
> +          return coll.delete(keyToId(key));

Can we store the result of `keyToId` rather than calling it multiple times? It's more expensive than you might expect.

Also, what happens if we have something like:

    sync.remove(key);
    sync.set(key, value);

It seems like we can easily wind up in an inconsistent state, given the timing of the asynchronous callbacks.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:180
(Diff revision 1)
> +          throw err;
> +        });
> +      }
> +      return Promise.all(keys.map(removeItem))
> +        .then(() => {
> +          this.notifyListeners(extensionId, changes);

We should only do this if we actually wind up removing something.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:199
(Diff revision 1)
> +          };
> +          return coll.delete(record.id);
> +        });
> +        return Promise.all(promises);
> +      }).then(result => {
> +        this.notifyListeners(extensionId, changes);

We should only do this if we actually wind up removing something.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:233
(Diff revision 1)
> +        return coll.get(keyToId(key)).then(function (res) {
> +          if (res) {
> +            records[res.data.key] = res.data.data;
> +            return res.data;
> +          } else {
> +            return Promise.reject("boom");

"boom"?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:259
(Diff revision 1)
> +
> +  notifyListeners(extensionId, changes) {
> +    let listeners = this.listeners.get(extensionId);
> +    if (listeners) {
> +      for (let listener of listeners) {
> +        listener(changes);

We should trap and report errors if this throws.

::: toolkit/components/extensions/ext-storage.js:15
(Diff revision 1)
>  Cu.import("resource://gre/modules/ExtensionUtils.jsm");
>  var {
>    EventManager,
>  } = ExtensionUtils;
>  
> +function wrapRejection(err) {

Why is this necessary?

::: toolkit/components/extensions/ext-storage.js:17
(Diff revision 1)
>    EventManager,
>  } = ExtensionUtils;
>  
> +function wrapRejection(err) {
> +  if (typeof err === 'string') {
> +    return Promise.reject({ message: err });

No spaces inside curly braces.

::: toolkit/components/extensions/ext-storage.js:67
(Diff revision 1)
>  
> -        ExtensionStorage.addOnChangedListener(extension.id, listener);
> +          ExtensionStorage.addOnChangedListener(extension.id, listenerLocal);
> +          try {
> +            ExtensionStorageSync.addOnChangedListener(extension.id, listenerSync);
> +          } catch(e) {
> +            dump('error accessing ExtensionStorageSync: ' + e.message);

Please use Cu.reportError instead.

Also, please make sure to always end dump() calls with a newline, and always use double quotes.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:46
(Diff revision 1)
>  
> -  function checkChanges(changes) {
> +  function checkChanges(areaName, changes, message) {
>      function checkSub(obj1, obj2) {
>        for (let prop in obj1) {
> -        browser.test.assertEq(obj1[prop].oldValue, obj2[prop].oldValue);
> -        browser.test.assertEq(obj1[prop].newValue, obj2[prop].newValue);
> +        browser.test.assertEq(obj1[prop].oldValue, obj2[prop].oldValue,
> +            `checkChanges ${prop} old (${message})`);

Please align with opening paren.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:147
(Diff revision 1)
> -        obj: {},
> +          obj: {},
> -        arr: [1, 2],
> +          arr: [1, 2],
> -        date: new Date(0),
> +          date: new Date(0),
> -        regexp: /regexp/,
> -        func: function func() {},
> -        window,
> +          // regexp: /regexp/,
> +          // func: function func() {},
> +          // window,

These tests still need to pass.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:192
(Diff revision 1)
> +      { method: 'set', args: [{ foo: 'bar' }], result: undefined },
> +      { method: 'remove', args: ['foo'], result: undefined },
> +      { method: 'clear', args: [], result: undefined },
> +    ];
> +    apiTests.forEach(testDef => {
> +      browser.test.assertEq('function', typeof browser.storage.sync[testDef.method],

Double quote.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:193
(Diff revision 1)
> +      { method: 'remove', args: ['foo'], result: undefined },
> +      { method: 'clear', args: [], result: undefined },
> +    ];
> +    apiTests.forEach(testDef => {
> +      browser.test.assertEq('function', typeof browser.storage.sync[testDef.method],
> +          `storage.sync.${testDef.method} is defined`);

Please align with opening paren.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:200
(Diff revision 1)
> +    SpecialPowers.setBoolPref('extension.storage.sync.enabled', false);
> +    apiTests.forEach(testDef => {
> +      promises.push(new Promise(resolve => {
> +        browser.storage.sync[testDef.method].apply(undefined, testDef.args.concat(function(res) {
> +          browser.test.assertEq('Please set extension.storage.sync.enabled to tr' +
> +              'ue in about:config', browser.runtime.lastError.message,

Please just use the promise-based version of the API rather than wrapping the callback version and testing lastError.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:206
(Diff revision 1)
> +              `storage.sync.${testDef.method} is behind a flag`);
> +          resolve();
> +        }));
> +      }));
> +    });
> +    SpecialPowers.setBoolPref('extension.storage.sync.enabled', true);

We should wait for all of the previous tests to complete before flipping this pref.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:226
(Diff revision 1)
> +    return runTests('sync');
> +  }).then(() => {
> +    return checkStorageSync();
> +  }).then(() => {
> +    browser.test.notifyPass("storage");
> +  });

Please add a `.catch()` and fail the test if you hit it.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/1-2/
Comment on attachment 8757380 [details]
MozReview Request: use sanitize to fix tests r?bsilverberg

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55840/diff/1-2/
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/2-3/
Attachment #8757380 - Attachment is obsolete: true
Attachment #8757380 - Flags: review?(bob.silverberg)
https://reviewboard.mozilla.org/r/54586/#review52258

> Ah right, this is probably unnecessary. Kinto's constraints for collection names are probably permissive enough to allow us to use the extensionId directly as the collection name.

In actual fact, we should try to avoid leaking metadata about which users have which extensions installed, but this doesn't do that. I'll take it out and add a FIXME comment.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/3-4/
It looks like I reviewed Michiel's patch and Kris reviewed Ethan's patch.

Ethan, perhaps you can address the issues raised by both of us via your Mozreview request, and once you've done so go into Michiel's Mozreview request and mark the items that have been fixed as fixed, and also respond to any questions I added if they are still relevant. Once all of the issues I raised there have been addressed we can just obsolete that patch so it is no longer a distraction.
https://reviewboard.mozilla.org/r/54586/#review52258

> It's too bad you need to reimplement this. I wonder if there is somewhere from whence it could be imported? Could you maybe use `computeHash` from `AppsUtils.jsm`?
> 
> [1] https://dxr.mozilla.org/mozilla-central/source/dom/apps/AppsUtils.jsm#759

Yes, thanks. Also, I'm +1 on your use of the word "whence".
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/4-5/
(In reply to Ethan Glasser-Camp (:glasserc) from comment #40)
> Comment on attachment 8757379 [details]
> MozReview Request: Bug 1253740 - Implement storage.sync, r?bsilverberg
> 
> Review request updated; see interdiff:
> https://reviewboard.mozilla.org/r/55838/diff/4-5/

I'm glad that `computeHash` worked for you. Just a tip/request regarding addressing issues raised in a review: it will be easier for the reviewer if you attempt to address as many issues as possible in a single commit and then push that to review, rather than pushing a new commit for each individual change. Also, once you've pushed a change to an issue, please either comment on the issue or mark it as fixed. Again, this makes it easier for the reviewer who is going to go back in and check the review for updates.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/5-6/
https://reviewboard.mozilla.org/r/54586/#review52258

> Right, well spotted, that's a bug. If that Promise.all statement fails, we still want to call notifyListeners (same for other cases below).

Hmm. Do you have any advice on how to do this? It seems like if one of them fails, the Promise.all will reject immediately, which means that `changes` might not be complete. Ideally I'd be able to just slap a `.finally(notifyListeners)` on here, but it seems like our Promise library doesn't support that?
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/6-7/
https://reviewboard.mozilla.org/r/54586/#review52258

> Hmm. Do you have any advice on how to do this? It seems like if one of them fails, the Promise.all will reject immediately, which means that `changes` might not be complete. Ideally I'd be able to just slap a `.finally(notifyListeners)` on here, but it seems like our Promise library doesn't support that?

Could you simply call `notifyListeners` both in the `resolve` handler and in the `reject` handler? Then it would get called in either case.

I'm still not entirely clear on what we want to happen if one of the updates is rejected though - do we want to allow any successful updates that happened before the rejection to be committed, or do we want it to be an all-or-nothing scenario? Might it be confusing to a consumer if they call `set` and some items are successfully set while others are not? I guess they can interrogate `changes` in an `onChanged` listener, but will they know that that is necessary?
(In reply to Bob Silverberg [:bsilverberg] from comment #41)
> I'm glad that `computeHash` worked for you. Just a tip/request regarding
> addressing issues raised in a review: it will be easier for the reviewer if
> you attempt to address as many issues as possible in a single commit and
> then push that to review, rather than pushing a new commit for each
> individual change. Also, once you've pushed a change to an issue, please
> either comment on the issue or mark it as fixed. Again, this makes it easier
> for the reviewer who is going to go back in and check the review for updates.

Thanks. I didn't realize that each push was generating a notification. I had figured that you would review each change and that those would be easier to review in isolation, per http://mozilla-version-control-tools.readthedocs.io/en/latest/mozreview/commits.html#prefer-more-smaller-commits-over-large-monolithic-commits. This is a new workflow for me so please continue letting me know if I am doing it wrong.

I was figuring to mark all the issues as "fixed" as soon as I thought it was ready for another review, but I'll mark the ones that I already fixed as fixed. Unfortunately, I can't mark the ones as fixed on Michiel's patch, so I'll just comment on those.
(In reply to Ethan Glasser-Camp (:glasserc) from comment #46)
> 
> Thanks. I didn't realize that each push was generating a notification. I had
> figured that you would review each change and that those would be easier to
> review in isolation, per
> http://mozilla-version-control-tools.readthedocs.io/en/latest/mozreview/
> commits.html#prefer-more-smaller-commits-over-large-monolithic-commits. This
> is a new workflow for me so please continue letting me know if I am doing it
> wrong.

No problem, Ethan. Mozreview is pretty good at presenting a diff of the changes, so pushing a commit with a lot of changes in it isn't a big deal.

> I was figuring to mark all the issues as "fixed" as soon as I thought it was
> ready for another review, but I'll mark the ones that I already fixed as
> fixed. Unfortunately, I can't mark the ones as fixed on Michiel's patch, so
> I'll just comment on those.

Actually your original intent is good. I.e., try to fix as much as possible in a single commit, then push that, and then mark each of the issues that is fixed as "fixed" after the commit has been pushed. That will be a trigger for me to take another look. It's too bad that you cannot mark issues as fixed in Michiel's review request, but commenting will work too, and then I can just mark them as fixed after I see your comment and check the latest commit. As I mentioned above, once all of the issues in Michiel's patch are marked as fixed (or dropped) then we can just remove that review request and continue the review using just your review request.
https://reviewboard.mozilla.org/r/54586/#review52296

> I know it's not how the test is currently written, but we prefer to use template strings over concatenation, so this would become:
> 
> ```javascript
> browser.test.assertEq(value, data[prop], `null getter worked for ${prop}`);
> ```
> 
> I would also suggest adding `areaName` into the message now that each test is being run for each area.

OK. I didn't feel like this change belonged in the same commit as the other stuff, since it obscures the purpose of the change to the function, but I pushed it as https://reviewboard.mozilla.org/r/56614.

> Just `let expectedAreaNameForEvents;` is probably sufficient, and I think it would be fair to shorten it to just `expectedAreaName`.

OK, changed in https://reviewboard.mozilla.org/r/55838.

> This is using backticks as if it is a template string, but it currently is not. Also the message should probably be something like "Expected area name received by listener".

Fixed in https://reviewboard.mozilla.org/r/55838.

> `areaName` appears to be unused.
> 
> I'm not sure that we need `message`. It does give a bit more context to the messages, but it's also a hassle to pass it into `checkChanges`. I could go either way on that one.

I added `areaName` to the messages. I left the messages on the `assertEq` calls because they're the only thing that specifies what property failed, and since I was leaving that, I decided to leave the `message` calls.

> I'm not sure we need these tests (for API being defined). We are exercising the API elsewhere, so if it's not defined we'll be seeing a bunch of other errors.

I agree. Removed in https://reviewboard.mozilla.org/r/55838.

> The same goes for these tests. I think that the exercising of the API that we are doing is testing the same thing.

I think the original intent of this change is to verify that the API supports the "types" given in the spec, but I agree that it probably isn't necessary, so I took it out. https://reviewboard.mozilla.org/r/55838
https://reviewboard.mozilla.org/r/54586/#review52258

> We tend to use `Map`s for these. See for example [1].
> 
> Also, this name is a bit unclear. Perhaps simply `collections` would be better as, afaict, this is going to contain all of the collections for all of the installed extensions.
> 
> [1] https://dxr.mozilla.org/mozilla-central/source/toolkit/components/extensions/ExtensionStorage.jsm#64

This should be fixed in the patch at https://reviewboard.mozilla.org/r/55838/.

> In actual fact, we should try to avoid leaking metadata about which users have which extensions installed, but this doesn't do that. I'll take it out and add a FIXME comment.

I added a FIXME comment in https://reviewboard.mozilla.org/r/55838/.

> Yes, thanks. Also, I'm +1 on your use of the word "whence".

I fixed this in https://reviewboard.mozilla.org/r/55838/.

> We tend to use arrow functions when chaining with `then`, so this would be more like:
> 
> ```javascript
> return coll.get(record.id, { includeDeleted: true }).then(old_record => {
>   return updateItem(old_record.data);
> }, reason => {
>   if (reason.message.indexOf(" not found.") !== -1) {
>     return createItem();
>   }
>   throw reason;
> });
> ```

This should be fixed in https://reviewboard.mozilla.org/r/55838/.

> If we're dealing with promises here, should this return a rejected promise, rather than throwing?

Yes, I think so. Fixed in https://reviewboard.mozilla.org/r/55838/.

> Prefer arrow function as mentioned above.

Fixed in https://reviewboard.mozilla.org/r/55838/.

> Prefer arrow functions.

Fixed in https://reviewboard.mozilla.org/r/55838/.

> The comment is good, but please remove the `XXX` and replace with a sentence with a capital letter and a period.

I'm not completely convinced this behavior is correct -- I'm concerned that `.get()` might reject under some circumstance that I've forgotten about -- but I've made the changes in https://reviewboard.mozilla.org/r/55838/.
Attached file update test to use templates, (obsolete) —
Also, add another variable to the failure messages now that they are being
run for each area.

Review commit: https://reviewboard.mozilla.org/r/56614/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/56614/
Attachment #8758378 - Flags: review?(bob.silverberg)
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/7-8/
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/8-9/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/1-2/
https://reviewboard.mozilla.org/r/55838/#review52522

> Is there a particular reason we need to use MD5 rather than a stronger hash function?

We only use the hash function to turn an arbitrary key in our key-value store into a Kinto "record ID". Kinto record IDs *must* have the structure of a UUID, which maybe isn't ideal (see https://github.com/Kinto/kinto/issues/655). So we're not really using MD5 for any true "hash" behavior -- it's just a convenient way to turn a string into exactly 128 bits. We could just as easily use a stronger hash function, or, IMO, a weaker one if it were more efficient.

> return Array.from(digest, char => ("0" + char.charAt(0).toString(16)).slice(-1)).join("")
> 
> Or something along those lines.

Fixed by completely removing this function and using `AppsUtils.computeHash`, like :bsilverberg suggested in https://reviewboard.mozilla.org/r/54586/#comment68478.

> Please align with opening paren.

Fixed by removing this whole block of tests, per :bsilverberg in https://reviewboard.mozilla.org/r/54586/#comment68590.
https://reviewboard.mozilla.org/r/55838/#review52522

> Double quote.

Fixed by removing this entire block of tests, per :bsilverberg in https://reviewboard.mozilla.org/r/54586/#comment68590.
https://reviewboard.mozilla.org/r/54586/#review52296

> Perhaps add a constant for `extension.storage.sync.enabled`.

Fixed in https://reviewboard.mozilla.org/r/55838/.
https://reviewboard.mozilla.org/r/55838/#review52522

> We only use the hash function to turn an arbitrary key in our key-value store into a Kinto "record ID". Kinto record IDs *must* have the structure of a UUID, which maybe isn't ideal (see https://github.com/Kinto/kinto/issues/655). So we're not really using MD5 for any true "hash" behavior -- it's just a convenient way to turn a string into exactly 128 bits. We could just as easily use a stronger hash function, or IMO, a weaker one if it was more efficient.

Even if it doesn't need to be cryptographically strong, it would be nice if we could avoid using MD5, since once we start using it we'll be stuck supporting it. And, if it comes to it, it's not inconceivable that there could be a security issue, if an extension creates keys based on remote values. That could allow an attacker to generate two keys with the same hash, and change the behavior of the extension.

If we need 128 bits, can we use a SHA256 hash and XOR the two halves? Or just use the first 128 bits?
https://reviewboard.mozilla.org/r/55838/#review52522

> Fixed by completely removing this function and using `AppsUtils.computeHash`, like :bsilverberg suggested in https://reviewboard.mozilla.org/r/54586/#comment68478.

I'm not sure that's a good idea. That module is part of the webapp runtime, and it's not really meant to be used externally. And it's probably going to wind up breaking things if/when the webapp runtime is removed from desktop builds.
https://reviewboard.mozilla.org/r/54586/#review52258

> I fixed this in https://reviewboard.mozilla.org/r/55838/.

Kris has explained why we probably shouldn't use that function, and also that we should use something other than md5. I'm going to drop this issue as those issues are being tracked in the other review.

> I'm not entirely clear on what the underlying reason is, but the tests are an example - if you remove the call to this function, IIRC it was the `Please set extension.storage.sync.enabled to ...` test that will start to fail (at least last time I checked).

I think this might be solvable in the tests. Once everything else is addressed we can take another look at this.
Thanks for all the updates, Ethan. I have marked a number of issues on https://reviewboard.mozilla.org/r/54586/ as fixed or dropped, but there are still a few left to address.
https://reviewboard.mozilla.org/r/55838/#review52522

> Even if it doesn't need to be cryptographically strong, it would be nice if we could avoid using MD5, since once we start using it we'll be stuck supporting it. And, if it comes to it, it's not inconievable that there could be a security issue, if an extension creates keys based on remote values. That could allow an attacker to generate two keys with the same hash, and change the behavior of the extension.
> 
> If we need 128 bits, can we use a SHA256 hash and XOR the two halves? Or just use the first 128 bits?

In our storage team standup today, I asked :leplatrem to change Kinto to allow non-UUID record IDs. The character set is limited, but I can use that to encode IDs in a reversible way. This way I won't have to use the computeHash function or indeed any hash function. Does that sound OK?
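Something like this reversible encoding is what I have in mind (purely illustrative; the escape scheme and the `key-` prefix are assumptions here, not necessarily what will land):

```javascript
// Encode a storage key as a record ID using only a limited character
// set, escaping everything else as _xxxx hex code units so the
// mapping inverts exactly; no hash function needed.
function keyToId(key) {
  return "key-" + Array.from(String(key), ch =>
    /[a-zA-Z0-9.-]/.test(ch)
      ? ch
      : "_" + ch.charCodeAt(0).toString(16).padStart(4, "0")
  ).join("");
}

// Recover the original key from a record ID.
function idToKey(id) {
  return id.slice("key-".length).replace(/_([0-9a-f]{4})/g,
    (match, hex) => String.fromCharCode(parseInt(hex, 16)));
}
```

Note that `_` itself falls outside the allowed set, so it gets escaped too, which is what keeps the mapping unambiguous.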

> `reason.message.includes(" not found.")`

I actually really don't like this code at all. I'd much rather we get a clear signal from the kinto library that something was missing. In fact, in this particular case, I'd rather the kinto library exported more operations that mapped more to what we want -- see https://github.com/Kinto/kinto.js/issues/443, https://github.com/Kinto/kinto.js/issues/444, and https://github.com/Kinto/kinto.js/issues/445.

> Can we store the result of `keyToId` rather than calling it multiple times? It's more expensive than you might expect.
> 
> Also, what happens if we have something like:
> 
>     sync.remove(key);
>     sync.set(key, value);
> 
> It seems like we can easily wind up in an inconsistent state, given the timing of the asynchronous callbacks.

I'm not exactly sure what you mean by "inconsistent state". There are two states that I think would be acceptable as a result of the above operations -- either key is removed, or key is set to value. Either one should be OK, since they're racing. However, the current implementation does have a bug in that the set() call does a get() to see whether it needs to do a create() or an update(), and that can change if the remove() comes between the get() and the update(). The set() would call update(), the record would be gone, and we would throw, which is definitely wrong. In general I feel like the primitives we have in kinto.js aren't rich enough, which leads to a lot of messy code in this module. I've filed bugs on the Kinto.js library to try to fix this some: https://github.com/Kinto/kinto.js/issues/443, https://github.com/Kinto/kinto.js/issues/444, and https://github.com/Kinto/kinto.js/issues/445.
Thanks for all the feedback, :bsilverberg and :kmag. I'd actually like to make more significant changes to this patch. In particular, I think we're about to relax the record ID constraints in Kinto, which means we could replace the md5 stuff with something friendlier. I'm also hoping to get some changes made to the upstream Kinto.js library that would clean up some of this code.

Also, I noticed that the API that we export here returns promises, but according to https://developer.chrome.com/extensions/storage, Chrome takes callbacks. Is that right?
(In reply to Ethan Glasser-Camp (:glasserc) from comment #62)
> Thanks for all the feedback, :bsilverberg and :kmag. I'd actually like to
> make more significant changes to this patch. In particular, I think we're
> about to relax the record ID constraints in Kinto, which means we could
> replace the md5 stuff with something friendlier. I'm also hoping to get some
> changes made to the upstream Kinto.js library that would clean up some of
> this code.
> 

That sounds good. Should we consider this bug on hold until some of that other stuff happens? We can discuss it further in tomorrow's meeting.

> Also, I noticed that the API that we export here returns promises, but
> according to https://developer.chrome.com/extensions/storage, Chrome takes
> callbacks. Is that right?

Our code will also work with callbacks, so you can use callbacks just as one would with Chrome or you can work with the promises that it returns. There is some magic in the framework that takes care of this.
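Roughly, the pattern looks like this (an illustrative sketch, not the actual ExtensionUtils code):

```javascript
// Wrap a promise-returning implementation so Chrome-style callers can
// pass a trailing callback instead of consuming the promise.
function wrapAsync(promiseFn) {
  return (...args) => {
    let callback = null;
    if (typeof args[args.length - 1] === "function") {
      callback = args.pop();
    }
    const promise = Promise.resolve(promiseFn(...args));
    if (callback) {
      // The real framework would set runtime.lastError before calling
      // the callback on rejection; elided here.
      promise.then(result => callback(result), () => callback());
      return undefined;
    }
    return promise;
  };
}
```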
Depends on: 1277612
https://reviewboard.mozilla.org/r/55838/#review52522

> In our storage team standup today, I asked :leplatrem to change Kinto to allow non-UUID record IDs. The character set is limited, but I can use that to encode IDs in a reversible way. This way I won't have to use the computeHash function or indeed any hash function. Does that sound OK?

Sounds good to me.

> I actually really don't like this code at all. I'd much rather we get a clear signal from the kinto library that something was missing. In fact, in this particular case, I'd rather the kinto library exported more operations that mapped more to what we want -- see https://github.com/Kinto/kinto.js/issues/443, https://github.com/Kinto/kinto.js/issues/444, and https://github.com/Kinto/kinto.js/issues/445.

I agree, that definitely sounds preferable.

> I'm not exactly sure what you mean by "inconsistent state". There are two states that I think would be acceptable as a result of the above operations -- either key is removed, or key is set to value. Either one should be OK, since they're racing. However, the current implementation does have a bug in that the set() call does a get() to see whether it needs to do a create() or an update(), and that can change if the remove() comes between the get() and the update(). The set() would call update(), the record would be gone, and we would throw, which is definitely wrong. In general I feel like the primitives we have in kinto.js aren't rich enough, which leads to a lot of messy code in this module. I've filed bugs on the Kinto.js library to try to fix this some: https://github.com/Kinto/kinto.js/issues/443, https://github.com/Kinto/kinto.js/issues/444, and https://github.com/Kinto/kinto.js/issues/445.

There are a few potential issues I see.

1) As you mention, some of the promise handlers depend on state information that changes if they're interleaved.
2) Most of these handlers dispatch events, and the ordering of those events is not deterministic. A change notification dispatched by `remove` might be sent after a notification for a change to the same key dispatched by `set`, even if the `set` call happened later, and represents the current state.
3) Call sequences like the one I mentioned above, `sync.remove(key); sync.set(key, value);`, should probably be expected to work, even if the caller didn't wait for one operation to finish before starting the next.

The easiest way I can think of dealing with this is to queue all operations, and perform them sequentially, but I'm not sure if the performance hit would be acceptable. It might be enough to just queue all operations on the same key. That wouldn't fix issue #1 when there are multiple clients, but it would at least make it less likely to happen in practice.
https://reviewboard.mozilla.org/r/55838/#review52522

Ugh. Sorry. It looks like I forgot to publish this reply after I wrote it.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/9-10/
Attachment #8757379 - Attachment description: MozReview Request: Bug 1253740 - Implement storage.sync, r?bsilverberg → Bug 1253740 - Implement storage.sync,
Attachment #8758378 - Attachment description: MozReview Request: update test to use templates, r?bsilverberg → update test to use templates,
Attachment #8757379 - Flags: review?(kmaglione+bmo)
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/2-3/
https://reviewboard.mozilla.org/r/55838/#review52522

> Please use Cu.reportError instead.
> 
> Also, please make sure to always end dump() calls with a newline, and always use double quotes.

Per Michiel's comments in https://reviewboard.mozilla.org/r/54584/#comment66930, I have deleted these try-catch blocks entirely.
https://reviewboard.mozilla.org/r/54586/#review52258

> Could you simply call `notifyListeners` both in the `resolve` handler and in the `reject` handler? The it would get called in either case.
> 
> I'm still not entirely clear on what we want to happen if one of the updates is rejected though - do we want to allow any successful updates that happened before the rejection to be committed, or do we want it to be an all-or-nothing scenario? Might it be confusing to a consumer if they call `set` and some items are successfully set while others are not? I guess they can interrogate `changes` in an `onChanged` listener, but will they know that that is necessary?

The largest problem here is that Kinto doesn't really expose any concept of a transaction (because it's distributed, after all). Ideally every local update happens atomically, but there are no consistency guarantees about several updates happening at once. This means that it's hard to gather the information for the changes notification without implementing a race condition. It might be possible to finesse this at the Kinto.js layer since it uses IndexedDB, which has a very limited idea of a transaction, but either way, we can only gather the `changes` variable, which we use to tell the listeners what updates actually happened, from the asynchronous results of the promises produced by `update()` and `create()`.

So until all the promises complete, it's incorrect to call `notifyChanges`. And, as I wrote above, if any update fails, the giant Promise.all promise will reject immediately. Our options seem to me to be:

- Don't call notifyListeners when we fail, but instead throw the first exception we get. This is the easiest solution (more or less what is implemented already). It may surprise users because the operation doesn't fail all-or-nothing, but parts can fail and parts can succeed, and notifyListeners won't have been called.

- Call notifyListeners when we get the first failure with whatever happens to be already collected in changes. This is IMHO the worst solution because it's more work, it still might surprise users, and not all of the asynchronous results will be ready, so some things will be updated even after the notifyListeners call.

- Wrap all these operations in the Promise equivalent of try/catch and take care to not accidentally trigger the Promise.all "short-circuit" behavior, but store their failures somehow, then call notifyChanges, then afterwards pick one of the failures to reject with. This way the user gets both some kind of failure notification, and an accurate notifyChanges, but it means the most work for us (partly because of the limits of the Promise API).

As far as I can tell, the chrome.storage docs don't make any sort of guarantees whatsoever about the failure modes of these operations -- all or nothing, or piecemeal, both seem permitted according to the "spec". Of course, neither does Kinto.js, so it's not even really clear what kind of failures we might have. Mostly they seem like they would be programming failures in this ExtensionStorageSync code, except possibly if you exceed the storage limits, in which case I think pretty much any behavior would be acceptable. So I'd say we should just leave this code as-is, although I'm happy to implement the last solution if you prefer.
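To make that third option concrete, here is a rough sketch (my own illustration, not the patch: `settleAll` is essentially what `Promise.allSettled` would do, and `notifyChanges` stands in for whatever notification hook we end up with):

```javascript
// Settle every promise without letting Promise.all short-circuit:
// each promise is wrapped so both outcomes resolve to a tagged result.
function settleAll(promises) {
  return Promise.all(promises.map(p =>
    p.then(value => ({ok: true, value}),
           error => ({ok: false, error}))
  ));
}

// Hypothetical helper: wait for all updates, notify with the changes
// that actually happened, then re-reject with the first failure.
function applyUpdates(updates, notifyChanges) {
  return settleAll(updates).then(results => {
    const changes = results.filter(r => r.ok).map(r => r.value);
    notifyChanges(changes);
    const firstFailure = results.find(r => !r.ok);
    if (firstFailure) {
      throw firstFailure.error;
    }
    return changes;
  });
}
```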

> I think this might be solveable in the tests. Once everything else is addressed we can take another look at this.

I just tried it, and yes, it seems like the tests that ensure that the flag behaves correctly are the ones that start to fail. They fail because the error message you get is no longer the flag error message, but instead "An unexpected error occurred", which I think is due to `normalizeError()` in `ExtensionUtils.jsm`.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/10-11/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/3-4/
https://reviewboard.mozilla.org/r/55838/#review52522

> Sounds good to me.

The new version of this patch has newer, cleaner `keyToId` and `idToKey` functions.

> I'm not sure that's a good idea. That module is part of the webapp runtime, and it's not really meant to be used externally. And it's probably going to wind up breaking things if/when the webapp runtime is removed from desktop builds.

Fixed as part of the new `keyToId` implementation, above.
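For reference, a reversible encoding along these lines can be sketched as follows. This is a hypothetical illustration rather than the actual patch: the function names match the exported `keyToId`/`idToKey`, but the `key-` prefix and `_XXXX` escape scheme here are my own.

```javascript
// Encode arbitrary keys into a restricted record-ID character set
// reversibly, with no hashing. Characters outside [a-zA-Z0-9] are
// escaped as "_" plus four hex digits of their char code; the "key-"
// prefix marks IDs produced by this encoding.
function keyToId(key) {
  return "key-" + Array.from(key, c => {
    return /[a-zA-Z0-9]/.test(c)
      ? c
      : "_" + c.charCodeAt(0).toString(16).padStart(4, "0");
  }).join("");
}

// Inverse transform; returns null for IDs we didn't produce.
function idToKey(id) {
  if (!id.startsWith("key-")) {
    return null;
  }
  return id.slice(4).replace(/_([0-9a-f]{4})/g,
    (_, hex) => String.fromCharCode(parseInt(hex, 16)));
}
```

Because literal underscores are themselves escaped, the encoding is unambiguous: every `_` in an encoded ID starts an escape sequence.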
https://reviewboard.mozilla.org/r/54586/#review52258

> Right, looking at https://github.com/Kinto/kinto.js/blob/master/src/collection.js#L528 the `if (res)` clause will always be true, and this else clause can be removed entirely.

Removed in https://reviewboard.mozilla.org/r/55838/.
https://reviewboard.mozilla.org/r/55838/#review52522

> There are a few potential issues I see.
> 
> 1) As you mention, some of the promise handlers depend on state information that changes if they're interleaved.
> 2) Most of these handlers dispatch events, and the ordering of those events is not deterministic. A change notification dispatched by `remove` might be sent after a notification for a change to the same key dispatched by `set`, even if the `set` call happened later, and represents the current state.
> 3) Call sequences like the one I mentioned above, `sync.remove(key); sync.set(key, value);`, should probably be expected to work, even if the caller didn't wait for one operation to finish before starting the next.
> 
> The easiest way I can think of dealing with this is to queue all operations, and perform them sequentially, but I'm not sure if the performance hit would be acceptable. It might be enough to just queue all operations on the same key. That wouldn't fix issue #1 when there are multiple clients, but it would at least make it less likely to happen in practice.

I'm hoping to address point 1) by defining a set of Kinto.js operations that are atomic, i.e. conducted entirely within one IndexedDB transaction. Specifically, I'm hoping that :leplatrem will solve https://github.com/Kinto/kinto.js/issues/446 for me. That may be a somewhat invasive change and I don't know how long it will take, but it's really the only way to go for a general-purpose key-value store.

As for point 3), I think we're covered because IndexedDB transactions, on top of which Kinto.js is implemented, are guaranteed to execute in the order of their creation (see https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction), which (once I have the above atomic operations) will be the order of the Kinto.js function calls.

I don't think this completely covers point 2), because the code after the IndexedDB transactions finish could be run in any order. I agree that this is bad. We probably can't queue operations on the same key because you can have multi-key sets; if each one grabs the lock on one key, you get a deadlock. So I think the only way to address this is to enqueue all operations on chrome.storage, which totally sucks. I get the impression that the intended use case for chrome.storage.sync is mostly settings and stuff that doesn't change that often, so I imagine the performance hit would be OK. (I'm not sure if this is true for chrome.storage.local.) I guess we only really need to enqueue any operation that produces a notification, so we only need to enqueue calls to `set`, `remove`, and `clear`. Maybe this won't be so bad.
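For the record, the serialized-queue idea can be as small as chaining every mutating call onto one pending promise. This is a sketch under the assumption that only `set`, `remove`, and `clear` go through it, not the implementation itself:

```javascript
// Run operations strictly in call order by chaining each one onto the
// tail of a single promise. A rejection must not stall the queue, so
// the stored tail always swallows failures, while the value returned
// to the caller still carries the operation's own result or error.
class OpQueue {
  constructor() {
    this.pending = Promise.resolve();
  }
  enqueue(op) {
    // Chain whether the previous op succeeded or failed.
    const result = this.pending.then(op, op);
    this.pending = result.catch(() => {});
    return result;
  }
}
```

Callers get back the same promise they would have gotten from the bare operation; only the scheduling changes.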
https://reviewboard.mozilla.org/r/54586/#review52258

> The largest problem here is that Kinto doesn't really expose any concept of a transaction (because it's distributed, after all). Ideally every local update happens atomically, but there are no consistency guarantees about several updates happening at once. This means that it's hard to gather the information for the changes notification without implementing a race condition. It might be possible to finesse this at the Kinto.js layer since it uses IndexedDB, which has a very limited idea of a transaction, but either way, we can only gather the `changes` variable, which we use to tell the listeners what updates actually happened, from the asynchronous results of the promises produced by `update()` and `create()`.
> 
> So until all the promises complete, it's incorrect to call `notifyChanges`. And, as I wrote above, if any update fails, the giant Promise.all promise will reject immediately. Our options seem to me to be:
> 
> - Don't call notifyListeners when we fail, but instead throw the first exception we get. This is the easiest solution (more or less what is implemented already). It may surprise users because the operation doesn't fail all-or-nothing, but parts can fail and parts can succeed, and notifyListeners won't have been called.
> 
> - Call notifyListeners when we get the first failure with whatever happens to be already collected in changes. This is IMHO the worst solution because it's more work, it still might surprise users, and not all of the asynchronous results will be ready, so some things will be updated even after the notifyListeners call.
> 
> - Wrap all these operations in the Promise equivalent of try/catch and take care to not accidentally trigger the Promise.all "short-circuit" behavior, but store their failures somehow, then call notifyChanges, then afterwards pick one of the failures to reject with. This way the user gets both some kind of failure notification, and an accurate notifyChanges, but it means the most work for us (partly because of the limits of the Promise API).
> 
> As far as I can tell, the chrome.storage docs don't make any sort of guarantees whatsoever about the failure modes of these operations -- all or nothing, or piecemeal, both seem permitted according to the "spec". Of course, neither does Kinto.js, so it's not even really clear what kind of failures we might have. Mostly they seem like they would be programming failures in this ExtensionStorageSync code, except possibly if you exceed the storage limits, in which case I think pretty much any behavior would be acceptable. So I'd say we should just leave this code as-is, although I'm happy to implement the last solution if you prefer.

Per our meeting today, we agreed that we need to follow whatever behavior Chrome has here, since the only standard we have is really their behavior. AFAICS from https://cs.chromium.org/chromium/src/extensions/browser/value_store/legacy_value_store_factory.cc?sq=package:chromium&l=176 and related files, Chrome's backend is LevelDB, and it uses a WriteBatch to make sure that these things happen atomically. In other words, if any update fails, none are applied, and then we don't call notifyListeners at all. Kinto doesn't exactly support the idea of a transaction by itself, since it's a distributed system, but Kinto.js is built on a local database (either IndexedDB or SQLite) which should let us perform batch writes (which is the only thing we actually need). Since this is the first real use case for a read-write Kinto, we (I) will implement whatever it is that the use case requires.
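As an illustration of the all-or-nothing shape we're after, here is a toy in-memory analogue (my own sketch, not the Kinto.js API): stage mutations on a copy of the store and only swap the copy in once every operation has succeeded, so a failure part-way through commits nothing.

```javascript
// Toy batch-write store mirroring LevelDB's WriteBatch semantics:
// either every operation in the batch applies, or none do.
class MemoryStore {
  constructor() {
    this.records = new Map();
  }
  executeBatch(ops) {
    // Stage all mutations on a copy; nothing is visible until commit.
    const staged = new Map(this.records);
    const changes = [];
    for (const op of ops) {
      if (op.type === "put") {
        changes.push({key: op.key, oldValue: staged.get(op.key),
                      newValue: op.value});
        staged.set(op.key, op.value);
      } else if (op.type === "delete") {
        if (staged.has(op.key)) {
          changes.push({key: op.key, oldValue: staged.get(op.key)});
          staged.delete(op.key);
        }
      } else {
        // Abort before the commit point: the real store is untouched.
        throw new Error("unknown op: " + op.type);
      }
    }
    this.records = staged; // commit point
    return changes;       // what to hand to notifyListeners
  }
}
```

The returned `changes` array is exactly the information the change notification needs, and it only exists once the batch has committed.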
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/11-12/
Attachment #8757379 - Flags: review?(kmaglione+bmo)
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/4-5/
https://reviewboard.mozilla.org/r/54586/#review52296

> We should add a `SpecialPowers.clearUserPref("extension.storage.sync.enabled");` at the end of the test too, and probably add a `SimpleTest.registerCleanupFunction(() => { SpecialPowers.clearUserPref("extension.storage.sync.enabled"); });` at the top in case things blow up in the middle.

This took me longer to figure out than I would have liked, first because I misread "SimpleTest" as "SpecialPowers" and second because of how `backgroundScript` is run in a separate scope. I ended up having to duplicate the definition of `STORAGE_SYNC_PREF`. Could you tell me if there's a cleaner way to do this?
The newest version of this patch depends on some patches to the Kinto.js library, which I used while testing locally, but haven't committed as part of this patch. As discussed in a video chat a week or two ago, I'd rather those changes be a separate commit that lands "ahead" of this one.

This version of the patch cleans up the ugliness of trying to figure out whether something is present, absent or deleted when getting/setting/removing it from Kinto, using the new methods I added to Kinto.js. It doesn't yet address the need to set or remove objects in a transaction, but I wanted to first get it out while it's in a decent shape.
https://reviewboard.mozilla.org/r/55838/#review56598

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:217
(Diff revision 12)
> +      SpecialPowers.setBoolPref(STORAGE_SYNC_PREF, true)
> +    );
> +  }
> +
> +  runTests("local").then(() => {
> +    SpecialPowers.setBoolPref(STORAGE_SYNC_PREF, true);

Bob was astonished that calling SpecialPowers from inside the backgroundScript worked correctly. It would be better to coordinate the pref-flipping and things from the test harness. This would also resolve the problem of how to pass STORAGE_SYNC_PREF into the background script.
https://reviewboard.mozilla.org/r/55838/#review52522

> We should trap and report errors if this throws.

So that just means catching the error and logging it with `console.error` or something? How should I report errors?
https://reviewboard.mozilla.org/r/54586/#review52258

> I just tried it, and yes, it seems like the tests that ensure that the flag behaves correctly are the ones that start to fail. They fail because the error message you get is no longer the flag error message, but instead "An unexpected error occurred", which is I think due to `normalizeError()` in `ExtensionUtils.jsm`.

I've marked this as fixed because we just looked at it today, although I don't think the patch attached to the other review has been updated to address this yet.
https://reviewboard.mozilla.org/r/54586/#review52296

> This took me longer to figure out than I would have liked, first because I misread "SimpleTest" as "SpecialPowers" and then second because of how `backgroundScript` is run in a separate scope. I ended up having to duplicate the definition of `STORAGE_SYNC_PREF`. Could you tell me if there's a cleaner way to do this?

We discussed this as well today. You're going to move the calls to SpecialPowers outside of the background script which should eliminate the need for the pref const inside the background script.
https://reviewboard.mozilla.org/r/55838/#review56764

Copying over the remaining comment from the other review so I can stop looking at it.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:133
(Diff revision 12)
> +      let changes = {};
> +      const promises = [];
> +      for (let itemId in items) {
> +        let item = ExtensionStorage.sanitize(items[itemId], global);
> +        let key = itemId;   // we can't close over for .. in
> +        promises.push(coll.put({

Bob commented in https://reviewboard.mozilla.org/r/54586/#comment68484 that this, as well as remove, should happen in a transaction (which requires changes to Kinto.js). This comment represents that remaining comment on the other review.
Comment on attachment 8755387 [details]
MozReview Request: Bug 1253740 - Implement storage.sync, r?bsilverberg

Thanks Ethan. It does indeed look like all of my initial comments, other than the big issue of transactions, have been addressed, so I'm going to obsolete this attachment so it won't distract us anymore.
Attachment #8755387 - Attachment is obsolete: true
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/12-13/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/5-6/
https://reviewboard.mozilla.org/r/55838/#review52522

> I'm hoping to address point 1) by defining a set of Kinto.js operations that are atomic, i.e. conducted entirely within one IndexedDB transaction. Specifically, I'm hoping that :leplatrem will solve https://github.com/Kinto/kinto.js/issues/446 for me. That may be a somewhat invasive change and I don't know how long it will take, but it's really the only way to go for a general-purpose key-value store.
> 
> As for point 3), I think we're covered because IndexedDB transactions, on top of which Kinto.js is implemented, are guaranteed to execute in the order of their creation (see https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction), which (once I have the above atomic operations) will be the order of the Kinto.js function calls.
> 
> I don't think this completely covers point 2), because the code after the IndexedDB transactions finish could be run in any order. I agree that this is bad. We probably can't queue operations on the same key because you can have multi-key sets; if each one grabs the lock on one key, you get a deadlock. So I think the only way to address this is to enqueue all operations on chrome.storage, which totally sucks. I get the impression that the intended use case for chrome.storage.sync is mostly settings and stuff that doesn't change that often, so I imagine the performance hit would be OK. (I'm not sure if this is true for chrome.storage.local.) I guess we only really need to enqueue any operation that produces a notification, so we only need to enqueue calls to `set`, `remove`, and `clear`. Maybe this won't be so bad.

Now that I have the possibility of using transactions at this level, I could pull the `notifyListeners` into the transaction. That feels kind of wrong to me because I'm not sure how long the listeners will take to run and I'll be blocking the main event loop, plus I just generically feel bad for having more stuff in a transaction than I need to. But that might solve this issue as well. What do you think?

> Why is this necessary?

Thanks a whole bunch to :bsilverberg who figured out what was going on. The old version of the `ExtensionStorageSync` code was throwing strings as exceptions, and there's a bit of code in `ExtensionUtils` called `normalizeError` which assumes errors will always be `{message: ....}` and, if they aren't, replaces them with `An unexpected error occurred`. So the `{message: ...}` exceptions thrown in some places passed through unchanged, while the string exceptions thrown in other places were masked behind the generic message. :bsilverberg informs me that you should never throw string exceptions, so I've just turned everything into `{message: ...}` and gotten rid of this function.
This diff is ready for another round of review. I think I've addressed every comment except possibly one from :kmag. Note that this code won't work on your machine unless you also update the `kinto-offline-client.js`, which I hope to do as a separate diff once I get something like https://github.com/Kinto/kinto.js/pull/478 merged.
Thanks Ethan. I will take another look at the latest commit.
Depends on: 1282109
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/13-14/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/6-7/
https://reviewboard.mozilla.org/r/55838/#review52522

> That just means catching and `console.error` or something? How should I report errors?

Hmm, having read a bunch more code, I guess `Cu.reportError` is the way to report errors?
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/14-15/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/7-8/
https://reviewboard.mozilla.org/r/55838/#review58238

I've reviewed all of the code except for the tests, which I will get to tomorrow. Nice work!

::: toolkit/components/extensions/ExtensionStorageSync.jsm:16
(Diff revision 15)
> +const Cu = Components.utils;
> +const Cr = Components.results;
> +
> +
> +const STORAGE_SYNC_ENABLED = "extension.storage.sync.enabled";
> +const AREA_NAME = "sync";

This appears to be unused.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:57
(Diff revision 15)
> +  // user ID, extension ID, and some secret.
> +  let collectionId = extensionId;
> +  const Kinto = loadKinto();
> +  let coll;
> +  if (!Kinto) {
> +    return Promise.reject(new Error("Not supported"));

Should this error be more specific about _what_ is not supported?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:164
(Diff revision 15)
> +          const key = keys[i];
> +          const id = ids[i];
> +          const res = txn.deleteAny(id);
> +          if (res.deleted) {
> +            changes[key] = {
> +              oldValue: res.data.data,

Why `res.data.data`? That seems odd compared to `oldRecord.data` in `set()`.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:230
(Diff revision 15)
> +  },
> +
> +  addOnChangedListener(extensionId, listener) {
> +    let listeners = this.listeners.get(extensionId) || new Set();
> +    listeners.add(listener);
> +    this.listeners.set(extensionId, listeners);

Could there be an issue with the fact that we are replacing all the listeners when adding a new listener, as opposed to just adding to the existing Set? There may not be, but I'm wondering if it's possible that there could be a race between a listener firing and another listener being added.
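For what it's worth, the registry can be written so the Set is created once and then only mutated in place, which sidesteps the replace-vs-add question entirely. A sketch of that shape (my own illustration, not the patch):

```javascript
// Map from extension ID to a Set of onChanged listeners. The Set is
// created lazily and mutated in place; it is never replaced, so a
// notification iterating the Set and a concurrent add always see the
// same Set object.
const listeners = new Map();

function addOnChangedListener(extensionId, listener) {
  let set = listeners.get(extensionId);
  if (!set) {
    set = new Set();
    listeners.set(extensionId, set);
  }
  set.add(listener);
}

function notifyListeners(extensionId, changes) {
  const set = listeners.get(extensionId);
  if (set) {
    for (const listener of set) {
      listener(changes);
    }
  }
}
```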
https://reviewboard.mozilla.org/r/55838/#review52522

> Hmm, having read a bunch more code, I guess `Cu.reportError` is the way to report errors?

I do believe that `Cu.reportError` is the way to go, so give that a try. Kris will correct us both during final review if we're both wrong. :)
https://reviewboard.mozilla.org/r/55838/#review58386

The test looks good now, Ethan. Nice work!

::: toolkit/components/extensions/ExtensionStorageSync.jsm:7
(Diff revision 15)
> + * License, v. 2.0. If a copy of the MPL was not distributed with this
> + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
> +
> +"use strict";
> +
> +this.EXPORTED_SYMBOLS = ["ExtensionStorageSync", "keyToId", "idToKey"];

Is there a need to export `keyToId` and `idToKey` other than so they can be unit tested? If not then maybe they don't need to be exported nor tested.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:19
(Diff revision 15)
>  "use strict";
>  
> +const STORAGE_SYNC_PREF = "extension.storage.sync.enabled";
> +
>  function backgroundScript() {
> -  let storage = browser.storage.local;
> +  // This is necessary because backgroundScript is toString()'d and

I'm not sure to what this comment refers, which suggests to me it's not needed.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:21
(Diff revision 15)
> +const STORAGE_SYNC_PREF = "extension.storage.sync.enabled";
> +
>  function backgroundScript() {
> -  let storage = browser.storage.local;
> -  function check(prop, value) {
> +  // This is necessary because backgroundScript is toString()'d and
> +  // run in a different scope
> +  function check(areaName, prop, value) {

Maybe call this `checkGet` as that's what it's doing.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:135
(Diff revision 15)
> -  }).then(() => {
> +    }).then(() => {
> -    return storage.set({"test-prop1": "value1", "test-prop2": "value2"});
> +      return storage.set({"test-prop1": "value1", "test-prop2": "value2"});
> -  }).then(() => {
> +    }).then(() => {
> -    globalChanges = {};
> +      globalChanges = {};
> -    browser.test.sendMessage("invalidate");
> +      browser.test.sendMessage(`invalidate-${areaName}`);
> -    return new Promise(resolve => browser.test.onMessage.addListener(resolve));
> +      return new Promise(resolve => browser.test.onMessage.addListener(resolve));

I'm not totally sure if this is okay, but it seems to work so I'll leave it to Kris to verify.

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:6
(Diff revision 15)
> +"use strict";
> +
> +Cu.import("resource://gre/modules/ExtensionStorageSync.jsm");
> +
> +function run_test() {
> +  equal(typeof ExtensionStorageSync, "object");

This isn't really needed as none of the other tests will work if `ExtensionStorageSync` doesn't exist.

As well, we might be able to just get rid of all of these tests if the mochitests that exercise the API are sure to also exercise these key operations.
https://reviewboard.mozilla.org/r/54586/#review52258

> This should be fixed in the patch at https://reviewboard.mozilla.org/r/55838/.

It turns out that this map doesn't actually store collections. Instead, it's storing *promises* to collections. So the original name might have been more correct. I just discovered this the hard way after struggling for a couple hours to understand why my collections were suddenly losing their attributes.
https://reviewboard.mozilla.org/r/55838/#review58628

::: toolkit/components/extensions/ExtensionStorageSync.jsm:68
(Diff revision 15)
> +      adapterOptions: {path: "storage-sync.sqlite"},
> +    });
> +    coll = db.collection(collectionId, {
> +      idSchema: storageSyncIdSchema,
> +    });
> +    return coll.db.open();

I just realized/noticed that the reason that the mochitests take so long to shut down is because they're waiting for the Sqlite connection to be closed. This is the same issue I was running into with testing the next patch using xpcshell tests. How can I ensure that cleanup happens on this DB handle? I can add something to the tests to force it, but I imagine that if the tests hang, it's because real code will also hang, so probably I should fix it for real, but I don't know how.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/15-16/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/8-9/
https://reviewboard.mozilla.org/r/55838/#review58386

> Is there a need to export `keyToId` and `idToKey` other than so they can be unit tested? If not then maybe they don't need to be exported nor tested.

That's the only reason. Are you sure you want not to test them? I feel like they're easy to test since they're pure functions with no dependencies, and I don't really see a problem with exporting them.

> I'm not totally sure if this is okay, but it seems to work so I'll leave it to Kris to verify.

I didn't really know what the correct thing to do here was, but now that you point it out, it sure does look suspicious that we keep adding more and more listeners. I've revised this a little bit, can you tell me if you think it seems more sane like this? I couldn't find a `browser.test.awaitMessage`, which is what I really wanted in this case.

> This isn't really needed as none of the other tests will work if `ExtensionStorageSync` doesn't exist.
> 
> As well, we might be able to just get rid of all of these tests if the mochitests that exercise the API are sure to also exercise these key operations.

Yes, but I like having additional assurance that this underlying function behaves in this known way. But I'll trust your judgment on this if you want to get rid of them.
https://reviewboard.mozilla.org/r/55838/#review56598

> Bob was astonished that calling SpecialPowers from inside the backgroundScript worked correctly. It would be better to coordinate the pref-flipping and things from the test harness. This would also resolve the problem of how to pass STORAGE_SYNC_PREF into the background script.

I think you've now fixed this, correct? If so, feel free to mark this issue as fixed.
https://reviewboard.mozilla.org/r/55838/#review58628

> I just realized/noticed that the reason that the mochitests take so long to shut down is because they're waiting for the Sqlite connection to be closed. This is the same issue I was running into with testing the next patch using xpcshell tests. How can I ensure that cleanup happens on this DB handle? I can add something to the tests to force it, but I imagine that if the tests hang, it's because real code will also hang, so probably I should fix it for real, but I don't know how.

I'm not sure how to address this, but I wanted to add that I am seeing error messages about this too. When running the tests, at the end of the console log I see:

217 INFO TEST-OK | toolkit/components/extensions/test/mochitest/test_ext_storage.html | took 1470ms
218 INFO TEST-START | Shutdown
219 INFO Passed:  212
220 INFO Failed:  0
221 INFO Todo:    0
222 INFO Mode:    e10s
223 INFO Slowest: 1470ms - /tests/toolkit/components/extensions/test/mochitest/test_ext_storage.html
224 INFO SimpleTest FINISHED
225 INFO TEST-INFO | Ran 1 Loops
226 INFO SimpleTest FINISHED
JavaScript error: chrome://browser/content/tabbrowser.xml, line 2968: TypeError: this.tabs is undefined
WARNING: At least one completion condition is taking too long to complete. Conditions: [{"name":"Sqlite.jsm shutdown blocker","state":{"description":"Waiting for connections to close","state":[{"name":"storage-sync.sqlite#0: waiting for shutdown","state":{"identifier":"storage-sync.sqlite#0","isCloseRequested":false,"hasDbConn":true,"hasInProgressTransaction":false,"pendingStatements":0,"statementCounter":137},"filename":"resource://gre/modules/Sqlite.jsm","lineNumber":259,"stack":["resource://gre/modules/Sqlite.jsm:ConnectionData:259","resource://gre/modules/Sqlite.jsm:OpenedConnection:1124","resource://gre/modules/Sqlite.jsm:openConnection/</<:933"]}]},"filename":"resource://gre/modules/Sqlite.jsm","lineNumber":157,"stack":["resource://gre/modules/Sqlite.jsm:null:157","resource://gre/modules/XPCOMUtils.jsm:XPCU_defineLazyGetter/<.get:198","resource://gre/modules/Sqlite.jsm:ConnectionData:259","resource://gre/modules/Sqlite.jsm:OpenedConnection:1124","resource://gre/modules/Sqlite.jsm:wrapStorageConnection/<:1060","resource://gre/modules/Sqlite.jsm:wrapStorageConnection:1057","resource://gre/modules/PlacesUtils.jsm:null:1937","resource://gre/modules/XPCOMUtils.jsm:XPCU_defineLazyGetter/<.get:198","resource://gre/modules/PlacesUtils.jsm:this.PlacesUtils.withConnectionWrapper/<:1279","resource://gre/modules/Task.jsm:TaskImpl_run:319","resource://gre/modules/Task.jsm:TaskImpl:280","resource://gre/modules/Task.jsm:createAsyncFunction/asyncFunction:254","resource://gre/modules/Task.jsm:Task_spawn:168","resource://gre/modules/PlacesUtils.jsm:this.PlacesUtils.withConnectionWrapper:1278","resource://gre/modules/PlacesUtils.jsm:null:2132","resource://gre/modules/XPCOMUtils.jsm:XPCU_defineLazyGetter/<.get:198","resource://gre/modules/PlacesUtils.jsm:PU_observe:434","file:///Users/bsilverberg/mozilla-central/obj-ff-opt-artifact/dist/Nightly.app/Contents/Resources/components/PlacesCategoriesStarter.js:PlacesCategoriesStarter/notify:51","resource://gre/modules/BookmarkHTMLUtils.jsm:walkTreeForImport:935","resource://gre/modules/BookmarkHTMLUtils.jsm:BookmarkImporter.prototype.importFromURL</</xhr.onload:945"]}] Barrier: profile-before-change
WARNING: At least one completion condition is taking too long to complete. Conditions: [{"name":"storage-sync.sqlite#0: waiting for shutdown","state":{"identifier":"storage-sync.sqlite#0","isCloseRequested":false,"hasDbConn":true,"hasInProgressTransaction":false,"pendingStatements":0,"statementCounter":137},"filename":"resource://gre/modules/Sqlite.jsm","lineNumber":259,"stack":["resource://gre/modules/Sqlite.jsm:ConnectionData:259","resource://gre/modules/Sqlite.jsm:OpenedConnection:1124","resource://gre/modules/Sqlite.jsm:openConnection/</<:933"]}] Barrier: Sqlite.jsm: wait until all connections are closed
FATAL ERROR: AsyncShutdown timeout in profile-before-change Conditions: [{"name":"Sqlite.jsm shutdown blocker","state":{"description":"Waiting for connections to close","state":[{"name":"storage-sync.sqlite#0: waiting for shutdown","state":{"identifier":"storage-sync.sqlite#0","isCloseRequested":false,"hasDbConn":true,"hasInProgressTransaction":false,"pendingStatements":0,"statementCounter":137},"filename":"resource://gre/modules/Sqlite.jsm","lineNumber":259,"stack":["resource://gre/modules/Sqlite.jsm:ConnectionData:259","resource://gre/modules/Sqlite.jsm:OpenedConnection:1124","resource://gre/modules/Sqlite.jsm:openConnection/</<:933"]}]},"filename":"resource://gre/modules/Sqlite.jsm","lineNumber":157,"stack":["resource://gre/modules/Sqlite.jsm:null:157","resource://gre/modules/XPCOMUtils.jsm:XPCU_defineLazyGetter/<.get:198","resource://gre/modules/Sqlite.jsm:ConnectionData:259","resource://gre/modules/Sqlite.jsm:OpenedConnection:1124","resource://gre/modules/Sqlite.jsm:wrapStorageConnection/<:1060","resource://gre/modules/Sqlite.jsm:wrapStorageConnection:1057","resource://gre/modules/PlacesUtils.jsm:null:1937","resource://gre/modules/XPCOMUtils.jsm:XPCU_defineLazyGetter/<.get:198","resource://gre/modules/PlacesUtils.jsm:this.PlacesUtils.withConnectionWrapper/<:1279","resource://gre/modules/Task.jsm:TaskImpl_run:319","resource://gre/modules/Task.jsm:TaskImpl:280","resource://gre/modules/Task.jsm:createAsyncFunction/asyncFunction:254","resource://gre/modules/Task.jsm:Task_spawn:168","resource://gre/modules/PlacesUtils.jsm:this.PlacesUtils.withConnectionWrapper:1278","resource://gre/modules/PlacesUtils.jsm:null:2132","resource://gre/modules/XPCOMUtils.jsm:XPCU_defineLazyGetter/<.get:198","resource://gre/modules/PlacesUtils.jsm:PU_observe:434","file:///Users/bsilverberg/mozilla-central/obj-ff-opt-artifact/dist/Nightly.app/Contents/Resources/components/PlacesCategoriesStarter.js:PlacesCategoriesStarter/notify:51","resource://gre/modules/BookmarkHTMLUtils.jsm:walkTreeForImport:935","resource://gre/modules/BookmarkHTMLUtils.jsm:BookmarkImporter.prototype.importFromURL</</xhr.onload:945"]}] At least one completion condition failed to complete within a reasonable amount of time. Causing a crash to ensure that we do not leave the user with an unresponsive process draining resources.
[Parent 81413] ###!!! ABORT: file resource://gre/modules/Sqlite.jsm, line 157
[Parent 81413] ###!!! ABORT: file resource://gre/modules/Sqlite.jsm, line 157
TEST-INFO | Main app process: exit 1
TEST-UNEXPECTED-FAIL | toolkit/components/extensions/test/mochitest/test_ext_storage.html | application terminated with exit code 1
runtests.py | Application ran for: 0:01:06.131889
zombiecheck | Reading PID log: /var/folders/kk/kg9b05_93xv73zflrf0v17m00000gn/T/tmpYp8Jzepidlog
==> process 81413 launched child process 81414
zombiecheck | Checking for orphan process with PID: 81414
PROCESS-CRASH | toolkit/components/extensions/test/mochitest/test_ext_storage.html | application crashed [None]
Crash dump filename: /var/folders/kk/kg9b05_93xv73zflrf0v17m00000gn/T/tmps13H4m.mozrunner/minidumps/E7E85EB2-08ED-48B6-B138-7722A79EC742.dmp
MINIDUMP_STACKWALK not set, can't process dump.
Stopping web server
Stopping web socket server
Stopping ssltunnel
WARNING | leakcheck | refcount logging is off, so leaks can't be detected!
runtests.py | Running tests: end.
0 INFO TEST-START | Shutdown
1 INFO Passed:  212
2 INFO Failed:  0
3 INFO Todo:    0
4 INFO Mode:    e10s
5 INFO SimpleTest FINISHED
SUITE-END | took 66s
https://reviewboard.mozilla.org/r/55838/#review58386

> That's the only reason. Are you sure you want not to test them? I feel like they're easy to test since they're pure functions with no dependencies, and I don't really see a problem with exporting them.

I agree that it's nice to have unit tests. Let's keep it the way it is.

> I didn't really know what the correct thing to do here was, but now that you point it out, it sure does look suspicious that we keep adding more and more listeners. I've revised this a little bit, can you tell me if you think it seems more sane like this? I couldn't find a `browser.test.awaitMessage`, which is what I really wanted in this case.

What you've done is interesting, but not the way we generally deal with the issue. I'm not saying it's wrong, but there is some value in consistency. Rather than trying to force everything into one big promise chain, you could put each of the test blocks into a listener, which then runs a different block depending on which message is received. You would control the execution of the blocks by sending messages from the test to the extension and then sending a message from the extension (i.e., the background script) back to the test to inform it that the block was complete. For an example of this see `test_delete` in `browser_ext_history.js` [1].

[1] https://dxr.mozilla.org/mozilla-central/source/browser/components/extensions/test/browser/browser_ext_history.js#12
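For reference, the dispatch-on-message shape described here can be sketched as follows. This is a self-contained toy, not the actual mochitest harness: `makeBus` and the message names are made up, standing in for `extension.sendMessage`/`extension.awaitMessage` on the test side and `browser.test.onMessage`/`browser.test.sendMessage` in the background script.

```javascript
// Toy stand-in for the test <-> background-script message channels.
function makeBus() {
  const listeners = [];
  return {
    addListener(fn) { listeners.push(fn); },
    send(msg) { for (const fn of listeners) fn(msg); },
  };
}

const toExtension = makeBus();  // test -> background script
const toTest = makeBus();       // background script -> test
const log = [];

// "Background script": one listener that runs a different block
// depending on which message is received, then reports completion.
toExtension.addListener(msg => {
  if (msg === "set-initial") {
    log.push("initial values set");
    toTest.send("set-initial-done");
  } else if (msg === "check") {
    log.push("values checked");
    toTest.send("check-done");
  }
});

// "Test": drive the blocks in order by reacting to completions.
toTest.addListener(msg => {
  if (msg === "set-initial-done") {
    toExtension.send("check");
  }
});
toExtension.send("set-initial");

console.log(log.join(", "));  // → "initial values set, values checked"
```

The point of the pattern is that each block runs only when the test explicitly asks for it, so ordering is controlled from the test rather than by one long promise chain.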

> Yes, but I like having additional assurance that this underlying function behaves in this known way. But I'll trust your judgment on this if you want to get rid of them.

Sure, let's keep 'em.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/16-17/
Comment on attachment 8758378 [details]
update test to use templates,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/56614/diff/9-10/
https://reviewboard.mozilla.org/r/55838/#review58386

> What you've done is interesting, but not the way we generally deal with the issue. I'm not saying it's wrong, but there is some value in consistency. Rather than trying to force everything into one big promise chain, you could put each of the test blocks into a listener, which then runs a different block depending on which message is received. You would control the execution of the blocks by sending messages from the test to the extension and then sending a message from the extension (i.e., the background script) back to the test to inform it that the block was complete. For an example of this see `test_delete` in `browser_ext_history.js` [1].
> 
> [1] https://dxr.mozilla.org/mozilla-central/source/browser/components/extensions/test/browser/browser_ext_history.js#12

I've taken another swing at this, modeling it on the browser_ext_history tests you link to. In addition, I split out parts of the test that didn't need to be in the same test, including the test for invalidating local cache in storage.local and the test for checking that you can't use storage.sync unless you set a config flag. This is more invasive to the original underlying code, but I guess that ship has sailed. Could you take another look at this and see if it's more like what you want?
https://reviewboard.mozilla.org/r/55838/#review59630

The new test looks great, Ethan. I find it much more readable in its current format. Nice work!

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:46
(Diff revision 17)
> +      if (msg === "set-initial") {
> +        browser.storage.local.set({"test-prop1": "value1", "test-prop2": "value2"}).then(() => {
> +          browser.test.sendMessage("set-initial-done");
> +        });
> +      } else if (msg === "check") {
> +        checkGet("local", "test-prop1", "value1").then(() => {

I'm confused by this test. This seems to be checking that the values are the ones that we set during `set-initial`, but the check is being done after we call `invalidateExtensionStorageCache`. Are we testing that the cache invalidation does not remove the values?

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:93
(Diff revision 17)
> +      apiTests.forEach(testDef => {
> +        promises.push(new Promise((resolve, reject) => {
> +          browser.storage.sync[testDef.method].apply(undefined, testDef.args).then(
> +            () => reject("didn't fail with extension.storage.sync.enabled = false"),
> +            error => {
> +              browser.test.assertEq("Please set extension.storage.sync.enabled to tr" +

Nit: This seems like an odd place to break this string (in the middle of a word).

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:104
(Diff revision 17)
> +      });
> +
> +      return Promise.all(promises);
> +    }
> +
> +    browser.test.onMessage.addListener(msg => {

I don't think you need this at all. Rather than sending a message from the test to the background script to tell it to do the check, you could just do it in the background script when it loads. I.e., you could remove the `checkStorageSyncNeedsFlag` function wrapper and just do everything inside that function directly in the background script.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:131
(Diff revision 17)
> +  yield extension.awaitMessage("storage-sync-needs-flag-checked");
> +
> +  yield extension.unload();
> +});
> +
> +function backgroundScript(checkGet) {

We generally put this inside the task for the test, unless it needs to be shared by other tests, which I believe this does not.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:289
(Diff revision 17)
> +
> +  browser.test.sendMessage("ready");
> +}
> +
>  let extensionData = {
> -  background: "(" + backgroundScript.toString() + ")()",
> +  background: "(" + backgroundScript.toString() + ")(" + checkGet.toString() + ")",

You can shorten this as you've done at line 119, for example.
Comment on attachment 8758378 [details]
update test to use templates,

https://reviewboard.mozilla.org/r/56614/#review59636

I think having the two commits/reviews for this patch is making it more confusing. I would suggest just including the few changes that are in this commit with the other commit.
Attachment #8758378 - Flags: review?(bob.silverberg) → review-
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/17-18/
Attachment #8758378 - Attachment is obsolete: true
https://reviewboard.mozilla.org/r/55838/#review59630

> I'm confused by this test. This seems to be checking that the values are the ones that we set during `set-initial`, but the check is being done after we call `invalidateExtensionStorageCache`. Are we testing that the cache invalidation does not remove the values?

Yes, I believe that the test is meant to verify that cache invalidation doesn't forget or alter values. This test doesn't really make sense for a Kinto-based implementation of the storage API -- it only really applies to storage.local (and predates my or Michiel's changes).

> Nit: This seems like an odd place to break this string (in the middle of a word).

Yes, I also don't really know why this is like that. I thought it might be an attempt to avoid causing spurious grep hits for anyone looking for the boolean `true`. I'll reflow it.

> You can shorten this as you've done at line 119, for example.

I had done this in the following commit, but as you say, it's confusing, especially considering how many invasive changes I've already made in this commit, so I have rolled the second commit into this one.
https://reviewboard.mozilla.org/r/55838/#review58238

> Should this error be more specific about _what_ is not supported?

`loadKinto` should always be present, and there's no circumstance in which it returns a falsy value, so I'm just removing this `if` block, if that's OK with you.

> Why `res.data.data`? That seems odd compared to `oldRecord.data` in `set()`.

A `get` operation returns a `Promise` which resolves to an object with `{data: record, permissions: ....}`. The records we ourselves store are `{data: "user data", key: "some key", id: "some_20_key"}`. So the first `data` gets us to the record, and the second `data` gets us to the user data.

A `set` operation returns a `Promise` which resolves to an object with `{data: record, oldRecord: oldRecord, permissions: ....}`. For this reason, we use pattern matching to extract the `oldRecord` field. So instead, we're really doing `res.oldRecord.data`, but because of pattern matching we never really see `res`.
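Sketched with illustrative literals (these values are made up, not real Kinto server output), the two shapes look like this:

```javascript
// Illustrative shape of a `get` result: {data: record, permissions: ...},
// where the record itself carries the user data under its own `data` field.
const getResult = {
  data: {data: "user data", key: "some key", id: "some_20_key"},
  permissions: {},
};
// The first `.data` reaches the record; the second reaches the user data.
const userData = getResult.data.data;  // "user data"

// Illustrative shape of a `set` result: it additionally carries `oldRecord`.
const setResult = {
  data: {data: "new data", key: "some key", id: "some_20_key"},
  oldRecord: {data: "user data", key: "some key", id: "some_20_key"},
  permissions: {},
};
// Destructuring extracts the oldRecord field directly, so `res` never appears.
const {oldRecord} = setResult;
console.log(userData, oldRecord.data);  // "user data" both times
```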

> Could there be an issue with the fact that we are replacing all the listeners when adding a new listener, as opposed to just adding to the existing Set? There may not be, but I'm wondering if it's possible that there could be a race between a listener firing and another listener being added.

I think we're OK -- yes, there can be a race if both things happen at once, but either the new listener is notified, or it isn't, so there's no issue. I'm not 100% sure about this because I'm still struggling with the concurrency model in JS, and I'm assuming that there's no such thing as a concurrent modification exception or anything like that. Who can I ask to find out more about this?

There are two cases to consider: either there is no Set yet, or there is already one. If there is no Set yet, then we either do the `get()` in `notifyListeners` first, or we do the `set()` here first. If we do the `get()` in `notifyListeners` first, we don't notify listeners, which would be consistent with the listener happening first. If we do the `set()` here first, then when we do the `get()` in `notifyListeners`, we'll see the new listener, and it will be notified.

If there is already a set, then the race is between the iteration in `notifyListeners()` and the `add()` call here. I'm assuming that iterating over a set will produce its elements in an arbitrary order, and that `add()` will put the new element somewhere in the order, either before the "current element" in the loop, or after. If it's before, then we won't notify the new listener; if it's after, then we will.

I was also worried about two threads simultaneously adding listeners. Could they both do the `get`, both create new `Set`s, and then both `set` them (in which case one listener "wins")? It seems like only one thread can be in `addOnChangedListener` at once, again because of the concurrency model in JS-land, so I think we're OK.
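The copy-on-write bookkeeping under discussion can be sketched like so. The `Map`, the extension ids, and the function names here are hypothetical stand-ins for how the patch actually stores its listeners; the get/copy/set shape is the part being illustrated.

```javascript
// Hypothetical sketch: listeners keyed by extension id.
const listenersByExtension = new Map();

function addOnChangedListener(extensionId, listener) {
  // Copy the existing Set (if any) and replace it rather than mutating
  // in place; an in-flight notifyListeners keeps iterating over
  // whichever Set it already fetched.
  const listeners = new Set(listenersByExtension.get(extensionId));
  listeners.add(listener);
  listenersByExtension.set(extensionId, listeners);
}

function notifyListeners(extensionId, changes) {
  const listeners = listenersByExtension.get(extensionId);
  if (listeners) {
    for (const listener of listeners) {
      listener(changes);
    }
  }
}

const seen = [];
addOnChangedListener("ext@example", changes => seen.push(changes));
addOnChangedListener("ext@example", changes => seen.push(changes.toUpperCase()));
notifyListeners("ext@example", "changed");
console.log(seen);  // both listeners fired
```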
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/18-19/
https://reviewboard.mozilla.org/r/55838/#review58238

> `loadKinto` should always be present, and there's no circumstance in which it returns a falsy value, so I'm just removing this `if` block, if that's OK with you.

Sure, sounds good.

> A `get` operation returns a `Promise` which resolves to an object with `{data: record, permissions: ....}`. The records we ourselves store are `{data: "user data", key: "some key", id: "some_20_key"}`. So the first `data` gets us to the record, and the second `data` gets us to the user data.
> 
> A `set` operation returns a `Promise` which resolves to an object with `{data: record, oldRecord: oldRecord, permissions: ....}`. For this reason, we use pattern matching to extract the `oldRecord` field. So instead, we're really doing `res.oldRecord.data`, but because of pattern matching we never really see `res`.

Thanks for the explanation.

> I think we're OK -- yes, there can be a race if both things happen at once, but either the new listener is notified, or it isn't, so there's no issue. I'm not 100% sure about this because I'm still struggling with the concurrency model in JS, and I'm assuming that there's no such thing as a concurrent modification exception or anything like that. Who can I ask to find out more about this?
> 
> There are two cases to consider: either there is no Set yet, or there is already one. If there is no Set yet, then we either do the `get()` in `notifyListeners` first, or we do the `set()` here first. If we do the `get()` in `notifyListeners` first, we don't notify listeners, which would be consistent with the listener happening first. If we do the `set()` here first, then when we do the `get()` in `notifyListeners`, we'll see the new listener, and it will be notified.
> 
> If there is already a set, then the race is between the iteration in `notifyListeners()` and the `add()` call here. I'm assuming that iterating over a set will produce its elements in an arbitrary order, and that `add()` will put the new element somewhere in the order, either before the "current element" in the loop, or after. If it's before, then we won't notify the new listener; if it's after, then we will.
> 
> I was also worried about two threads simultaneously adding listeners. Could they both do the `get`, both create new `Set`s, and then both `set` them (in which case one listener "wins")? It seems like only one thread can be in `addOnChangedListener` at once, again because of the concurrency model in JS-land, so I think we're OK.

Sounds reasonable to me.
https://reviewboard.mozilla.org/r/55838/#review59630

> Yes, I also don't really know why this is like that. I thought it might be an attempt to avoid causing spurious grep hits for anyone looking for the boolean `true`. I'll reflow it.

Please feel free to mark things as fixed yourself if you've addressed an issue and pushed a new commit with the changes.
https://reviewboard.mozilla.org/r/55838/#review58386

> I've taken another swing at this, modeling it on the browser_ext_history tests you link to. In addition, I split out parts of the test that didn't need to be in the same test, including the test for invalidating local cache in storage.local and the test for checking that you can't use storage.sync unless you set a config flag. This is more invasive to the original underlying code, but I guess that ship has sailed. Could you take another look at this and see if it's more like what you want?

Sorry, I should have responded to this issue last time. It looks like it's been addressed with the redesign of the test.
https://reviewboard.mozilla.org/r/55838/#review60434

::: toolkit/components/extensions/ExtensionStorageSync.jsm:65
(Diff revision 19)
> +    });
> +    coll = db.collection(collectionId, {
> +      idSchema: storageSyncIdSchema,
> +    });
> +    context.callOnClose(coll.db);
> +    return coll.db.open();

I'm not 100% sure this is correct. We close the connection after the first context is done with it, even if other contexts use it. Can someone who understands contexts better than me tell me what I'm supposed to do here?
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/19-20/
https://reviewboard.mozilla.org/r/55838/#review60434

> I'm not 100% sure this is correct. We close the connection after the first context is done with it, even if other contexts use it. Can someone who understands contexts better than me tell me what I'm supposed to do here?

OK I think I fixed this and added a test that proves it.
Review commit: https://reviewboard.mozilla.org/r/66112/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/66112/
Attachment #8773412 - Flags: review?(markh)
Attachment #8773413 - Flags: review?(markh)
Attachment #8773414 - Flags: review?(markh)
Attachment #8773415 - Flags: review?(markh)
Attachment #8773416 - Flags: review?(markh)
Attachment #8773417 - Flags: review?(markh)
Attachment #8773418 - Flags: review?(markh)
Attachment #8773419 - Flags: review?(bob.silverberg)
Attachment #8773420 - Flags: review?(bob.silverberg)
Also define withSyncContext, which is only useful as long as we have a
preference that controls whether this API is even available.

Review commit: https://reviewboard.mozilla.org/r/66118/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/66118/
I originally wanted to call the main entry point for Sync, but doing
that fails because it wants to sync ClientsEngine first.

Review commit: https://reviewboard.mozilla.org/r/66120/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/66120/
Attached file Add crypto, including tests, (obsolete) —
This was originally Michiel's code that has been enhanced and mangled.

Review commit: https://reviewboard.mozilla.org/r/66126/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/66126/
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/55838/diff/20-21/
OK, I've just pushed the code that I've been working on for the last couple of weeks. It's not by any means complete, but it shows something that seems like progress, and I would like to get some early feedback.

The first few patches introduce code in the Sync codebase to call a new main entry point, ExtensionStorageSync.syncAll. I r?'d markh on this because he was the one who brought it up, but I don't really know who is a better person to review it, so feel free to reassign.

Subsequent commits take Michiel's code from https://github.com/michielbdejong/gecko-dev/commit/edab4c620ceab400c790915824549cfa76bed866, clean it up a bit, update it according to what I've been doing on top of his earlier work, and finally the last patch adds some tests. These tests stand up a fake httpd per markh's comments on 1253741, but end up doing a bunch of work to try to imitate what a real Kinto server would do, including trying to maintain an internal state about what records are present. To me this might be a code smell, since our code now tests against a concept of a Kinto server that isn't necessarily in sync with what a real Kinto server would do. Another approach might be to just use mocks on the Kinto library itself, but that might end up with the same problem, just at a different level of abstraction.

There are still a bunch of things wrong with these patches -- in particular I think the crypto is straight-up not working (because otherwise the tests would be broken), and I think a real Kinto server would refuse to have curly braces in a collection name. I'm looking for feedback about the approach, especially about what I did with Sync, and also about the test code I wrote, which seems quite verbose considering the relatively small amount of code it covers, as well as typical pedestrian things like what goes in which file et cetera.
Oh, a few more notes:

- I didn't bring over Michiel's prepare.sh script which is obviously not going to be used in a production solution. I'm not sure what problem it's meant to solve yet.

- Per our discussion last week on Vidyo, there's no reason for any of this code to land unless it can also sync, so I just dumped everything in this bug. I'm probably going to close 1253741 as a duplicate of this one.
Comment on attachment 8773419 [details]
Add crypto, including tests,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66126/diff/1-2/
Attachment #8773419 - Attachment description: Rough attempt to implement syncing functionality, → Add crypto, including tests,
Attachment #8773420 - Attachment description: tests that vaguely exercise pushing and pulling, → Add code that syncs and tests,
Attachment #8773419 - Flags: review?(bob.silverberg) → review?(markh)
Comment on attachment 8773420 [details]
Add code that syncs and tests,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66128/diff/1-2/
https://reviewboard.mozilla.org/r/66112/#review63110

::: services/sync/modules/engines/storage-sync.js:69
(Diff revision 1)
> +      return;
> +    }
> +
> +    // Single adds, removes and changes are not so important on their
> +    // own, so let's just increment score a bit.
> +    this.score += SCORE_INCREMENT_MEDIUM;

It is worth keeping in mind that the Sync tracker story sucks :(

Unless an addon makes many storage changes, this change seems unlikely to cause a Sync to start - in practice I expect it will end up waiting for the next scheduled sync - which might be longer than you think if the user ends up shutting their laptop lid.

However, if we go for a larger increment, we may force a Sync more often, but this.ignoreAll will be true during a Sync, meaning the tracker will miss changes made during a Sync. This might mean that an addon that makes a number of storage changes at the same time would risk starting a sync after a few have been set, and not syncing the prefs subsequently set after the sync started.

I don't understand the storage implementation yet, but it's possible we can do something better if necessary (eg, bookmarks are moving away from an observer based tracker and querying the places DB directly for changes, for exactly the above reasons - a scheme like that might work better here, but it will require coordination)
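To make the scheduling concern concrete, here is a toy model of score-driven syncing. The threshold and the reset-on-sync behavior are illustrative only, not the real Sync scheduler or its constants:

```javascript
// Toy model: each storage change bumps the score; a sync only starts
// on its own once the score crosses a threshold.
const SCORE_INCREMENT_MEDIUM = 10;
const SYNC_THRESHOLD = 300;  // illustrative value, not the real constant

let score = 0;
let syncsStarted = 0;

function onStorageChange() {
  score += SCORE_INCREMENT_MEDIUM;
  if (score >= SYNC_THRESHOLD) {
    syncsStarted++;
    score = 0;  // score resets once a sync begins
  }
}

// A handful of changes never reaches the threshold, so the addon ends
// up waiting for the next scheduled sync instead:
for (let i = 0; i < 5; i++) onStorageChange();
console.log(syncsStarted);  // 0

// Only a long burst of changes forces a sync on its own:
for (let i = 0; i < 25; i++) onStorageChange();
console.log(syncsStarted);  // 1
```

This is why a medium increment rarely triggers an immediate sync, while a larger increment would trade that for the ignoreAll problem described above.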

::: services/sync/moz.build:59
(Diff revision 1)
>      'modules/engines/clients.js',
>      'modules/engines/forms.js',
>      'modules/engines/history.js',
>      'modules/engines/passwords.js',
>      'modules/engines/prefs.js',
> +    'modules/engines/storage-sync.js',

I think we should use a better name than "storage-sync" - it doesn't really make much sense in the context of sync itself. extension-storage or similar sounds clearer and better matches the toolkit module name.

::: services/sync/tests/unit/test_storage_sync_tracker.js:44
(Diff revision 1)
> +  const STORAGE_SYNC_PREF = "extension.storage.sync.enabled";
> +  const ssm = Services.scriptSecurityManager;
> +  const PRINCIPAL1 = ssm.createCodebasePrincipalFromOrigin("http://www.example.org");
> +  const context = new Context(PRINCIPAL1);
> +
> +  let prefs = Components.classes["@mozilla.org/preferences-service;1"]

Use Services.prefs here.
https://reviewboard.mozilla.org/r/66114/#review64286

::: browser/components/preferences/in-content/sync.xul:26
(Diff revision 1)
>                name="services.sync.engine.prefs"
>                type="bool"/>
>    <preference id="engine.passwords"
>                name="services.sync.engine.passwords"
>                type="bool"/>
> +  <preference id="engine.storage-sync"

I assume we will also need a checkbox for the engine? I don't think the addition of this pref here has any real effect? You will probably want to add the pref to services-sync.js.

Note also that the FxA server itself offers the choice of engines for the signup flow, so we'll also need to coordinate with them.

I guess we will want UX input here too - I expect most users will see this as part of addons, and conveying that there is a difference between addons and addon-storage might be tricky - would a user really understand (and thus choose) to disable addon sync but enable addon-storage sync (and vice-versa)? Maybe it would be OK to have this engine enabled only when the addons engine is? This might avoid needing the pref completely.
https://reviewboard.mozilla.org/r/66126/#review64308

::: toolkit/components/extensions/ExtensionStorageSync.jsm:28
(Diff revision 2)
>                                    "resource://gre/modules/ExtensionStorage.jsm");
>  XPCOMUtils.defineLazyModuleGetter(this, "AppsUtils",
>                                    "resource://gre/modules/AppsUtils.jsm");
>  XPCOMUtils.defineLazyModuleGetter(this, "Observers",
>                                    "resource://services-common/observers.js");
> +XPCOMUtils.defineLazyModuleGetter(this, "CommonUtils",

ISTM that this entire module should be in the sync directory? I'd prefer to at least keep all the encryption in the same place.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:68
(Diff revision 2)
> +  if (!user.kB) {
> +    throw new Error("user doesn't have kB");
> +  }
> +  let kB = CommonUtils.hexToBytes(user.kB);
> +
> +  // FIXME: does it make sense to make this depend on the extensionId in this

these are all good questions and part of the reason I'd prefer to see everything in sync.

It looks like this isn't ready for review yet, and the questions you ask should probably be discussed on sync-dev.
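For context on the snippet above, the `hexToBytes` step just unpacks the hex-encoded kB into raw key bytes. Here is a stand-in sketch (not the real `CommonUtils.hexToBytes`, which I believe returns a byte string rather than an array):

```javascript
// Stand-in for CommonUtils.hexToBytes: unpack a hex string into an
// array of byte values.
function hexToBytes(hex) {
  const bytes = [];
  for (let i = 0; i < hex.length; i += 2) {
    bytes.push(parseInt(hex.slice(i, i + 2), 16));
  }
  return bytes;
}

// kB arrives from FxA as a 64-character hex string encoding 32 bytes
// of key material ("deadbeef" repeated is a dummy value):
const kB = hexToBytes("deadbeef".repeat(8));
console.log(kB.length);       // 32
console.log(kB.slice(0, 4));  // [222, 173, 190, 239]
```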
I'd really like to see some of these sync patches squashed - the patches that are just one or two lines long and change "todo" lines from earlier patches make reviewing difficult and don't seem to add much value. One of the patches is totally empty.

For the patches related to Sync, I think it makes sense to have a patch for the engine, a patch for any UI necessary and a patch for the encryption.
Depends on: 1290440
Comment on attachment 8773413 [details]
Introduce extension-storage engine with a sanity test,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66114/diff/1-2/
Attachment #8773413 - Attachment description: Enable engine using pref even though it's not ready yet, → Introduce extension-storage engine with a sanity test,
Attachment #8773412 - Attachment description: Introduce storage-sync engine with failing test, → Add sanity tests for StorageSyncEngine,
Attachment #8773414 - Attachment description: Fix test, → Introduce extensionIdToCollectionId,
Attachment #8773416 - Attachment description: Introduce sanity test for StorageSyncEngine, → Hash extension ID to obfuscate installed add-ons,
Attachment #8773414 - Flags: review?(markh) → review?(bob.silverberg)
Attachment #8773416 - Flags: review?(markh) → review?(bob.silverberg)
Comment on attachment 8773415 [details]
Extract test context and withContext to extensions,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66118/diff/1-2/
Comment on attachment 8773412 [details]
Bug 1253740 - Introduce extension-storage engine with a sanity test,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66112/diff/1-2/
Comment on attachment 8773419 [details]
Add crypto, including tests,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66126/diff/2-3/
Comment on attachment 8773420 [details]
Add code that syncs and tests,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66128/diff/2-3/
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66116/diff/1-2/
Comment on attachment 8773416 [details]
Bug 1253740 - Hash extension ID to obfuscate installed add-ons,

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/66120/diff/1-2/
Attachment #8773417 - Attachment is obsolete: true
Attachment #8773417 - Flags: review?(markh)
Attachment #8773418 - Attachment is obsolete: true
Attachment #8773418 - Flags: review?(markh)
Thank you very much for the feedback :markh. I wasn't trying to suggest that the patches as-is are ready for final review, but was more hoping for early feedback on the approach and architecture. I think in Mozilla lingo, I was trying to ask for an f? rather than an r?, but I'm still new here so sorry for getting that wrong.

I don't feel like the entirety of ExtensionStorageSync should be in the sync directory, since most of it exists to satisfy the requirements of the API. However, I do agree that the crypto code is more like the sync code than like the extension code, so I've pushed a new version of the series with the encryption transformer moved into the sync directory (and the storage-sync.js file and variables named to extension-storage.js as you suggested).

I've also squashed some of the patches into one another. I couldn't find the empty patch you were talking about, but I tried to get rid of any that were only a few lines. Mozilla seems to prefer larger commits than I'm used to, and I'll try to calibrate to that.

There are still a handful of issues and possible improvements I'd like to make to these patches, but I have New Hire Orientation next week so I may not be able to make them yet. I'd be grateful once again for any suggestions on these patches.
https://reviewboard.mozilla.org/r/66114/#review64286

> I assume we will also need a checkbox the for engine? I don't think the addition of this pref here has any real effect? You will probably want to add the pref to services-sync.js.
> 
> Note also that the FxA server itself offers the choice of engines for the signup flow, so we'll also need to coordinate with them.
> 
> I guess we will want UX input here too - I expect most users will see this as part of addons, and conveying that there is a difference between addons and addon-storage might be tricky - would a user really understand (and thus choose) to disable addon sync but enable addon-storage sync (and vice-versa)? Maybe it would be OK to have this engine enabled only when the addons engine is? This might avoid needing the pref completely.

I didn't really know what I was doing when I changed sync.xul, but was just trying to update all the places that it seemed relevant. I've reverted that change while waiting for https://bugzilla.mozilla.org/show_bug.cgi?id=1290440.
https://reviewboard.mozilla.org/r/66112/#review63110

> It is worth keeping in mind that the Sync tracker story sucks :(
> 
> Unless an addon makes many storage changes, this change seems unlikely to cause a Sync to start - in practice I expect it will end up waiting for the next scheduled sync - which might be longer than you think if the user ends up shutting their laptop lid.
> 
> However, if we go for a larger increment, we may force a Sync more often, but this.ignoreAll will be true during a Sync, meaning the tracker will miss changes made during a Sync. This might mean that an addon that makes a number of storage changes at the same time would risk starting a sync after a few have been set, and not syncing the values subsequently set after the sync started.
> 
> I don't understand the storage implementation yet, but it's possible we can do something better if necessary (eg, bookmarks are moving away from an observer based tracker and querying the places DB directly for changes, for exactly the above reasons - a scheme like that might work better here, but it will require coordination)

So does this mean I should change the implementation of the tracker, or that what I have here is good enough for now? I checked the bookmarks tracker and AFAICT it's still listening to events and updating a score. Maybe "moving away" means there's a patch in-progress that hasn't landed yet? If so, where is it? I can try to follow its approach if you prefer.
https://reviewboard.mozilla.org/r/66126/#review64308

> these are all good questions and part of the reason I'd prefer to see everything in sync.
> 
> It looks like this isn't ready for review yet, and the questions you ask should probably be discussed on sync-dev.

These were all Michiel's questions and I'm pretty sure none of them are worth worrying about. Specifically:

- Deriving a key using HKDF is just as secure as storing a second encryption key which is decrypted using a first encryption key.
- HKDF is one-way so it doesn't leak information about the key.
- No WebExtension should be able to access kB, and without kB it's impossible to derive the encryption key for another extension, so you're back to brute force if you want to get access to another extension's data. If a WebExtension can access kB, we would probably have a serious security issue. (I'm checking with #webextensions to verify this.)

I deleted the questions from this version of the patch. However, I'm happy to raise them on sync-dev if you think they're important.
(In reply to Ethan Glasser-Camp (:glasserc) from comment #149)
> So does this mean I should change the implementation of the tracker, or that
> what I have here is good enough for now?

I'm not sure TBH :)

> I checked the bookmarks tracker and
> AFAICT it's still listening to events and updating a score. Maybe "moving
> away" means there's a patch in-progress that hasn't landed yet? If so, where
> is it? I can try to follow its approach if you prefer.

Yep, it is in progress, although there probably isn't much value in deciphering that patch. In summary, there are 2 distinct issues:

* A "score" is a way for Sync to know that enough has changed so it should "sync now". Once the score crosses a threshold (typically, 300) Sync will start immediately instead of waiting for the next scheduled sync (which is typically every 10 minutes). Incrementing the score by SCORE_INCREMENT_MEDIUM (=10) means that the observer must fire 30 times before it triggers a sync - it seems unlikely we'd see that many notifications in the 10 minutes between syncs. IOW, I expect the practical outcome of that score code is that it is unlikely to ever force sync to happen quicker than it otherwise would. That may or may not be OK for your use-case.

* The "tracker" is how we know *what* has changed. The bookmark tracker currently in the tree uses notifications and tracks the ID of changed items - and only those IDs it has tracked is synced. However, there are important cases when the tracker will miss notifications - before Sync has initialized and while a Sync is currently happening (it ignores this latter case as Sync doesn't really know if itself is the reason for the change, or something else). The bookmark tracker is changing to store the "changed" state directly in places.sqlite and having places itself track the state. When a bookmark Sync happens, Sync will simply perform a DB query to get the changed items. This means that changes made even before Sync starts will be correctly tracked.

These 2 concepts are distinct, but do interact. Consider a user creating a new bookmark - the score is bumped so that a new Sync starts immediately. While we are syncing, the user edits the bookmark (eg, moves the bookmark to a new folder). Because we are syncing the current tracker doesn't notice this move. The end result is that the changes made to that new bookmark are *not* synced - the server version of the bookmark only has the initial creation, not the subsequent edit. This is, effectively, data-loss.

The key point I was trying to make re your tracker is that assuming we don't want some of this data to be missing, then we really need a tracker that's better than the existing ones, for the reasons above. For example, let's say your implementation was such that every single change caused a "revision number" to increment. In that case, we'd probably just want to store the revision number that was current when we Synced. Then, at any time in the future we could simply ask for all changes from that earlier revision to the now current one - we wouldn't need observers etc to work out what has changed since the last Sync.

I'll try and find some time to look at more of the actual implementation to see what is viable there, but hopefully that makes sense in the meantime.
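For concreteness, the "revision number" idea described above could be sketched like this (the class and method names are invented for illustration, not anything in the tree):

```javascript
"use strict";

// Every change bumps a global revision; a sync records the revision it saw
// and later asks for everything changed since then. No observers are
// involved, so changes made before Sync initialized, or while a Sync was
// in flight, are never missed.
class RevisionedStore {
  constructor() {
    this.revision = 0;
    this.records = new Map(); // key -> {value, revision}
  }
  set(key, value) {
    this.records.set(key, {value, revision: ++this.revision});
  }
  changedSince(revision) {
    return [...this.records.entries()]
      .filter(([, rec]) => rec.revision > revision)
      .map(([key, rec]) => ({key, value: rec.value}));
  }
}

const store = new RevisionedStore();
store.set("a", 1);
const lastSynced = store.revision; // a sync uploads "a" and notes revision 1
store.set("a", 2);                 // edit made while that sync was in flight
console.log(store.changedSince(lastSynced)); // [ { key: 'a', value: 2 } ]
```

In the bookmark scenario above, the edit made during the sync would still be returned by `changedSince(lastSynced)` on the next sync, avoiding the data-loss case.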
https://reviewboard.mozilla.org/r/55838/#review65316

I should have looked at this patch first :) The comments below are just a drive-by and my opinion. I hope you find them useful but I'm not the reviewer here ;)

Re Sync itself: The question that needs to be answered is "how much change is enough to force a sync immediately, given we sync every 10 minutes anyway?". This probably depends on the expected write pattern, but I could easily imagine some extensions writing regularly. The trade-off, of course, is a user doing something that should sync, closing their laptop and moving to a second device, only to find their data didn't Sync, and isn't going to any time soon.

I think that until that is answered we should stick with exactly what you have got - with very little "set" traffic, we will just sync every 10 minutes - but huge "set" traffic (say, an extension deciding to write something every page load) should remain sane. If we want more frequent Syncs we can discuss changes later (and need not necessarily sync every other existing engine).

I'm not clear on the encryption patches below as I don't know kinto. Will the records be encrypted in the local store backing this API, or just the server copies?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:52
(Diff revision 21)
> +
> +/**
> + * Generate a promise that produces the Collection for an extension.
> + */
> +function openCollection(extensionId, context) {
> +  // FIXME: This leaks metadata about what extensions a user has

For my interest, what leak are you worried about here? I'd expect this code is only accessible to other privileged code.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:58
(Diff revision 21)
> +  // installed.  We should calculate collection ID using a hash of
> +  // user ID, extension ID, and some secret.
> +  let collectionId = extensionId;
> +  const Kinto = loadKinto();
> +  let coll;
> +  return Promise.resolve().then(() => {

Why this extra .resolve?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:137
(Diff revision 21)
> +  getCollection(extensionId, context) {
> +    if (prefPermitsStorageSync !== true) {
> +      return Promise.reject({message: `Please set ${STORAGE_SYNC_ENABLED} to true in about:config`});
> +    }
> +    if (!collectionPromises.has(extensionId)) {
> +      collectionPromises.set(extensionId, openCollection(extensionId, context));

there seems a slight risk of an "unhandled rejection" happening here, as you don't have control of where the promises end up.

eg, something like:

let newp = openCollection(...);
newp.catch(Cu.reportError);
collectionPromises.set(extensionId, newp);

would mean you are guaranteed an exception there will be reported *somewhere*.

You could also consider using the Log.jsm module, as Sync already has fairly good support for that, and we could ensure messages generated by this module end up in sync logs.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:199
(Diff revision 21)
> +
> +  clear(extensionId, context) {
> +    // We can't call Collection#clear here, because that just clears
> +    // the local database. We have to explicitly delete everything so
> +    // that the deletions can be synced as well.
> +    return this.getCollection(extensionId, context).then(coll => {

this is a nit, but IMO "nesting" promises makes them more difficult to read and they should be "chained". Something like:

    return this.getCollection(extensionId, context).then(coll => {
      return coll.list();
    }).then(res => {
      const records = res.data;
      const keys = records.map(record => record.key);
      return this.remove(extensionId, keys);
    });

The same applies in a few other places in this patch. You could also consider using Task.jsm ;)
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review65316

Just the server copies. The "remote transformer" is responsible for turning local versions into server versions and vice versa.

> For my interest, what leak are you worried about here? I'd expect this code is only accessible to other privileged code.

Some of our security team were worried about the possibility of a database dump revealing personal information about a given user's installed extensions. This particular line of code isn't really the problem -- it's just a note to flag that the translation between extension IDs and collection IDs should be thought through better. A later patch doesn't change anything here, but does cause Kinto to use a remote collection ID that is different from the extension ID.

> Why this extra .resolve?

There was no real reason. A later patch turns this `Promise.resolve()` into a call to get the current user.

> there seems a slight risk of an "unhandled rejection" happening here, as you don't have control of where the promises end up.
> 
> eg, something like:
> 
> let newp = openCollection(...);
> newp.catch(Cu.reportError);
> collectionPromises.set(extensionId, newp);
> 
> would mean you are guaranteed an exception there will be reported *somewhere*.
> 
> You could also consider using the Log.jsm module, as Sync already has fairly good support for that, and we could ensure messages generated by this module end up in sync logs.

Yeah, logging to the sync logs for this would certainly be better. I'm not 100% sure how the interface between the Extension code and the Sync code should work yet, but I'll try to add that in the next version of the patch.

> this is a nit, but IMO "nesting" promises makes them more difficult to read and they should be "chained". Something like:
> 
>     return this.getCollection(extensionId, context).then(coll => {
>       return coll.list();
>     }).then(res => {
>       const records = res.data;
>       const keys = records.map(record => record.key);
>       return this.remove(extensionId, keys);
>     });
> 
> The same applies in a few other places in this patch. You could also consider using Task.jsm ;)

Yes, that's true. This is mostly code that I'm porting over from another person's work and I haven't been vigilant in catching these kinds of style issues. I'll go over it again.
Thanks for the response.

(In reply to Ethan Glasser-Camp (:glasserc) from comment #153)
> Comment on attachment 8757379 [details]
> Bug 1253740 - Implement storage.sync,
> 
> https://reviewboard.mozilla.org/r/55838/#review65316
> 
> Just the server copies. The "remote transformer" is responsible for turning
> local versions into server versions and vice versa.

In that case, I think it probably makes sense to perform the crypto like the rest of Sync. Roughly, that means each collection gets its own crypto keys derived from kB rather than using kB directly. In general, this really just means we reuse more of the Sync encryption code directly. Please ping me if you need more details here.
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review68364

Nice work Ethan! This looks really good. I've added a bunch of comments from my review. It's interesting what you've done for the mock server, but I wonder, is there not something already in place that does this? It doesn't feel right to implement an entire mock server in a test file. If this is not something that's been needed before, I still think it would make sense to implement it somewhere that it can be shared with other code that might need it. I am surprised, however, that this hasn't been needed before (and therefore already written). Are there no unit tests already for Kinto code that needs a mock server? How do tests for Kinto run on our build infrastructure?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:19
(Diff revision 3)
>  const Ci = Components.interfaces;
>  const Cc = Components.classes;
>  const Cu = Components.utils;
>  const Cr = Components.results;
>  
> -
> +const KINTO_DEFAULT_SERVER_URL = "https://kinto-ota.dev.mozaws.net/v1"

I'm not crazy about having this in a const in this file, but I guess we need a default defined somewhere in case the preference is not set? Or should we just default to no syncing if the preference is not set?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:20
(Diff revision 3)
>  const Cc = Components.classes;
>  const Cu = Components.utils;
>  const Cr = Components.results;
>  
> -
> +const KINTO_DEFAULT_SERVER_URL = "https://kinto-ota.dev.mozaws.net/v1"
>  const STORAGE_SYNC_ENABLED = "extension.storage.sync.enabled";

I thought we discussed this earlier, but I don't see it changed in the other patch, so maybe not. I think it would be clearer if this were named `STORAGE_SYNC_ENABLED_PREF`.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:44
(Diff revision 3)
> +XPCOMUtils.defineLazyModuleGetter(this, "fxAccounts",
> +                                  "resource://gre/modules/FxAccounts.jsm");
>  XPCOMUtils.defineLazyPreferenceGetter(this, "prefPermitsStorageSync",
>                                        STORAGE_SYNC_ENABLED, false);
> +XPCOMUtils.defineLazyPreferenceGetter(this, "prefStorageSyncServerURL",
> +                                      "extension.storage.sync.serverURL",

Perhaps store the name of this pref in a `const` as you've done with `extension.storage.sync.enabled`?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:83
(Diff revision 3)
> +    const remoteTransformers = [];
> +    // FIXME: if the user isn't logged in, we should add a transformer
> +    // that refuses to sync
> +    if (keyBundle) {
> +      const encryptionTransformer = new EncryptionRemoteTransformer(keyBundle);
> +      remoteTransformers.push(encryptionTransformer);

It looks like there will only ever be one remoteTransformer. Why do we need this array?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:158
(Diff revision 3)
> +  syncAll() {
> +    let promises = [];
> +    // FIXME: this should also iterate over anything with a listener,
> +    // but we can't because we don't have a context to create new collections
> +    for(let extId of collectionPromises.keys()) {
> +      const collPromise = collectionPromises.get(extId);

Why not just iterate over `collectionPromises` in a way that will give you the promises, rather than iterating over the keys and calling `get` for each key?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:188
(Diff revision 3)
> +        strategy: 'client_wins'
> +      }).catch(err => {
> +        // FIXME: this should only ever happen on the dev server.. delete?
> +        if (err && err.message && err.message.includes("flushed")) {
> +          return collection.resetSyncStatus()
> +            .then(_ => collection.sync());

What's with the `_` here?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:191
(Diff revision 3)
> +        if (err && err.message && err.message.includes("flushed")) {
> +          return collection.resetSyncStatus()
> +            .then(_ => collection.sync());
> +        }
> +        throw err;
> +      }).then(syncResults => {

It may just be me, but it feels more idiomatic to put the happy path (the `then`) before the `catch`, rather than the other way around.

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:51
(Diff revision 3)
> +          "key": "remote-key",
> +          "data": 6
> +        });
> +
> +        // Do something to make ExtensionStorageSync aware of our extension
> +        yield ExtensionStorageSync.get(extensionId, "my-key", context);

Why is this needed?

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:72
(Diff revision 3)
> +
> +        equal(calls.length, 0);
> +
> +        // Updating the server causes us to pull down the new value
> +        server.etag = 1000;
> +        server.clearCollection(collectionId);

What does this do?

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:90
(Diff revision 3)
> +      });
> +    });
> +  });
> +});
> +
> +add_task(function* test_storage_sync_pushes_changes() {

If this is a test for pushing changes, should you maybe set a value and then set a different value later? Also, it looks like you're checking the information from `server.getPosts()`, but would it also be worthwhile to manually ask for the remote data from the Kinto server to verify that the key was successfully pushed and that the data is accurate?

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:113
(Diff revision 3)
> +
> +        const posts = server.getPosts();
> +        equal(posts.length, 1);
> +        const post = posts[0];
> +        equal(post.headers.Authorization, "Bearer some-access-token");
> +        equal(post.path, collectionRecordsPath(collectionId) + "/key-my_2D_key");

Where does `key-my_2D_key` come from?

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:131
(Diff revision 3)
> +      });
> +    });
> +  });
> +});
> +
> +function run_test() {

I think this is not needed.
Attachment #8773420 - Flags: review?(bob.silverberg) → review-
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review68378

::: toolkit/components/extensions/ExtensionStorageSync.jsm:118
(Diff revision 2)
>  
> +/**
> + * Hash an extension ID for a given user so that an attacker can't
> + * identify the extensions a user has installed.
> + */
> +function extensionIdToCollectionId(user, extensionId) {

I don't know enough about crypto to comment on this, but I'd like to understand the practical implications. We can discuss it in today's meeting.
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review68364

No, the existing tests for Kinto and especially Kinto.js are either unit tests (meaning we call individual methods and use spies and mocks to see what they do), or integration tests (which run against a live Kinto server that is brought up on localhost:8888).

I didn't want to put this particular blob of code anywhere more central until I knew more about other use cases it would face. (At my last job, there was a guy who was shy about abstracting anything that hadn't shown up at least three times.) In our current build infrastructure, there are some tests around Kinto -- for example https://dxr.mozilla.org/mozilla-central/rev/233ab21b64b5d5e9f2f16ea2d4cfb4c8b293c5c4/services/common/tests/unit/test_kinto.js#277-306, which implements a mock server in a single test file. This is kind of what I based my work on, but seemed sufficiently different from what I actually wanted that it didn't seem worth it to refactor or merge.

> I'm not crazy about having this in a const in this file, but I guess we need a default defined somewhere in case the preference is not set? Or should we just default to no syncing in the preference is not set?

I know almost nothing about defining preferences. We do want to sync even if the pref isn't set. I guess that means this is a default and probably defaults go somewhere else?

> It looks like there will only ever be one remoteTransformer. Why do we need this array?

The idea was because I needed to add a transformer, but only when the user was logged in. In actual fact, this code was wrong; it should always have a remote transformer, and that transformer should reject atempts to sync while not logged in.

> Why not just iterate over `collectionPromises` in a way that will give you the promises, rather than iterating over the keys and calling `get` for each key?

Oops, I didn't know about `entries()`.

> What's with the `_` here?

I inherited this from Michiel, who I guess may have gotten it from Python or Haskell? I guess in JS you would just use () to indicate that you don't care about the arguments.

> It may just be me, but it feels more idiomatic to put the happy path (the `then`) before the `catch`, rather than the other way around.

In this case, the order is significant -- the `catch` is supposed to represent a well-understood failure case (running against a test server that gets periodically "flushed") with a recovery path (try again). The result of that recovery is supposed to go to the `then`. That said, in that particular failure case, there may not be a reason to actually run the `then`, so maybe I can put it the other way. Up to you.

> Why is this needed?

We only sync collections we "know about", and we only know about the ones which we've opened because of a read/write operation. This seemed a reasonable requirement for a while since any extension that uses storage.sync is going to do *something* with it, either reading or writing. However, I've decided that it's also a good idea to "know about" any extension that adds a listener, so I took this out.

> What does this do?

The implementation of my mock server is deliberately as simple as possible, so it doesn't know that the updated record should "replace" the old one (they're kept in a `Set`, not a `Map`), so this is a brute-force way of deleting just that one record (or else the sync logic will see both).

> If this is a test for pushing changes, should you maybe set a value and then set a different value later? Also, it looks like you're checking the information from `server.getPosts()`, but would it also be worthwhile to manually ask for the remote data from the Kinto server to verify that they key was successfully pushed and that the data is accurate?

There is no Kinto server; calling `server.getPosts()` *is* the remote data. Yes, I guess it's a good idea to verify that updates are synced as well as creates. Edit: wow, what a great suggestion! Writing that test uncovered a pretty major bug in my encryption transformer.

> Where does `key-my_2D_key` come from?

This is the `keyToId` of the record key (`my-key`), since Kinto IDs are used in the paths rather than storage.sync keys.
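A hypothetical reconstruction of that mapping, assuming it escapes characters outside [a-zA-Z0-9] as `_XX_` (uppercase hex) and prefixes `key-`; the real helper may differ:

```javascript
"use strict";

// Hypothetical sketch of keyToId: storage.sync keys can contain characters
// that aren't valid in a Kinto record ID, so escape them and add a prefix.
function keyToId(key) {
  const escaped = Array.from(key, ch =>
    /[a-zA-Z0-9]/.test(ch)
      ? ch
      : "_" + ch.charCodeAt(0).toString(16).toUpperCase().padStart(2, "0") + "_"
  ).join("");
  return "key-" + escaped;
}

console.log(keyToId("my-key")); // key-my_2D_key
```

"-" is 0x2D, which is how `my-key` becomes `key-my_2D_key` in the record path the test checks.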

> I think this is not needed.

That's interesting. According to [Writing xpcshell-based unit tests](https://developer.mozilla.org/en-US/docs/Mozilla/QA/Writing_xpcshell-based_unit_tests), "The test is executed by the test harness by calling the run_test function defined in the file. Anything you want to test must be spawned from this function". I took it out, and I think maybe the `add_task` calls work OK? However, I think it won't run the `test_key_to_id` function (or, later, the `test_extension_id_to_collection_id` function).
(In reply to Ethan Glasser-Camp (:glasserc) from comment #166)
> > I think this is not needed.
> 
> That's interesting. According to [Writing xpcshell-based unit
> tests](https://developer.mozilla.org/en-US/docs/Mozilla/QA/Writing_xpcshell-
> based_unit_tests), "The test is executed by the test harness by calling the
> run_test function defined in the file. Anything you want to test must be
> spawned from this function". I took it out, and I think maybe the `add_task`
> calls work OK? However, I think it won't run the `test_key_to_id` function
> (or, later, the `test_extension_id_to_collection_id` function).

Sadly, those docs are now out of date. These days you typically either have a run_test() function, or use |add_test(test_function)| or |add_task(test_generator_function)|. If run_test() exists, it and all |test_function|s must call run_next_test() or the test will appear to hang - these functions can be asynchronous, so this is how they tell the test harness the test is complete. test_generator_functions do not need to call run_next_test() - it is assumed the test is over when the generator is exhausted.
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review69260

I'm fine with this change if that's how you prefer to do it, but another alternative would be to just use |const log = Log.repository.getLogger("Sync.Engine.Extension-Storage");| at the top level scope of your module and avoid passing it around as a param.
Attachment #8781178 - Flags: review?(markh) → review+
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review69260

Oh yes, I much prefer doing that.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review69862

As far as I can tell, all of the issues that I have raised, for this patch, have been addressed, so I'm going to mark it as r+ for now. If I'm wrong about that (which I may be, it's been a pretty long and confusing process - no fault of yours Ethan) or if you make more updates, then just flag me for an r? again.
Attachment #8757379 - Flags: review?(bob.silverberg) → review+
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review68364

Should we think about making these integration tests which spin up a live server, as you describe above? Or is that just overkill in your opinion?

> I know almost nothing about defining preferences. We do want to sync even if the pref isn't set. I guess that means this is a default and probably defaults go somewhere else?

Yeah, that sounds like a default preference. I'm not sure where they go, but you're right, there is likely a place for them. Can you see if you can track that down?

> I inherited this from Michiel, who I guess may have gotten it from Python or Haskell? I guess in JS you would just use () to indicate that you don't care about the arguments.

Exactly.

> In this case, the order is significant -- the `catch` is supposed to represent a well-understood failure case (running against a test server that gets periodically "flushed") with a recovery path (try again). The result of that recovery is supposed to go to the `then`. That said, in that particular failure case, there may not be a reason to actually run the `then`, so maybe I can put it the other way. Up to you.

What you say makes sense. Feel free to leave it as is.
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review68378

> I don't know enough about crypto to comment on this, but I'd like to understand the practical implications. We can discuss it in today's meeting.

You explained this in the meeting and the result makes sense.
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review69874

This gets my r+ in terms of what you're trying to do. I'm not comfortable reviewing the crypto code itself, but I'm not the final reviewer anyway.
Attachment #8773414 - Flags: review?(bob.silverberg) → review+
Comment on attachment 8773416 [details]
Bug 1253740 - Hash extension ID to obfuscate installed add-ons,

https://reviewboard.mozilla.org/r/66120/#review69876

This looks fine to me. Nice work!
Attachment #8773416 - Flags: review?(bob.silverberg) → review+
(In reply to Bob Silverberg [:bsilverberg] from comment #171)

> Should we think about making these intergration tests which spin up a live
> server, as you describe above? Or is that just overkill iyo?

We do that/have done that for many of the clients in /services/; see /services/common/modules-testing, which includes a mock Sync storage server using httpd.js.

There used to be Bagheera and other services in there, too, and it's a nice property to be able to run the main server's tests against your test server, and vice versa -- /services/common/tests/run_storage_server.js takes care of that.

Similar code exists elsewhere in the tree; go grepping for Cu.import("resource://testing-common/httpd.js").


> Yeah, that sounds like a default preference. I'm not sure where they go, but
> you're right, there is likely a place for them. Can you see if you can track
> that down?

Depends on the app:

/mobile/android/app/mobile.js
/browser/app/profile/firefox.js

(and in the past, b2g/app/b2g.js)
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

I have been doing preliminary reviews of this patch and Ethan has been implementing my suggestions. I think he's addressed all of my comments thus far, so this is now ready for a more final review from Kris.
Attachment #8757379 - Flags: review+ → review?(kmaglione+bmo)
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

Passing this to Kris for final review.
Attachment #8773414 - Flags: review+ → review?(kmaglione+bmo)
Comment on attachment 8773416 [details]
Bug 1253740 - Hash extension ID to obfuscate installed add-ons,

Passing this to Kris for final review.
Attachment #8773416 - Flags: review?(kmaglione+bmo)
Comment on attachment 8773420 [details]
Add code that syncs and tests,

Ethan, I'm trying to figure out where everything stands and what's ready for me to pass up to Kris for review. This patch looks like it's still waiting on a few things from my review to be addressed, so I'm holding off on handing it off. Please let me know when it's ready for another review.
Please hold off on reviewing it for a little while longer. I'm about to push a new version of this series that significantly alters how the crypto is structured, per comments from :markh.
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review68364

Per :markh's comments on https://bugzilla.mozilla.org/show_bug.cgi?id=1253741, it seemed like the best approach was to write tests against a mock server built on httpd.js, like :rnewman describes. Bringing up a live server introduces a bunch of build dependencies, which I didn't want to get involved with.
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review71894

::: toolkit/components/extensions/ExtensionStorageSync.jsm:307
(Diff revision 5)
> +  /**
> +   * Recursive promise that terminates when our local collectionKeys,
> +   * as well as that on the server, have keys for all the extensions
> +   * in extIds.
> +   */
> +  ensureKeysFor(extIds) {

:leplatrem, could you also look at this method for me and see if it seems bonkers to you?
Comment on attachment 8773419 [details]
Add crypto, including tests,

https://reviewboard.mozilla.org/r/66126/#review64308

> ISTM that this entire module should be in the sync directory? I'd prefer to at least keep all the encryption in the same place.

I have moved the per-se crypto into extension-storage.js.
Comment on attachment 8773419 [details]
Add crypto, including tests,

https://reviewboard.mozilla.org/r/66126/#review64308

> These were all Michiel's questions and I'm pretty sure none of them are worth worrying about. Specifically:
> 
> - Deriving a key using HKDF is just as secure as storing a second encryption key which is decrypted using a first encryption key.
> - HKDF is one-way so it doesn't leak information about the key.
> - No WebExtension should be able to access kB, and without kB it's impossible to derive the encryption key for another extension, so you're back to brute force if you want to get access to another extension's data. If a WebExtension can access kB, we would probably have a serious security issue. (I'm checking with #webextensions to verify this.)
> 
> I deleted the questions from this version of the patch. However, I'm happy to ask them on sync-dev if you think they're important.

I ended up implementing a two-level key scheme, per :markh's comments.
OK, I've pushed a new version of the patch series. This alters the sync logic to use a two-level encryption like the existing Sync design does, per :markh's comments. This provides the advantage that when the user changes their password, we don't have to reencrypt every single record in the database -- just the keys.

This version of the series still has a bunch of issues. In particular, one of the tests is currently broken until I cut a release of kinto.js with https://github.com/Kinto/kinto.js/pull/525/ and get it into gecko. There's also another issue having to do with deepEqual not working because the two objects (which are technically equal) are from different "compartments", so don't show up as being instanceof Object. I haven't the slightest idea how to address this but I'm hoping that pushing a new version of the series will make it easier to get help. I'd also like :leplatrem to look at how I'm managing the keyring because it seems like I might be doing too much work. I've tried to make those issues more obvious in the mozreview requests so that we don't subject incomplete code to thorough review.
Depends on: 1298106
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review71900

::: toolkit/components/extensions/ExtensionStorageSync.jsm:162
(Diff revision 5)
> +
> +/**
> + * Retrieve the actual keyring from the crypto collection.
> + * Returns Promise<CollectionKeyManager>.
> + */
> +function getKeyRing() {

We need to listen for a "password changed" signal and re-encrypt the keyring when this happens, as well as writing tests to make sure that if we get an undecipherable keyring when we sync, that we do something sensible. (Sync treats it as a "login failed" scenario.)
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review72176

Looks pretty good, for the most part. Just a few (mostly minor) issues.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:20
(Diff revision 22)
> +XPCOMUtils.defineLazyModuleGetter(this, "loadKinto",
> +                                  "resource://services-common/kinto-offline-client.js");
> +XPCOMUtils.defineLazyModuleGetter(this, "ExtensionStorage",
> +                                  "resource://gre/modules/ExtensionStorage.jsm");
> +XPCOMUtils.defineLazyModuleGetter(this, "AppsUtils",
> +                                  "resource://gre/modules/AppsUtils.jsm");

Please sort imports.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:26
(Diff revision 22)
> +XPCOMUtils.defineLazyPreferenceGetter(this, "prefPermitsStorageSync",
> +                                      STORAGE_SYNC_ENABLED_PREF, false);

We should have a default value for this in `firefox.js` and `mobile.js` (or possibly just in `all.js`).

::: toolkit/components/extensions/ExtensionStorageSync.jsm:68
(Diff revision 22)
> +    });
> +    coll = db.collection(collectionId, {
> +      idSchema: storageSyncIdSchema,
> +    });
> +    context.callOnClose({
> +      close: () => closeCollection(extensionId, coll)

ESLint should complain that this is used before it's defined. Same for `idToKey` and a few other functions.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:110
(Diff revision 22)
> +    return String.fromCodePoint(parseInt(group1, 16));
> +  }
> +  // An escaped ID should match this regex.
> +  // An escaped ID should consist of only letters and numbers, plus
> +  // code points escaped as _[0-9a-f]+_.
> +  const ESCAPED_ID_FORMAT = /[a-zA-Z0-9]*(_[0-9A-F]+_[a-zA-Z0-9]*)*/;

This won't do what you want. For a start, it's not anchored, and everything is qualified with `*`, so it will match any string. I think it should look something more like:

    /^(?:[a-zA-Z0-9]|_[0-9A-F]+_)*$/
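To see the difference concretely, here is a standalone sketch of the two patterns quoted above: the unanchored, all-`*` pattern matches (at least an empty prefix of) any string, while the anchored version only accepts well-formed escaped IDs.

```javascript
// The original pattern: unanchored and fully optional, so .test() always
// succeeds (it can match the empty string at position 0).
const LOOSE = /[a-zA-Z0-9]*(_[0-9A-F]+_[a-zA-Z0-9]*)*/;

// The suggested pattern: anchored, so the whole string must consist of
// alphanumerics and _HEX_ escape sequences.
const STRICT = /^(?:[a-zA-Z0-9]|_[0-9A-F]+_)*$/;

console.log(LOOSE.test("not a valid id!"));  // true -- matches empty prefix
console.log(STRICT.test("not a valid id!")); // false
console.log(STRICT.test("key_2D_name"));     // true
```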

::: toolkit/components/extensions/ExtensionStorageSync.jsm:117
(Diff revision 22)
> +  if(!id.startsWith("key-")) {
> +    return null;
> +  }
> +  var unprefixed = id.slice(4);
> +  // Verify that the ID is the correct format.
> +  if(!unprefixed.match(ESCAPED_ID_FORMAT)) {

Please use `pattern.test()` rather than `string.match()` when you only need to test for a match, and not use the match result.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:148
(Diff revision 22)
> +        for (let i in keys) {
> +          const key = keys[i];

`for (let [i, key] of keys.entries())`

::: toolkit/components/extensions/ExtensionStorageSync.jsm:153
(Diff revision 22)
> +            id: id,
> +            key: key,

    id,
    key,
    data: item,

::: toolkit/components/extensions/ExtensionStorageSync.jsm:160
(Diff revision 22)
> +            data: item
> +          });
> +          changes[key] = {
> +            // Extract the "data" field from the old record, which
> +            // represents the value part of the key-value store
> +            oldValue: oldRecord && oldRecord.data,

We should omit this property entirely if there's no old record.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:175
(Diff revision 22)
> +        for (let i in keys) {
> +          const key = keys[i];

`for (let [i, key] of keys.entries())`

::: toolkit/components/extensions/ExtensionStorageSync.jsm:182
(Diff revision 22)
> +          const id = ids[i];
> +          const res = txn.deleteAny(id);
> +          if (res.deleted) {
> +            changes[key] = {
> +              oldValue: res.data.data,
> +              newValue: undefined

We should omit this, rather than defining it as `undefined`.
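A minimal sketch of the shape being asked for in these two comments (a hypothetical helper, not the patch's actual code): build each change entry so that absent values are omitted entirely rather than set to `undefined`.

```javascript
// Build a storage.onChanged-style change entry. oldRecord is the previous
// stored record (or null), newValue the incoming value (or undefined on
// removal). Properties are only added when they exist.
function makeChange(oldRecord, newValue) {
  const change = {};
  if (oldRecord) {
    change.oldValue = oldRecord.data;
  }
  if (newValue !== undefined) {
    change.newValue = newValue;
  }
  return change;
}

console.log(makeChange(null, "bar"));              // { newValue: 'bar' }
console.log(makeChange({data: "bar"}, undefined)); // { oldValue: 'bar' }
```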

::: toolkit/components/extensions/ExtensionStorageSync.jsm:228
(Diff revision 22)
> +      } else if (Array.isArray(spec)) {
> +        keys = spec;
> +        records = {};
> +      } else {
> +        keys = Object.keys(spec);
> +        records = spec;

We shouldn't modify the object we were passed. We should probably just clone it into the current scope.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:235
(Diff revision 22)
> +
> +      return Promise.all(keys.map(key => {
> +        return coll.getAny(keyToId(key)).then(res => {
> +          if (res.data && res.data._status != "deleted") {
> +            records[res.data.key] = res.data.data;
> +            return res.data;

This return value doesn't appear to be used.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:252
(Diff revision 22)
> +    this.listeners.set(extensionId, listeners);
> +  },
> +
> +  removeOnChangedListener(extensionId, listener) {
> +    let listeners = this.listeners.get(extensionId);
> +    listeners.delete(listener);

We should delete this from the map if it's empty. And it might be better to use a DefaultWeakMap with the extension as the key, rather than its ID.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:256
(Diff revision 22)
> +    let listeners = this.listeners.get(extensionId);
> +    listeners.delete(listener);
> +  },
> +
> +  notifyListeners(extensionId, changes) {
> +    let listeners = this.listeners.get(extensionId);

Nit: `let listeners = ... || new Set();`

::: toolkit/components/extensions/ExtensionStorageSync.jsm:259
(Diff revision 22)
> +        try {
> +          listener(changes);
> +        } catch(e) {
> +          Cu.reportError(e);
> +        }

We usually use the `callSafeSyncWithoutClone` helper for this.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:261
(Diff revision 22)
> +    let listeners = this.listeners.get(extensionId);
> +    if (listeners) {
> +      for (let listener of listeners) {
> +        try {
> +          listener(changes);
> +        } catch(e) {

Nit: Add space before the opening `(`

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:82
(Diff revision 22)
> +      {method: "get", args: ["foo"], result: {}},
> +      {method: "set", args: [{foo: "bar"}], result: undefined},
> +      {method: "remove", args: ["foo"], result: undefined},
> +      {method: "clear", args: [], result: undefined},

The `result` property doesn't seem to actually be used.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:88
(Diff revision 22)
> +      {method: "set", args: [{foo: "bar"}], result: undefined},
> +      {method: "remove", args: ["foo"], result: undefined},
> +      {method: "clear", args: [], result: undefined},
> +    ];
> +    apiTests.forEach(testDef => {
> +      promises.push(new Promise((resolve, reject) => {

No need for `new Promise`. Just return the promise chain created by the test method.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:89
(Diff revision 22)
> +      {method: "remove", args: ["foo"], result: undefined},
> +      {method: "clear", args: [], result: undefined},
> +    ];
> +    apiTests.forEach(testDef => {
> +      promises.push(new Promise((resolve, reject) => {
> +        browser.storage.sync[testDef.method].apply(undefined, testDef.args).then(

`browser.storage.sync[testDef.method](...testDef.args)`

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:90
(Diff revision 22)
> +      {method: "clear", args: [], result: undefined},
> +    ];
> +    apiTests.forEach(testDef => {
> +      promises.push(new Promise((resolve, reject) => {
> +        browser.storage.sync[testDef.method].apply(undefined, testDef.args).then(
> +          () => reject("didn't fail with extension.storage.sync.enabled = false"),

We should use `browser.test.fail` rather than rejecting here, or the test will just time out after `Promise.all` fails to resolve.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:103
(Diff revision 22)
> +    });
> +
> +    Promise.all(promises).then(() => browser.test.notifyPass("flag needed"));
> +  }
> +
> +  let extension = ExtensionTestUtils.loadExtension({

We should test the actual value of the preference first.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:134
(Diff revision 22)
> +      },
> +      background: `(${background})()`,
> +    }, extensionId);
> +  }
> +
> +  SpecialPowers.setBoolPref(STORAGE_SYNC_PREF, true);

`yield SpecialPowers.pushPrefEnv(...)`

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:168
(Diff revision 22)
> -        browser.test.assertEq(obj1[prop].oldValue, obj2[prop].oldValue);
> -        browser.test.assertEq(obj1[prop].newValue, obj2[prop].newValue);
> +          browser.test.assertTrue(obj1[prop] !== undefined && obj2[prop] !== undefined,
> +                                  `checkChanges ${areaName} ${prop} is missing (${message})`);

Separate checks for each object, please.

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:319
(Diff revision 22)
> -  manifest: {
> +    manifest: {
> -    permissions: ["storage"],
> +      permissions: ["storage"],
> -  },
> +    },
> -};
> +  };
>  
> -add_task(function* test_backgroundScript() {
> +  SpecialPowers.setBoolPref(STORAGE_SYNC_PREF, true);

`pushPrefEnv`

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:5
(Diff revision 22)
> +"use strict";
> +
> +const {keyToId, idToKey} = Cu.import("resource://gre/modules/ExtensionStorageSync.jsm");
> +
> +function run_test() {

Please use `add_task(function* ...)`

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:6
(Diff revision 22)
> +  equal(keyToId("foo"), "key-foo");
> +  equal(keyToId("my-new-key"), "key-my_2D_new_2D_key");
> +  equal(keyToId(""), "key-");
> +  equal(keyToId("Kinto's fancy_string"), "key-Kinto_27_s_20_fancy_5F_string");

We should test some multi-byte characters and control characters. And multi-line strings.

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:11
(Diff revision 22)
> +  equal(keyToId("foo"), "key-foo");
> +  equal(keyToId("my-new-key"), "key-my_2D_new_2D_key");
> +  equal(keyToId(""), "key-");
> +  equal(keyToId("Kinto's fancy_string"), "key-Kinto_27_s_20_fancy_5F_string");
> +
> +  const KEYS = ["foo", "my-new-key", "", "Kinto's fancy_string"];

Multi-byte and control characters here, too.

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:12
(Diff revision 22)
> +  equal(keyToId("my-new-key"), "key-my_2D_new_2D_key");
> +  equal(keyToId(""), "key-");
> +  equal(keyToId("Kinto's fancy_string"), "key-Kinto_27_s_20_fancy_5F_string");
> +
> +  const KEYS = ["foo", "my-new-key", "", "Kinto's fancy_string"];
> +  for(let key of KEYS) {

Nit: Please add space before `(`

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:13
(Diff revision 22)
> +  equal(keyToId(""), "key-");
> +  equal(keyToId("Kinto's fancy_string"), "key-Kinto_27_s_20_fancy_5F_string");
> +
> +  const KEYS = ["foo", "my-new-key", "", "Kinto's fancy_string"];
> +  for(let key of KEYS) {
> +    equal(idToKey(keyToId(key)), key);

We should probably also test some invalid keys that we expect to return `null`.
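Piecing together the escaping scheme implied by these test expectations, here is a sketch of a round-trip pair (a reconstruction for illustration, not the actual ExtensionStorageSync.jsm implementation): non-alphanumeric code points become `_HEX_` (uppercase hex), with a `key-` prefix.

```javascript
// Escape a storage key into a Kinto-safe record ID.
function keyToId(key) {
  const escaped = Array.from(key).map(c =>
    /[a-zA-Z0-9]/.test(c)
      ? c
      : "_" + c.codePointAt(0).toString(16).toUpperCase() + "_"
  ).join("");
  return `key-${escaped}`;
}

// Invert keyToId; return null for IDs that aren't in the expected format.
function idToKey(id) {
  if (!id.startsWith("key-")) {
    return null;
  }
  const unprefixed = id.slice(4);
  if (!/^(?:[a-zA-Z0-9]|_[0-9A-F]+_)*$/.test(unprefixed)) {
    return null;
  }
  return unprefixed.replace(/_([0-9A-F]+)_/g,
    (_, hex) => String.fromCodePoint(parseInt(hex, 16)));
}

console.log(keyToId("my-new-key")); // "key-my_2D_new_2D_key"
console.log(idToKey("bogus"));      // null
```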

::: toolkit/components/extensions/test/xpcshell/xpcshell.ini:16
(Diff revision 22)
>  [test_ext_contexts.js]
>  [test_ext_json_parser.js]
>  [test_ext_manifest_content_security_policy.js]
>  [test_ext_manifest_incognito.js]
>  [test_ext_schemas.js]
> +[test_storage_sync.js]

`test_ext_storage_sync.js`, and please keep sorted.
Attachment #8757379 - Flags: review?(kmaglione+bmo)
I'm afraid I'm having real trouble following these patches - eg, of the last couple I was flagged on, one added 3 new methods to one file, one added an apparently related single method to a different file, but none of the patches showed how these new methods would be called, which is really necessary to understand the patches.

Could you please roll some of them up? Specifically, I think the first 3 I'm flagged for should be a single patch (ie, the entire engine minus crypto), then the next 3 which all relate to crypto should be rolled up, and finally the last patch (the logging one) should really just be rolled back into that first patch (ie, the engine itself).

The UX question is still pending, and I don't think we will be able to land any of this until it is resolved one way or another.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review72582

::: toolkit/components/extensions/ExtensionStorageSync.jsm:256
(Diff revision 22)
> +    let listeners = this.listeners.get(extensionId);
> +    listeners.delete(listener);
> +  },
> +
> +  notifyListeners(extensionId, changes) {
> +    let listeners = this.listeners.get(extensionId);

Why is that better than checking for nullness of listeners, below?

::: toolkit/components/extensions/test/mochitest/test_ext_storage.html:134
(Diff revision 22)
> +      },
> +      background: `(${background})()`,
> +    }, extensionId);
> +  }
> +
> +  SpecialPowers.setBoolPref(STORAGE_SYNC_PREF, true);

The most recent version of this test is an xpcshell test, so this has to use Services.prefs -- is there a better way to do that?

::: toolkit/components/extensions/test/xpcshell/test_storage_sync.js:5
(Diff revision 22)
> +"use strict";
> +
> +const {keyToId, idToKey} = Cu.import("resource://gre/modules/ExtensionStorageSync.jsm");
> +
> +function run_test() {

Is that preferred even when the tests are synchronous? OK, I will do that.
Attachment #8773413 - Attachment is obsolete: true
Attachment #8773413 - Flags: review?(markh)
Attachment #8773415 - Attachment is obsolete: true
Attachment #8773415 - Flags: review?(markh)
Attachment #8784659 - Attachment is obsolete: true
Attachment #8784659 - Flags: review?(markh)
Attachment #8773419 - Attachment is obsolete: true
Attachment #8773419 - Flags: review?(markh)
Attachment #8773420 - Attachment is obsolete: true
Attachment #8773420 - Flags: review?(bob.silverberg)
I've just pushed a new patch series, folded per :markh's request, and also trying to address :John-Galt's comments. Unfortunately, one of those changes breaks my implementation of syncing, so some of the tests will no longer work, but I wanted to get this out to facilitate review.
Comment on attachment 8773412 [details]
Bug 1253740 - Introduce extension-storage engine with a sanity test,

https://reviewboard.mozilla.org/r/66112/#review73042

This is looking fairly good to me - quite a few comments but most should be easy.

::: services/sync/modules/engines/extension-storage.js:33
(Diff revision 5)
> +  _trackerObj: ExtensionStorageTracker,
> +  // we don't need these since we implement our own sync logic
> +  _storeObj: undefined,
> +  _recordObj: undefined,
> +
> +  syncPriority: 10,

What are the implications of syncing this data after the addons themselves are synced? e.g., if a user on a different device adds an addon which also syncs its data, is it likely to be a problem that the data for the addon isn't available as it is initialized?

::: services/sync/modules/engines/extension-storage.js:36
(Diff revision 5)
> +  _recordObj: undefined,
> +
> +  syncPriority: 10,
> +
> +  _sync: function () {
> +    return ExtensionStorageSync.syncAll();

Sync isn't based on promises :( You need to use Async.promiseSpinningly here so the call blocks until the promise resolves or rejects.

::: services/sync/modules/engines/extension-storage.js:42
(Diff revision 5)
> +  },
> +};
> +
> +function ExtensionStorageTracker(name, engine) {
> +  Tracker.call(this, name, engine);
> +  Svc.Obs.add("weave:engine:start-tracking", this);

The tracker's constructor already adds these observers.

::: services/sync/modules/engines/extension-storage.js:44
(Diff revision 5)
> +
> +function ExtensionStorageTracker(name, engine) {
> +  Tracker.call(this, name, engine);
> +  Svc.Obs.add("weave:engine:start-tracking", this);
> +  Svc.Obs.add("weave:engine:stop-tracking", this);
> +}

You should consider overriding some additional tracker functions, particularly loadChangedIDs, so we don't try and read a file that will never exist on every startup.

::: services/sync/services-sync.js:34
(Diff revision 5)
>  pref("services.sync.engine.history", true);
>  pref("services.sync.engine.passwords", true);
>  pref("services.sync.engine.prefs", true);
>  pref("services.sync.engine.tabs", true);
>  pref("services.sync.engine.tabs.filteredUrls", "^(about:.*|chrome://weave/.*|wyciwyg:.*|file:.*|blob:.*)$");
> +pref("services.sync.engine.extension-storage", true);

This doesn't seem to match the approach in bug 1290440 - I think we should close that bug as a duplicate and implement that strategy here.

::: services/sync/tests/unit/test_load_modules.js:16
(Diff revision 5)
>    "engines/clients.js",
>    "engines/forms.js",
>    "engines/history.js",
>    "engines/passwords.js",
>    "engines/prefs.js",
> +  "engines/storage-sync.js",

This file doesn't seem to appear in this patch - I assume you meant extension-storage.js?
Attachment #8773412 - Flags: review?(markh)
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

Richard, I'd feel more comfortable if you could find the time to have a look at this.
Attachment #8784658 - Flags: review?(markh) → review?(rnewman)
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review73276

::: toolkit/components/extensions/ExtensionStorageSync.jsm:288
(Diff revision 4)
> +      }
> +      return Promise.all(promises);
> +    });
> +  },
> +
> +  sync(extension, collection) {

Hey :natim, can you look at this method and see if it seems bonkers to you, since :leplatrem is on vacation?
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review73276

> Hey :natim, can you look at this method and see if it seems bonkers to you, since :leplatrem is on vacation?

Hey :glasserc I have looked at the method and it looks good to me.

I am just not sure about the comment saying: // FIXME: this should support syncing to self-hosted

But apart from that it looks good.
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review73620

::: toolkit/components/extensions/ExtensionStorageSync.jsm:368
(Diff revision 4)
> +   *
> +   * @param {Array<string>} extIds
> +   *                        The IDs of the extensions which need keys.
> +   * @returns {Promise<CollectionKeyManager>}
> +   */
> +  ensureKeysFor(extIds) {

Sorry, :natim, can you look at this one too?
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review73276

> Hey :glasserc I have looked at the method and it looks good to me.
> 
> I am just not sure about the comment saying: // FIXME: this should support syncing to self-hosted
> 
> But appart from that it looks good.

Argh, I hate Reviewboard. The change I *actually* meant was the `ensureKeysFor` method.
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review73624

::: services/sync/modules/engines/extension-storage.js:104
(Diff revision 3)
> +    return this.getKeys().then(keyBundle => {
> +      if (record.ciphertext) {
> +        throw new Error("Attempt to reencrypt??");
> +      }
> +      let IV = Svc.Crypto.generateRandomIV();
> +      let id = record.id;

You should check that the ID is sane (non-empty, for example).

::: services/sync/modules/engines/extension-storage.js:154
(Diff revision 3)
> +  /**
> +   * Retrieve keys to use during encryption.
> +   *
> +   * Returns a Promise<KeyBundle>.
> +   */
> +  getKeys() {

`async getKeys`

::: services/sync/modules/engines/extension-storage.js:169
(Diff revision 3)
> +  EncryptionRemoteTransformer.call(this);
> +};
> +UserKeyEncryptionRemoteTransformer.prototype = {
> +  __proto__: EncryptionRemoteTransformer.prototype,
> +  getKeys() {
> +    return this._fxaService.getSignedInUser().then(user => {

Prefer

```
async getKeys() {
  let user = await this._fxaService.getSignedInUser();
  if (!user) {
    throw Error("U...");
  }
  …
  return bundle;
}
```

throughout. Manual promise usage is error-prone.

::: services/sync/modules/record.js:301
(Diff revision 3)
> +  /**
> +   * Generate a new CollectionKeyManager that has the same attributes
> +   * as this one.
> +   */
> +  clone() {
> +    const c2 = new CollectionKeyManager();

You use `const` here and `let` in `ensureKeysFor`, which isn't all that consistent.

This would be tidier if you add a `CollectionKeyManager` constructor, so this becomes:

```
  let keysCopy = {};
  for ( … ) {
    …
  }
  return new CollectionKeyManager(this.lastModified, this._default, keysCopy);
```

::: services/sync/modules/record.js:416
(Diff revision 3)
>    },
>  
> +  /**
> +   * Generate a new default key, since without one you cannot use setContents.
> +   */
> +  generateDefaultKey: function() {

There's a `newKeys` method that this overlaps with.

::: services/sync/modules/record.js:428
(Diff revision 3)
> +   * collections.
> +   */
> +  hasKeysFor: function(collections) {
> +    // We can't use filter() here because sometimes collections is an iterator.
> +    for (let collection of collections) {
> +      if (!this._collections[collection]) return false;

Always braces and newlines for conditionals.

::: services/sync/modules/record.js:440
(Diff revision 3)
> +   * given collections (creating new ones for collections where we
> +   * don't already have keys).
> +   */
> +  ensureKeysFor: function(collections) {
> +    let c2 = this.clone();
> +    for (let c of collections) {

This will overwrite any existing keys for collections. And again, this becomes clearer if you use a constructor instead:

```
let missingCollections = ...;
let newKeys = this.copyOfKeys();
for (let c of missingCollections) {
  ...
}
return new CollectionKeyManager(this.modified, this.defaultKey, newKeys);
```

::: services/sync/tests/unit/test_extension_storage_crypto.js:1
(Diff revision 3)
> +"use strict";

License block?

::: services/sync/tests/unit/test_extension_storage_crypto.js:82
(Diff revision 3)
> +  yield* throwsGen(Utils.isHMACMismatch, function*() {
> +    yield transformer.decode(tamperedHMAC);
> +  });
> +
> +  const tamperedIV = Object.assign({}, encryptedRecord, {IV: "aaaaaaaaaaaaaaaaaaaaaa=="});
> +  yield* throwsGen(Utils.isHMACMismatch, function*() {

I'm not sure I understand why you need this. What's wrong with

```
let error;
try {
  yield transformer.decode(tamperedIV);
} catch (e) {
  error = e;
}
ok(!!error);
```

as we do elsewhere in the codebase to ensure that async functions throw?

::: services/sync/tests/unit/test_records_crypto.js:163
(Diff revision 3)
> +    do_check_eq(oldFormsKey, Service.collectionKeys._default);
> +    let newKeys = Service.collectionKeys.ensureKeysFor(["forms"]);
> +    do_check_true(newKeys.hasKeysFor(["forms"]));
> +    do_check_true(newKeys.hasKeysFor(["bookmarks", "forms"]));
> +    let newFormsKey = newKeys.keyForCollection("forms");
> +    do_check_neq(newFormsKey, oldFormsKey);

You need another test here: now call `ensureKeysFor(["forms"])` again, and make sure that the key hasn't changed!
Attachment #8784658 - Flags: review?(rnewman) → review-
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review73620

> Sorry, :natim, can you look at this one too?

It looks good to me.
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review74124

::: services/sync/tests/unit/test_extension_storage_crypto.js:82
(Diff revision 3)
> +  yield* throwsGen(Utils.isHMACMismatch, function*() {
> +    yield transformer.decode(tamperedHMAC);
> +  });
> +
> +  const tamperedIV = Object.assign({}, encryptedRecord, {IV: "aaaaaaaaaaaaaaaaaaaaaa=="});
> +  yield* throwsGen(Utils.isHMACMismatch, function*() {

I didn't like that idiom -- it seemed too low-level and not declarative enough. I wanted something more like Assert.throws, but for async functions.
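A sketch of what such a declarative helper could look like (a hypothetical `assertRejects`, not an existing test utility): it resolves with the error when the promise rejects as expected, and throws otherwise.

```javascript
// Assert.throws-style helper for async code: succeeds only if `promise`
// rejects (optionally with an error matching `predicate`).
async function assertRejects(promise, predicate) {
  try {
    await promise;
  } catch (e) {
    if (!predicate || predicate(e)) {
      return e;
    }
    throw new Error(`rejected with unexpected error: ${e}`);
  }
  throw new Error("expected promise to reject, but it resolved");
}

// Usage sketch:
assertRejects(Promise.reject(new Error("boom")), e => e.message === "boom")
  .then(() => console.log("ok"));
```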
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review73624

> This will overwrite any existing keys for collections. And again, this becomes clearer if you use a constructor instead:
> 
> ```
> let missingCollections = ...;
> let newKeys = this.copyOfKeys();
> for (let c of missingCollections) {
>   ...
> }
> return new CollectionKeyManager(this.modified, this.defaultKey, newKeys);
> ```

Great catch, as well as the test you suggested below.
Comment on attachment 8773412 [details]
Bug 1253740 - Introduce extension-storage engine with a sanity test,

https://reviewboard.mozilla.org/r/66112/#review73042

> What are the implications of syncing this data after the addons themselves are synced? eg, if a user on a different device adds and addon which also syncs its data, is it likely to be a problem that the data for the addon isn't available as it is initialized?

I think it should be mostly OK. For the most part, it should be the same as an addon being newly installed and then reconfigured. Of course, the reconfigurations happen in a seemingly random order, so I guess there are some addons that might break when this happens. Even so, I don't see any alternative -- there's no way to know what addons' data we should be syncing without knowledge of the addons themselves.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55836/#review74504

::: toolkit/components/extensions/test/xpcshell/test_ext_storage.js:157
(Diff revision 29)
> -    browser.test.assertEq("local", storage, "storage is local");
> +      browser.test.assertEq(expectedAreaName, areaName,
> +        "Expected area name received by listener");
> -    Object.assign(globalChanges, changes);
> +      Object.assign(globalChanges, changes);
> -  });
> +    });
>  
> -  function checkChanges(changes) {
> +    function checkChanges(areaName, changes, message) {

At some point the code changed so that a `remove` does not set a `newValue` into `changes` and a `set` which creates a new key does not set an `oldValue` into `changes`. I don't see any tests that verify that fact, so you might want to add that in somewhere.

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:32
(Diff revision 29)
> +    }
> +  }
> +};
> +const collectionId = extensionIdToCollectionId(loggedInUser, extensionId);
> +
> +function uuid() {

You might want to use `defineLazyGetter` for this, but, tbh, I'm never sure when we need that in a test file and when it's ok without it.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55836/#review74508

I don't have a lot of new comments on this. Other than what I mentioned nothing jumps out at me. I'm going to hand this over to Kris for his review.
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review74510

Oops, I think I made that comment on the wrong review. Love this new MozReview interface.
Attachment #8781178 - Flags: review?(bob.silverberg) → review+
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

Handing this patch over to Kris for his review.
Attachment #8781178 - Flags: review?(kmaglione+bmo)
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review73624

> I'm not sure I understand why you need this. What's wrong with
> 
> ```
> let error;
> try {
>   yield transformer.decode(tamperedIV);
> } catch (e) {
>   error = e;
> }
> ok(!!error);
> ```
> 
> as we do elsewhere in the codebase to ensure that async functions throw?

Ugh, I somehow created a separate issue when trying to reply to this.

I didn't like that idiom. It seemed too low-level to me and not declarative enough.

Of course, I just dropped it into the file where I happened to need it at the time, figuring that if other people liked it, I could move it someplace more central, and if people didn't, then I could take it out. I needed it again in another test later in test_ext_storage_sync.js, but I still don't know of a better place for the function, so I just did what you said there.
Comment on attachment 8773412 [details]
Bug 1253740 - Introduce extension-storage engine with a sanity test,

https://reviewboard.mozilla.org/r/66112/#review63110

> So does this mean I should change the implementation of the tracker, or that what I have here is good enough for now? I checked the bookmarks tracker and AFAICT it's still listening to events and updating a score. Maybe "moving away" means there's a patch in-progress that hasn't landed yet? If so, where is it? I can try to follow its approach if you prefer.

I'm marking this issue as fixed. It seems like the behavior that I have here is OK for now until someone tells me it isn't.
Attachment #8784658 - Flags: review?(markh) → review?(rnewman)
Comment on attachment 8773412 [details]
Bug 1253740 - Introduce extension-storage engine with a sanity test,

https://reviewboard.mozilla.org/r/66112/#review74944

This looks good to me!

::: services/sync/modules/engines/extension-storage.js:71
(Diff revision 7)
> +    // own, so let's just increment score a bit.
> +    this.score += SCORE_INCREMENT_MEDIUM;
> +  },
> +
> +  get enabled() {
> +    const forced = Svc.Prefs.get("engine." + this.prefName + ".force", undefined);

We should comment what/why we are doing this.

::: services/sync/services-sync.js:34
(Diff revision 7)
>  pref("services.sync.engine.history", true);
>  pref("services.sync.engine.passwords", true);
>  pref("services.sync.engine.prefs", true);
>  pref("services.sync.engine.tabs", true);
>  pref("services.sync.engine.tabs.filteredUrls", "^(about:.*|chrome://weave/.*|wyciwyg:.*|file:.*|blob:.*)$");
> +pref("services.sync.engine.extension-storage", true);

I don't think we need this pref any more, right?
Attachment #8773412 - Flags: review?(markh) → review+
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review72176

> `yield SpecialPowers.pushPrefEnv(...)`

I asked Kris in IRC and he said using Preferences.jsm is good enough, since this has turned into an xpcshell test and SpecialPowers isn't available.
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review73624

> Ugh, I somehow created a separate issue when trying to reply to this.
> 
> I didn't like that idiom. It seemed too low-level to me and not declarative enough.
> 
> Of course, I just dropped it into the file where I happened to need it at the time, figuring that if other people liked it, I could move it someplace more central, and if people didn't, then I could take it out. I needed it again in another test later in test_ext_storage_sync.js, but I still don't know where a better place for the function should be, so I just did what you said there.

I spoke with rnewman on IRC and I got the impression that he didn't love this but he didn't hate it enough to make me change it.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review52522

> Now that I have the possibility of using transactions at this level, I could pull the `notifyListeners` into the transaction. That feels kind of wrong to me because I'm not sure how long the listeners will take to run and I'll be blocking the main event loop, plus I just generically feel bad for having more stuff in a transaction than I need to. But that might solve this issue as well. What do you think?

I spoke about this with Kris on IRC. It seems like what we have here is OK. Only one readwrite transaction can run at a time (see https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB), and tasks are run in the order that they are queued, so the notifications for the first transaction should be run before any notifications for the second transaction.
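For the record, a minimal model of the ordering guarantee we're relying on (this is a sketch of queued-task semantics, not the real IndexedDB or the real `notifyListeners`):

```javascript
// Tasks are chained on a single queue, so each "transaction" and its
// follow-up notification complete before the next transaction starts.
const order = [];
let queue = Promise.resolve();

function runTransaction(name, work) {
  queue = queue.then(async () => {
    await work();
    // Notifications fire in queue order, after the transaction's work.
    order.push(`${name}:notify`);
  });
  return queue;
}

const done = Promise.all([
  runTransaction("txn1", async () => order.push("txn1:write")),
  runTransaction("txn2", async () => order.push("txn2:write")),
]);
```

Under this model, `txn1`'s notification always lands before anything from `txn2`.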
The most recent version of this patch series should address all the issues reported in review. There is still one issue that I uncovered during testing, and that's the fact that I seem not to be getting notified when the user changes their password. I'm hoping to talk to :markh about this when he comes online. (It's the last patch.)

For the other patches, :John-Galt, :rnewman, could you please take another look?
Comment on attachment 8773420 [details]
Add code that syncs and tests,

https://reviewboard.mozilla.org/r/66128/#review68364

> Yeah, that sounds like a default preference. I'm not sure where they go, but you're right, there is likely a place for them. Can you see if you can track that down?

I put it in all.js.
Attachment #8784658 - Flags: review?(markh) → review?(rnewman)
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review76454

r+ with comments.

::: services/sync/modules/engines/extension-storage.js:4
(Diff revision 6)
>  /* This Source Code Form is subject to the terms of the Mozilla Public
>   * License, v. 2.0. If a copy of the MPL was not distributed with this
>   * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
>  

Any reason not to `"use strict";`?

::: services/sync/modules/engines/extension-storage.js:8
(Diff revision 6)
>   * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
>  
> -this.EXPORTED_SYMBOLS = ['ExtensionStorageEngine'];
> +this.EXPORTED_SYMBOLS = ['ExtensionStorageEngine', 'EncryptionRemoteTransformer',
> +                         'UserKeyEncryptionRemoteTransformer'];
>  
>  var {classes: Cc, interfaces: Ci, utils: Cu} = Components;

`const`

::: services/sync/modules/engines/extension-storage.js:126
(Diff revision 6)
> + *
> + * This is an "abstract base class". Subclass this and override
> + * getKeys() to use it.
> + */
> +this.EncryptionRemoteTransformer = function() {
> +}

Semicolon after assignment.

::: services/sync/modules/engines/extension-storage.js:135
(Diff revision 6)
> +    const keyBundle = yield this.getKeys();
> +    if (record.ciphertext) {
> +      throw new Error("Attempt to reencrypt??");
> +    }
> +    let id = record.id;
> +    if (! record.id) {

Nit: whitespace.

::: services/sync/modules/engines/extension-storage.js:148
(Diff revision 6)
> +    const encryptedResult = {ciphertext, IV, hmac, id};
> +    if (record.hasOwnProperty("last_modified")) {
> +      encryptedResult.last_modified = record.last_modified;
> +    }
> +    return encryptedResult;
> +  }),

Newlines between methods.

::: services/sync/modules/engines/extension-storage.js:161
(Diff revision 6)
> +
> +    if (computedHMAC != record.hmac) {
> +      Utils.throwHMACMismatch(record.hmac, computedHMAC);
> +    }
> +
> +      // Handle invalid data here. Elsewhere we assume that cleartext is an object.

Nit: fix indenting.

::: services/sync/modules/record.js:414
(Diff revision 6)
> +  /**
> +   * Create a new default key.
> +   *
> +   * @returns {BulkKeyBundle}
> +   */
> +  newDefaultKey: function() {

`newDefaultKeyBundle` would be clearer, I think.

Nit: `function` spacing is, IIRC:

```
method() {
   ...
   let x = function () {...}
}

foo: function () {
  ...
}
foo: function namedFunction() {
}
```

That is, there's always a space after `function`, then a name (which might be zero-length), then parens.

::: services/sync/modules/record.js:450
(Diff revision 6)
> +   */
> +  ensureKeysFor: function(collections) {
> +    const newKeys = Object.assign({}, this._collections);
> +    for (let c of collections) {
> +      // if (newKeys[c]) {
> +      //   continue;  // don't replace existing keys

Shouldn't this be uncommented?

::: services/sync/modules/record.js:519
(Diff revision 6)
>        for (let k in colls) {
>          let v = colls[k];
>          if (v) {
>            let keyObj = new BulkKeyBundle(k);
>            keyObj.keyPairB64 = v;
>            if (keyObj) {

I wonder why this would ever be falsy?
Attachment #8784658 - Flags: review?(rnewman) → review+
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review76458

This seems fine, modulo the obvious concern that you need to see when we detect that kB has changed, not when the password has changed per se.

I don't know which event is fired when the user chooses to reset their password. Mark, do you?

::: services/sync/modules/engines/extension-storage.js:20
(Diff revision 2)
>  Cu.import("resource://services-common/async.js");
>  XPCOMUtils.defineLazyModuleGetter(this, "ExtensionStorageSync",
>                                    "resource://gre/modules/ExtensionStorageSync.jsm");
>  XPCOMUtils.defineLazyModuleGetter(this, "fxAccounts",
>                                    "resource://gre/modules/FxAccounts.jsm");
> +XPCOMUtils.defineLazyModuleGetter(this, "ON_PASSWORD_CHANGED_NOTIFICATION",

Per IRC chat yesterday: this should be in reference to password reset, right?

You probably *do* care about when the password changes -- at least indirectly -- because you won't have valid FxA credentials to make requests, but in crypto terms you care about when `kB` has changed.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:449
(Diff revision 2)
>        yield cryptoColl.clear();
>      });
>    },
>  
>    /**
> +   * Call to signal that a user has just changed their password.

s/changed/reset

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:1
(Diff revision 2)
>  "use strict";

Missing license block.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review76460

Looks good to me for the most part. There's just the open question about whether `.clear()` needs to happen in an atomic transaction. I think it probably does, but I could be convinced otherwise.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:103
(Diff revision 27)
> +    Cu.reportError(`Internal error: trying to close extension ${extension.id}` +
> +                   "that doesn't have a collection");

This should use `new Error()` so we get a proper stack trace.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:125
(Diff revision 27)
> + * @returns {Promise<()>} Promise that resolves when everything is clean.
> + */
> +function cleanUpForContext(extension, context) {
> +  const contexts = extensionContexts.get(extension);
> +  if (!contexts) {
> +    Cu.reportError(`Internal error: cannot find any contexts for extension ${extension.id}`);

Same as above.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:130
(Diff revision 27)
> +    Cu.reportError(`Internal error: cannot find any contexts for extension ${extension.id}`);
> +    // Try to shut down cleanly anyhow?
> +    return closeExtensionCollection(extension);
> +  }
> +  contexts.delete(context);
> +  if (contexts.size === 0) {

We should also remove the extension from the contexts map.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:264
(Diff revision 27)
> +    const res = yield coll.list();
> +    const records = res.data;
> +    const keys = records.map(record => record.key);
> +    yield this.remove(extension, keys, context);

It seems like we should be doing this in an atomic transaction?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:320
(Diff revision 27)
> +
> +  notifyListeners(extension, changes) {
> +    let listeners = this.listeners.get(extension) || new Set();
> +    if (listeners) {
> +      for (let listener of listeners) {
> +        ExtensionUtils.runSafeSyncWithoutClone(listener, changes);

We'd typically just import this into the global scope near the top of the file with:

    const {
      runSafeSyncWithoutClone,
    } = ExtensionUtils;

::: toolkit/components/extensions/test/xpcshell/test_ext_storage.js:276
(Diff revision 27)
> -    browser.test.assertEq(true, obj.bool, "bool part correct");
> +        browser.test.assertEq(true, obj.bool, "bool part correct");
> -    browser.test.assertEq(undefined, obj.undef, "undefined part correct");
> +        browser.test.assertEq(undefined, obj.undef, "undefined part correct");
> -    browser.test.assertEq(undefined, obj.func, "function part correct");
> +        browser.test.assertEq(undefined, obj.func, "function part correct");
> -    browser.test.assertEq(undefined, obj.window, "window part correct");
> +        browser.test.assertEq(undefined, obj.window, "window part correct");
> -    browser.test.assertEq("1970-01-01T00:00:00.000Z", obj.date, "date part correct");
> +        browser.test.assertEq("1970-01-01T00:00:00.000Z", obj.date, "date part correct");
> -    browser.test.assertEq("/regexp/", obj.regexp, "date part correct");
> +        browser.test.assertEq("/regexp/", obj.regexp, "regexp part correct");

Thanks :)
Attachment #8757379 - Flags: review?(kmaglione+bmo)
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

Clearing flag until you figure out what to do here :D
Attachment #8787767 - Flags: review?(rnewman)
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review76458

I was able to listen to `ON_ACCOUNT_STATE_CHANGE_NOTIFICATION`, which was fired when the password was reset, but before I was logged in again. I guess I need to keep track of some kind of global state relating to whether I've successfully uploaded keys, rather than just responding immediately to signals being fired. Does that sound right to you?

> Per IRC chat yesterday: this should be in reference to password reset, right?
> 
> You probably *do* care about when the password changes -- at least indirectly -- because you won't have valid FxA credentials to make requests, but in crypto terms you care about when `kB` has changed.

It seems like when the password *changes*, the OAuth credentials don't change -- I verified this empirically on my Nightly. The code I have in this Mozreview request is using the same Bearer token before and after the password change and Kinto seems to validate it OK.

When I reset my password, it seems like I get logged out, but when I get logged back in again, my code seems to work OK -- the OAuth client produces new tokens automatically, without my having to flush a cache or anything.
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review76454

> Semicolon after assignment.

Ugh, `./mach eslint` doesn't catch this.

> `newDefaultKeyBundle` would be clearer, I think.
> 
> Nit: `function` spacing is, IIRC:
> 
> ```
> method() {
>    ...
>    let x = function () {...}
> }
> 
> foo: function () {
>   ...
> }
> foo: function namedFunction() {
> }
> ```
> 
> That is, there's always a space after `function`, then a name (which might be zero-length), then parens.

Is that true? In this file, I see a couple `foo: function(...)` (like `asWBO` or `generateNewKeysWBO`) but no `foo: function (...)`. But maybe this file isn't the greatest example. In this case, I'm just going to go ahead and turn it into `newDefaultKeyBundle()` since we can see eye-to-eye on that form.

> Shouldn't this be uncommented?

Ouch. Yes. Actually, I had a test for this, and the test was failing, but the way I was running the test wasn't exposing that failure. See https://bugzilla.mozilla.org/show_bug.cgi?id=1225781 .

> I wonder why this would ever be falsy?

I can't see any reason why it should. I took out the check and tests seem to pass.
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review76460

> It seems like we should be doing this in an atomic transaction?

I don't think we can. The transaction abstraction offered by Kinto requires you to know what record IDs you want to be available in the transaction, and we can't determine that without another transaction. Put another way, the list() call isn't available in a transaction. We also can't remove() any records without having their data be available in the transaction.
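To make that concrete, the two-step `clear()` looks roughly like this (`coll` stands in for a Kinto collection; this is a sketch, not the actual module code):

```javascript
// Sketch: list() must run first, outside any transaction, to learn the
// record keys; only then can we remove them, one transaction per remove.
async function clearAll(coll) {
  const res = await coll.list();
  const keys = res.data.map(record => record.key);
  for (const key of keys) {
    await coll.remove(key);
  }
  return keys;
}
```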

> We'd typically just import this into the global scope near the top of the file with:
> 
>     const {
>       runSafeSyncWithoutClone,
>     } = ExtensionUtils;

OK, I'll do that. But doesn't that defeat the point of defining a lazy getter?
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review76460

> I don't think we can. The transaction abstraction offered by Kinto requires you to know what record IDs you want to be available in the transaction, and we can't determine that without another transaction. Put another way, the list() call isn't available in a transaction. We also can't remove() any records without having their data be available in the transaction.

OK. It's probably not a big deal, anyway.

> OK, I'll do that. But doesn't that defeat the point of defining a lazy getter?

Yeah, we typically don't import ExtensionTestUtils lazily. It will always be loaded before this module, in any case.
Attachment #8784658 - Flags: review?(markh) → review?(rnewman)
I've posted a new version of this patch series that handles 401s on token expiry and checks whether it needs to re-upload keys every time the user logs in. Unit tests pass, except that I can't make the actual ONLOGIN_NOTIFICATION event occur when I log in and out. Is this the right notification to use? Am I listening to it wrong? Needinfoing rfkelly in yet another cry for help.
Flags: needinfo?(rfkelly)
> I can't make the actual ONLOGIN_NOTIFICATION event occur when
> I log in and out. Is this the right notification to use? Am I listening to it wrong?

I'm afraid I'm not familiar enough with the client-side sync/fxa code to answer questions at this depth; deferring to :markh
Flags: needinfo?(rfkelly) → needinfo?(markh)
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review77486

::: toolkit/components/extensions/ExtensionStorageSync.jsm:77
(Diff revision 5)
>  let cryptoCollectionPromise = null;
>  // Borrow logger from Sync.
>  const log = Log.repository.getLogger("Sync.Engine.Extension-Storage");
> +// Hash of the last kB we saw. If this changes, the user has reset
> +// their password, and we need to re-upload our keyring.
> +let lastKBHash = null;

As a high-level comment, I feel like this is being complicated by the fact that you're using Kinto's transformers to apply the encryption magically.

The question you really want to answer is, "what kB was used to encrypt the data that's currently stored, and does it differ from the one I currently have?".  Can you answer that by inspecting what's actually stored in Kinto, rather than keeping track of it in code here?

For example, could you store kbHash as a plaintext attribute on your keyring record?  That would let you inspect it directly, and also avoid having to maintain a synthetic "passwordChanged" attribute to force Kinto to re-sync it.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:581
(Diff revision 5)
> +      // different password. In this case, we will fail (throw an
> +      // exception) because we can't decrypt the record. This behavior
> +      // is correct because we have absolutely no chance of resolving
> +      // conflicts intelligently -- in particular, we can't prove that
> +      // uploading our key won't erase keys that have already been
> +      // used on remote data.

If a user is unfortunate enough to get into this state, how do they get themselves out of it?
(In reply to Ethan Glasser-Camp (:glasserc) from comment #288)
> Unit tests pass, except that I can't make the actual ONLOGIN_NOTIFICATION
> event occur when I log in and out. Is this the right notification to use? Am
> I listening to it wrong? Needinfoing rfkelly in yet another cry for help.

The problem is that you are adding the observer once Sync has asked your engine to start tracking - but Sync will never do that until the user has actually logged in (although there may be an edge case where it will work - when a profile is running and configured for Sync and the password is changed on a different device)

You should just listen for this notification as your module loads rather than in start-tracking. However, it seems possible that your module will be loaded by Sync - in which case you will not see the notification either - you probably need to ensure at module load time that the keys are what you last had - you may already do this, but I didn't check. If you can take Ryan's suggestion and work out a cheap way to verify the key directly, you can probably just do this unconditionally as you sync and avoid all this complication.
Flags: needinfo?(markh)
(In reply to Mark Hammond [:markh] from comment #291)
> (In reply to Ethan Glasser-Camp (:glasserc) from comment #288)
> > Unit tests pass, except that I can't make the actual ONLOGIN_NOTIFICATION
> > event occur when I log in and out. Is this the right notification to use? Am
> > I listening to it wrong? Needinfoing rfkelly in yet another cry for help.
> 
> The problem is that you are adding the observer once Sync has asked your
> engine to start tracking - but Sync will never do that until the user has
> actually logged in (although there may be an edge case where it will work -
> when a profile is running and configured for Sync and the password is
> changed on a different device)
> 
> You should just listen for this notification as your module loads rather
> than in start-tracking. However, it seems possible that your module will be
> loaded by Sync - in which case you also will not see the notification either
> - you probably need to ensure at module load time that the keys are what you
> last had - you may already do this, but I didn't check. If you can take
> Ryan's suggestion and work out a cheap way to verify the key directly, you
> can probably just do this unconditionally as you Sync and avoid all this
> complication.

I'd like to do both -- check on every sync, but also upload a new keyring ASAP when a user's kB changes. This seems like the right thing to do to avoid conflicts. What do you think about my calling a "checkSyncKeyring" method in `startTracking`?
Flags: needinfo?(markh)
(In reply to Ethan Glasser-Camp (:glasserc) from comment #292)
> I'd like to do both -- check on every sync, but also upload a new keyring
> ASAP when a user's kB changed. This seems like the right thing to do to
> avoid conflicts. What do you think about my calling a "checkSyncKeyring"
> method in `startTracking`?

startTracking doesn't really seem like the correct place - typically you can expect to see that called exactly once, immediately after your engine is created - so (a) you might as well do it as the engine is created and (b) being called once doesn't seem like often enough to meet your requirements. Is there any reason you can't do it at the start of the actual Sync process?
Flags: needinfo?(markh)
(In reply to Mark Hammond [:markh] from comment #293)
> startTracking doesn't really seem like the correct place - typically you can
> expect to see that called exactly once, immediately after your engine is
> created - so (a) you might as well do it as the engine is created and (b)
> being called once doesn't seem like often enough to meet your requirements.

I just did some testing and it seems like the engine and tracker are created once at what I assume is module load time, but the tracker is started/stopped when the user logs in and out. This is what I did in my most recent patch. It may be a little slapdash, sorry, I'm late for dinner!

> Is there any reason you can't do it at the start of the actual Sync process?

No, like I said, I'd like to do both.
(In reply to Ethan Glasser-Camp (:glasserc) from comment #299)
> I just did some testing and it seems like the engine and tracker are created
> once at what I assume is module load time, but the tracker is
> started/stopped when the user logs in and out. This is what I did in my most
> recent patch.

Yes, that's correct - but in reality, that's an edge case. Probably the more important case is a user who doesn't actually "disconnect" from Sync, but still needs to re-authenticate (ie, the ONLOGIN case). IOW, doing this in startTracking isn't going to hurt, but I doubt it will help much.
(In reply to Mark Hammond [:markh] from comment #300)
> Yes, that's correct - but in reality, that's an edge case. The probably more
> important case is a user who doesn't actually "disconnect" from Sync, but
> still need to re-authenticate (ie, the ONLOGIN case). IOW, doing this in
> startTracking isn't going to hurt, but I doubt it will help much.

Ah! OK. You also suggested I could listen to the notification at module load time. I tried playing around with this approach -- does this mean adding to module-level in extension-storage.js something like the following?

const LoginListener = {
  observe(subject, topic, data) {
    switch (topic) {
    case ONVERIFIED_NOTIFICATION:
      return Async.promiseSpinningly(ExtensionStorageSync.checkSyncKeyRing());
    }
  },
};
Svc.Obs.add(ONVERIFIED_NOTIFICATION, LoginListener);
(In reply to Ethan Glasser-Camp (:glasserc) from comment #301)
> Ah! OK. You also suggested I could listen to the notification at module load
> time. I tried playing around with this approach -- does this mean adding to
> module-level in extension-storage.js something like the following?

Yep. Alternatively, you could stick the observe method directly on the ExtensionStorageSync object and add the listener in the constructor. Alternatively(2), the observer can actually be a function (ie, you can pass a function to the .add() method rather than an object).

Also worth considering is that ideally you shouldn't block in that call. For example, you could do something like:

    case ONVERIFIED_NOTIFICATION:
      this.promiseKeyRing = ExtensionStorageSync.checkSyncKeyRing();

and in your actual sync method, do something like: this.promiseKeyRing.then(() => { actual sync code })

obviously it would be a little more complicated than that, but you get the idea. At some point in the next year we want to make Sync promise-based, at which point promiseSpinningly would die.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review76458

> It seems like when the password *changes*, the OAuth credentials don't change -- I verified this empirically on my Nightly. The code I have in this Mozreview request is using the same Bearer token before and after the password change and Kinto seems to validate it OK.
> 
> When I reset my password, it seems like I get logged out, but when I get logged back in again, my code seems to work OK -- the OAuth client produces new tokens automatically, without my having to flush a cache or anything.

I've added support for handling 401s, which I think addresses what you meant by password changes.
The ExtensionStorageSync object is created using a literal, so it doesn't have a constructor. I've opted for dumping a listener at module scope in extension-storage.js (and pushed a version with this change).

I've removed the call to `promiseSpinningly`. The most recent version just calls the method and ignores the result. My understanding is that this will "fork off" the promise but not block on its creation. That's good enough for my purposes, since this is meant to just be an optimistic re-upload as soon as a change happens. Another harder check is done at the beginning of each sync cycle anyhow.
(In reply to Ethan Glasser-Camp (:glasserc) from comment #303)
> I've added support for handling 401s, which I think addresses what you meant
> by password changes.

I'm not sure exactly where you added this check, but note that a 401 might be an expired oauth token. So what you want is probably something like: 1st 401 == drop cached token and request a new token. If that new token *also* 401s, then you are probably dealing with a changed password.
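Something like this, maybe - a sketch of the retry-once policy, with `requestFn`/`getFreshToken` as stand-ins rather than the real FxA API:

```javascript
// First 401: assume the cached OAuth token merely expired, drop it and
// retry once with a fresh token. A second 401 with a fresh token probably
// means the password changed.
function fetchWithTokenRetry(requestFn, tokenCache, getFreshToken) {
  let response = requestFn(tokenCache.token);
  if (response.status !== 401) {
    return response;
  }
  tokenCache.token = getFreshToken();
  response = requestFn(tokenCache.token);
  if (response.status === 401) {
    throw new Error("auth-failure: password probably changed");
  }
  return response;
}
```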

(In reply to Ethan Glasser-Camp (:glasserc) from comment #305)

> I've removed the call to `promiseSpinningly`. The most recent version just
> calls the method and ignores the result. My understanding is that this will
> "fork off" the promise but not block on its creation. That's good enough for
> my purposes, since this is meant to just be an optimistic re-upload as soon
> as a change happens. Another harder check is done at the beginning of each
> sync cycle anyhow.

That's fine as long as nothing bad can happen if multiple of these are running at the same time. IOW, you probably want to ensure your code isn't racy. IF races *are* a problem, storing the promise would help prevent that.
(In reply to Mark Hammond [:markh] from comment #306)
> I'm not sure exactly where you added this check, but note that a 401 might
> be an expired oauth token. So what you want is probably something like: 1st
> 401 == drop cached token and request a new token. If that new token *also*
> 401s, then you are probably dealing with a changed password.

It's in https://reviewboard.mozilla.org/r/71650/diff/12#index_header. I'd like to link to the exact line but I don't know how to do that with Mozreview. Just search for 401 and you should be able to find it.

> (In reply to Ethan Glasser-Camp (:glasserc) from comment #305)
> 
> That's fine as long as nothing bad can happen if multiple of these are
> running at the same time. IOW, you probably want to ensure your code isn't
> racy. IF races *are* a problem, storing the promise would help prevent that.

Oh, yeah, I guess that could be a problem.. I'll store the promise like you suggest.
(In reply to Mark Hammond [:markh] from comment #306)

> I'm not sure exactly where you added this check, but note that a 401 might
> be an expired oauth token. So what you want is probably something like: 1st
> 401 == drop cached token and request a new token. If that new token *also*
> 401s, then you are probably dealing with a changed password.

This reminds me very much of Old Sync's node reassignment code. One 401 is probably a node reassignment; another one likely means our password changed. The more things change, the more they stay the same…



> That's fine as long as nothing bad can happen if multiple of these are
> running at the same time. IOW, you probably want to ensure your code isn't
> racy. IF races *are* a problem, storing the promise would help prevent that.

Note that you likely also want a .catch on that promise, even if only to log. Otherwise you'll drop an unhandled promise exception into the event loop, and Firefox will complain at you.
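For instance, the fire-and-forget call could look something like this (a sketch; `log` here is a stand-in, not Sync's actual logger):

```javascript
const logged = [];
const log = {warn: msg => logged.push(msg)};

// Kick off the keyring check without blocking, but attach a .catch so a
// failure is logged instead of becoming an unhandled promise rejection.
function kickOffKeyRingCheck(checkSyncKeyRing) {
  return checkSyncKeyRing().catch(e => {
    log.warn("Failed to check sync keyring: " + e.message);
  });
}
```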
Status: NEW → ASSIGNED
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review78154

LGTM. Suggestion for simplifying below.

::: services/sync/modules/engines/extension-storage.js:130
(Diff revision 10)
> + * getKeys() to use it.
> + */
> +this.EncryptionRemoteTransformer = function() {
> +};
> +EncryptionRemoteTransformer.prototype = {
> +  _fxaService: fxAccounts,   // you can inject this

`_fxaService` is only used by your overridden method.

Why not have `EncryptionRemoteTransformer` take `function getKeys()` as a constructor argument, and when you instantiate your transformer, capture the FxA service in the `getKeys` you provide?

That is: don't make this an abstract base class; instead, use composition. No injection, no overriding, no "override this" exception…

::: services/sync/tests/unit/test_extension_storage_crypto.js:45
(Diff revision 10)
> +/**
> + * An EncryptionRemoteTransformer that uses a fixed key bundle,
> + * suitable for testing.
> + */
> +function StaticKeyEncryptionRemoteTransformer(keyBundle) {
> +  EncryptionRemoteTransformer.call(this);

Then this can turn into:

```
EncryptionRemoteTransformer.call(this, () => Promise.resolve(keyBundle));
```

or, indeed, we can throw away this class altogether and just:

```
const transformer = new EncryptionRemoteTransformer(() => Promise.resolve(keyBundle));
```

which is nice and simple, no?
Attachment #8784658 - Flags: review?(rnewman) → review+
Comment on attachment 8773416 [details]
Bug 1253740 - Hash extension ID to obfuscate installed add-ons,

Passing this over to Kris for his review
Attachment #8773416 - Flags: review?(kmaglione+bmo)
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

Passing this over to Kris for his review.
Attachment #8781178 - Flags: review?(kmaglione+bmo)
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review78184

That'll do for commentary from me, I think; markh knows this lifecycle better than me, so over to him.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:500
(Diff revision 7)
> +
> +    let kBbytes = CommonUtils.hexToBytes(signedInUser.kB);
> +    let hasher = Cc["@mozilla.org/security/hash;1"]
> +                    .createInstance(Ci.nsICryptoHash);
> +    hasher.init(hasher.SHA256);
> +    return CommonUtils.bytesAsHex(CryptoUtils.digestBytes(kBbytes, hasher));

No harm in salting this with something unique-per-user (e.g., the user's identifier itself).

::: toolkit/components/extensions/ExtensionStorageSync.jsm:555
(Diff revision 7)
> +      // extensions later, we'll pick this up on a subsequent sync.
> +      log.info("Tried to check keyring, but no extensions are loaded. Ignoring.");
> +      return;
> +    }
> +
> +    yield this.checkKBChanged();

You're not looking at the return value!

::: toolkit/components/extensions/ExtensionStorageSync.jsm:582
(Diff revision 7)
> +      // uploading our key won't erase keys that have already been
> +      // used on remote data. If this happens, hopefully the user will
> +      // relogin so that all devices have a consistent kB; this will
> +      // ensure that the most recent version on the server is
> +      // encrypted with the same kB that other devices have, and they
> +      // will sync the keyring successfully on subsequent syncs.

This feels like you're getting into the world we were in in old sync -- grep the codebases for "HMAC mismatch".
Attachment #8787767 - Flags: review?(rnewman)
Looking more at this, I think I might be safe from race conditions. The only thing I'd be worried about is clobbering keys, but I don't think that can happen. `checkSyncKeyRing` just does a call to update, which happens in a transaction. The "ensure keys" function isn't in a transaction, so it could grab the keyring and wipe out an update made by `checkSyncKeyRing`, but if that happens, it will mark the record as needing sync, which was the goal of `checkSyncKeyRing` anyhow.

I'll put a catch() on the promise.
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review78154

> `_fxaService` is only used by your overridden method.
> 
> Why not have `EncryptionRemoteTransformer` take `function getKeys()` as a constructor argument, and when you instantiate your transformer, capture the FxA service in the `getKeys` you provide?
> 
> That is: don't make this an abstract base class; instead, use composition. No injection, no overriding, no "override this" exception…

I like the idea behind this design, but I don't understand how I could use it in tests. Right now, I can update this `_fxaService` to make it seem like a certain user is logged in.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review77486

> As a high-level comment, I feel like this is being complicated by the fact that you're using Kinto's transformers to apply the encryption magically.
> 
> The question you really want to answer is, "what kB was used to encrypt the data that's currently stored, and does it differ from the one I currently have?".  Can you answer that by inspecting what's actually stored in Kinto, rather than keeping track of it in code here?
> 
> For example, could you store kbHash as a plaintext attribute on your keyring record?  That would let you inspect it directly, and also avoid having to maintain a synthetic "passwordChanged" attribute to force Kinto to re-sync it.

The most recent version of the patch takes this approach. I shied away at first because I felt weird storing password-derived data on a server, but it does simplify this code a certain amount.

> If a user is unfortunate enough to get into this state, how do they get themselves out of it?

I've added a comment explaining what I hope will happen (the user will use the new password on all their devices and whichever one had synced last will successfully push a version of their record encrypted with the same kB that all the other devices had, so then they can pull and decrypt successfully).
Comment on attachment 8784658 [details]
Bug 1253740 - Add crypto, including tests,

https://reviewboard.mozilla.org/r/74010/#review78154

> I like the idea behind this design, but I don't understand how I could use it in tests. Right now, I can update this `_fxaService` to make it seem like a certain user is logged in.

I spoke with rnewman on IRC and I believe that he's comfortable with this approach. He hadn't seen the tests, which are in the next commit (look for `withSignedInUser`). I'm marking these as fixed.
Comment on attachment 8773416 [details]
Bug 1253740 - Hash extension ID to obfuscate installed add-ons,

https://reviewboard.mozilla.org/r/66120/#review78834

::: toolkit/components/extensions/ExtensionStorageSync.jsm:297
(Diff revision 14)
>                                             "identity.mozilla.com/picl/v1/chrome.storage.sync.collectionIds", 2 * 32);
>    let converter = Cc["@mozilla.org/intl/scriptableunicodeconverter"]
>                      .createInstance(Ci.nsIScriptableUnicodeConverter);
>    converter.charset = "UTF-8";
>    // Data is an array of bytes.
>    let data = converter.convertToByteArray(userFingerprint + extensionId, {});

`nsIScriptableUnicodeConverter` is pretty gross... Would be better to use something like `Array.from(new TextEncoder().encode(data))`
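The suggested replacement behaves like this (runnable here under Node, where `TextEncoder` is also a global):

```javascript
// TextEncoder always produces UTF-8, so there's no charset property to set;
// Array.from turns the resulting Uint8Array into the plain array of byte
// values that convertToByteArray used to return.
function toByteArray(str) {
  return Array.from(new TextEncoder().encode(str));
}
```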
Attachment #8773416 - Flags: review?(kmaglione+bmo) → review+
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review78836

::: toolkit/components/extensions/ExtensionStorageSync.jsm:296
(Diff revision 14)
> + * @returns {string} A collection ID suitable for use to sync to.
> + */
> +function extensionIdToCollectionId(user, extensionId) {
> +  const userFingerprint = CryptoUtils.hkdf(user.kB, undefined,
> +                                           "identity.mozilla.com/picl/v1/chrome.storage.sync.collectionIds", 2 * 32);
> +  let converter = Cc["@mozilla.org/intl/scriptableunicodeconverter"]

Sorry, from other patch:

`nsIScriptableUnicodeConverter` is pretty gross. It would be better to use something like `let data = new TextEncoder().encode(userFingerprint + extensionId);`
Attachment #8773414 - Flags: review+
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review78974

The code looks fine, but I don't understand the process when we detect the key changes. I don't really need to understand it fully, but I do need to know that someone who does understand it fully can assure me this is indeed the desired behavior, that we don't end up with records encrypted using different (and now lost) keys, and that users aren't going to find themselves in a situation where we permanently throw.

::: services/sync/modules/engines/extension-storage.js:233
(Diff revision 8)
>  };
> +
> +const LoginListener = {
> +  observe(subject, topic, data) {
> +    return ExtensionStorageSync.checkSyncKeyRing().catch(err => {
> +      Cu.reportError("Syncing the keyring failed");

You should probably log.error here too so users who upload a sync error log but don't look at the console have it recorded for them.

`log.error("Syncing the keyring failed", err)`

will log the error with a stack etc.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:516
(Diff revision 8)
> +      // logged in.
> +      //
> +      // If we aren't logged in, we don't have any information about
> +      // the user's kB, so we can't be sure that the user changed
> +      // their kB, so just return false.
> +      return false;

return false looks strange here - we return undefined in the other case and the result isn't used - so plain `return;` seems better.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:571
(Diff revision 8)
> +      // uploading our key won't erase keys that have already been
> +      // used on remote data. If this happens, hopefully the user will
> +      // relogin so that all devices have a consistent kB; this will
> +      // ensure that the most recent version on the server is
> +      // encrypted with the same kB that other devices have, and they
> +      // will sync the keyring successfully on subsequent syncs.

I don't understand what's going on here and how the user gets out of this state? ie, what happens when we find a record uploaded with a different password - but it is the old password? Much like Sync itself, I expect you really want to arrange that (a) all records use the same key and (b) this implies re-uploading all records when that key changes.

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:269
(Diff revision 8)
> +        ok(body.keys);
> +        ok(body.keys.default);
> +        deepEqual(body.keys.collections[extensionId], extensionKey);
> +      });
> +
> +      // Verify that with the old kB, we can't decrypt the record.

similarly to the above - the old kB can't decrypt the new record, but ISTM the new kB can't decrypt old records.
Attachment #8787767 - Flags: review?(markh)
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review78984

::: toolkit/components/extensions/ExtensionStorageSync.jsm:295
(Diff revision 14)
> + * @param {string} extensionId The extension ID to obfuscate.
> + * @returns {string} A collection ID suitable for use to sync to.
> + */
> +function extensionIdToCollectionId(user, extensionId) {
> +  const userFingerprint = CryptoUtils.hkdf(user.kB, undefined,
> +                                           "identity.mozilla.com/picl/v1/chrome.storage.sync.collectionIds", 2 * 32);

When kB changes, all the collectionIds will change, right?  How do you purge all the old collectionIds when this happens?
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review78988

::: toolkit/components/extensions/ExtensionStorageSync.jsm:571
(Diff revision 8)
> +      // uploading our key won't erase keys that have already been
> +      // used on remote data. If this happens, hopefully the user will
> +      // relogin so that all devices have a consistent kB; this will
> +      // ensure that the most recent version on the server is
> +      // encrypted with the same kB that other devices have, and they
> +      // will sync the keyring successfully on subsequent syncs.

I'm trying to think through the edge-cases here.  What happens if I have one device syncing addons and populating the server, but I lose it.  It leaves a keys record on the server encrypted with kB #1.

I get a new device, and login so I can get all my settings back.  But I can't remember my FxA password, so I reset it.  So the new device goes to sync for the first time with kB #2.  But the record on the server is encrypted with kB #1.  What happens?

IMHO this code really needs to be accompanied by a prose description of how the crypto is structured, and maybe even some diagrams.  A link to a wiki page would probably be better than inline in the code, but  having it available for reference somewhere will make it much easier to think through all the possibilities.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review78988

> I'm trying to think through the edge-cases here.  What happens if I have one device syncing addons and populating the server, but I lose it.  It leaves a keys record on the server encrypted with kB #1.
> 
> I get a new device, and login so I can get all my settings back.  But I can't remember my FxA password, so I reset it.  So the new device goes to sync for the first time with kB #2.  But the record on the server is encrypted with kB #1.  What happens?
> 
> IMHO this code really needs to be accompanied by a prose description of how the crypto is structured, and maybe even some diagrams.  A link to a wiki page would probably be better than inline in the code, but  having it available for reference somewhere will make it much easier to think through all the possibilities.

:rnewman had a similar critique. I've added some prose to https://wiki.mozilla.org/CloudServices/Sync/ExtensionStorage_Design_Doc . I'm not sure what diagrams to draw but I'd be happy to draw them.

As I think you've intuited, what happens in this case is that the device refuses to sync forever. Per :markh's comments, I guess this isn't acceptable, but I'm not 100% sure how to fix it either. It seems like I have to eventually decide at what point to lose data or whether we throw forever. Even once we decide to lose data, I'm not sure how best to choose what data to lose. For example, in this case, if the first device has synced data for some extension, we won't know that until we add that same extension on the new device and fail to sync it. Because that can happen a long time after we finally give up on the crypto record, we can't easily associate the two events. So I guess we'd have to treat any failed sync as potentially the beginning of a process that ends in throwing away data.

As a result of thinking through all the stuff in the design doc, I've decided to get rid of the "optimistic" resync of kB on login. Having two "threads" potentially touching the keyring at once seems scary to me. My next push will include this change but in the meantime, if you have any thoughts on the wiki page I wrote, feel free to mention them.
(FWIW, and despite my previous comment about "thinking through the edge-cases", the sequence of "syncing on an old device, throw it away and get a new one, reset password to login on new device" is quite common user behavior, and definitely not an edge-case.)

> if the first device has synced data for some extension, we won't know that until we add that same
> extension in the new device and fail to sync it.

It's not obvious why it will fail to sync in this case.  Does it fail at pulling down the synced data because it doesn't have the collection key for that extension?

It's often useful to explicitly enumerate the states your device can be in, so here's a rough attempt. With respect to the keyring, I see the following states:

1) It's encrypted with an older kB than the one I have
2) It's encrypted with a newer kB than the one I have
-) It's encrypted with the same kB that I have, and:
    3) I have addons whose collectionKeys are not in the keyring
    4) All addons that I have installed have a collectionKey in the keyring

Are there others?  I guess maybe:

    5) It contains collectionKeys conflicting with the ones I think we should be using

Is there something sensible the device can do in each state?  Maybe:

1) Re-upload the keyring using my current set of collectionKeys and the new kB.  Other devices move to either (2), (3) or (4)
2) Error out, and wait for the browser to discover it needs to re-auth.  Other devices are unchanged.
3) Re-upload with the missing collectionKeys.  Other devices may move from (3) to (4)
4) The happy state, there's no action required

Not sure what to do about (5) though.  And it's all predicated on being able to distinguish between each of these states.
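As a thought experiment, distinguishing states (1)/(2) from (3)/(4) above might look like this; the record shape (`kbHash`, `keys.collections`) and the state labels are assumptions of the sketch, not the shipped data model:

```javascript
// Illustrative classifier for the keyring states enumerated above.
function classifyKeyring(localKBHash, remoteKeyring, installedExtensionIds) {
  if (remoteKeyring.kbHash !== localKBHash) {
    // Can't tell "older" from "newer" from the hash alone; later comments
    // resolve this by asking FxA whether the local kB is still current.
    return "kb-mismatch";          // state 1 or 2
  }
  const missing = installedExtensionIds.filter(
    id => !(id in remoteKeyring.keys.collections));
  return missing.length ? "missing-keys" : "in-sync";  // state 3 : state 4
}
```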
(In reply to Ryan Kelly [:rfkelly] from comment #324)
> It's not obvious why it will fail to sync in this case.  Does it fail at
> pulling down the synced data because it doesn't have the collection key for
> that extension?

Yes. 

> It's often useful to explicitly enumerate the states your device can be in,
> so here's a rough attempt.

Thanks, this was helpful. I think I found the problematic state by splitting your state 1) into:

1a) It's encrypted with an older kB than I have, with a subset of the keys that I have
1b) It's encrypted with an older kB than I have, with some keys that I don't have

Also, note that "older" doesn't necessarily mean "a kB that you previously had" -- it just means older in time.

You can also split 2) in this way but I don't know that it helps, since it's even harder to detect what case you're in, since you definitely never had the kB in question.

> Are there others?  I guess maybe:
> 
>     5) It contains a conflicting collectionKeys to the ones I think we
> should be using

I don't think it's possible to get into state 5 at the moment (at least, not assuming that we're in the "encrypted with the same kB" case). During the transition from 3 to 4, we upload a new keyring with an "if-match" sort of behavior, which means only one client can successfully upload a keyring. Others will instead replace their keyring with the one that was uploaded. So nobody should be able to upload a key that replaces the existing key for that collection.

> 1) Re-upload the keyring using my current set of collectionKeys and the new
> kB.  Other devices move to either (2), (3) or (4)

This is only safe in 1a. In 1b, you can't do this because you'll erase keys that were used to encrypt data on the server. If you had access to the kB used to encrypt the record on the server, you can pull the record down, merge keys, and re-upload with the new kB, but that may not be possible (if the only device with the kB was lost).

> 2) Error out, and wait for the browser to discover it needs to re-auth. 
> Other devices are unchanged.

Ah, so does this mean that I shouldn't really worry about this case? My sync code is run after the browser has verified that it has the newest kB, so any error like this is transient. Is that right?

> Not sure what to do about (5) though.  And it's all predicated on being able
> to distinguish between each of these states.

I think the only thing we can ever do for 1b is either wait (and hope that the device which uploaded the keyring that has the most keys will be updated to have the new kB) or discard the server keyring (and all data encrypted with it). I'm assuming I'll have to implement something like this, perhaps after a delay (we've failed to sync for >1hr?).

Here are the states for a collection's key:

A. We don't have a key for this collection.
B. We do have a key for this collection, and it corresponds to the data that is uploaded for this collection.
C. We do have a key, but the collection was encrypted with an older key.
D. We do have a key, but the collection was encrypted with a newer key.
E. We do have a key and the data in the collection is mostly encrypted with this key, but some records were tampered with.

C happens if a device gives up on waiting for the old keyring to get updated with the new kB. I think you can get to D if another device assumes you're dead, overwrites the keyring, and then syncs data to a collection using that key. I don't think there's any way to detect whether you're in states C, D, or E. I think the only thing you can do in any of these cases is log a complaint and drop the record.
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review78984

> When kB changes, all the collectionIds will change, right?  How do you purge all the old collectionIds when this happens?

Per discussion on https://bugzilla.mozilla.org/show_bug.cgi?id=1276933, it seems like I don't need to change collection IDs when kB changes. I've updated this code to use the user's ID as a salt instead, which means collection IDs shouldn't change when a user resets their password.
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review78836

> Sorry, from other patch:
> 
> `nsIScriptableUnicodeConverter` is pretty gross. It would be better to use something like `let data = new TextEncoder().encode(userFingerprint + extensionId);`

Sorry about the grossness. I copied the whole thing from https://dxr.mozilla.org/mozilla-central/source/services/fxaccounts/FxAccountsWebChannel.jsm#377.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review78974

I tried to elaborate on the process per comments from :rfkelly and :rnewman. The result is https://wiki.mozilla.org/CloudServices/Sync/ExtensionStorage_Design_Doc.

The sneaky bit that might be confusing because it isn't done in this code is that kinto.js tracks the synced/not-synced status of every record, and additionally, tries to detect when we make a no-op update. So every time we sync, we tell kinto.js to update the hashed kB that we store in the keyring. If this is a no-op, kinto.js will preserve the old status of the keyring, and its status will be "synced". If the hashed kB that we're storing is different, it will update the record's status to be "updated". Finally, we always try to sync if we find that our keyring is not "synced", because this lets us recover if the sync failed last time around.
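That status behaviour can be modelled in a few lines; this mimics the kinto.js semantics being relied on, it is not the library's actual code:

```javascript
// Tiny model of the kinto.js behaviour relied on above: an update whose
// payload is deep-equal to the stored record keeps _status "synced"; a real
// change flips it to "updated", which is what triggers the next keyring sync.
function upsert(store, id, fields) {
  const existing = store.get(id);
  const next = Object.assign({}, existing, fields);
  if (existing && JSON.stringify(existing) === JSON.stringify(next)) {
    return existing;                       // no-op: status stays as it was
  }
  next._status = existing ? "updated" : "created";
  store.set(id, next);
  return next;
}
```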

In the happy path, the keys in the keyring grow only monotonically. When we try to add a new key to the keyring, we immediately sync using server_wins semantics, so if someone else modified the keyring in the meantime, we will retrieve their keys and throw away the potential key we were about to add. We only sync once this process has successfully terminated, so we will never upload data without a key on the server. This means we never end up with data that's encrypted using lost keys.

However, :rfkelly criticized the way I handle the case where a user loses their only device and uses password reset to log in to a new device. As written right now, the code will throw forever. It seems like I have to decide whether I discard the existing keys (which corresponds to "ending up with records encrypted using lost keys") or to hold out hope that the old device comes back online (which corresponds to "throwing forever"). It seems like after a while, the first option is better than the second one. We encrypt the data with kB so that's already a tacit admission that we can't regain access to data if you have to reset your password, and I've also heard it said that "Sync is not a backup". I'm eager to hear your thoughts, but in the meantime I'll try to implement this, where if we fail to sync the keyring due to decryption failures for, say, two hours, then we give up and overwrite it.

Why doesn't the rest of Sync encounter anything odd like this? I think the reason is that when a password is reset, Sync reassigns the user to a different node, and that means that there's nothing in the old data to conflict with the new data.

It might be possible to have the best of both worlds if the old device comes back online and discovers that the keyring is completely different and tries to reupload whatever data it still has lying around. I will try to take a swing at this too.

> You should probably log.error here too so users who upload a sync error log but don't look at the console have it recorded for them.
> 
> `log.error("Syncing the keyring failed", err)`
> 
> will log the error with a stack etc.

I ended up getting rid of the "on login" check, since it seemed to enhance the likelihood of race conditions.
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review78984

> Per discussion on https://bugzilla.mozilla.org/show_bug.cgi?id=1276933, it seems like I don't need to change collection IDs when kB changes. I've updated this code to use the user's ID as a salt instead, which means collection IDs shouldn't change when a user resets their password.

To elaborate, there is no way as yet to clear out old collections, but I thought the best way would be to write a "garbage collection" script that looks at the last-accessed times of collections and delete any that are older than two weeks.
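Such a garbage-collection pass might look like this; `last_accessed` and `deleteCollection` are hypothetical, since no such script exists yet:

```javascript
// Hypothetical server-side sweep: drop collections untouched for two weeks.
const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;

function collectGarbage(collections, now, deleteCollection) {
  for (const coll of collections) {
    if (now - coll.last_accessed > TWO_WEEKS_MS) {
      deleteCollection(coll.id);   // stale: reclaim it
    }
  }
}
```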
(In reply to Ethan Glasser-Camp (:glasserc) from comment #332)

> Why doesn't the rest of Sync encounter anything odd like this? I think the
> reason is that when a password is reset, Sync reassigns the user to a
> different node, and that means that there's nothing in the old data to
> conflict with the new data.

Yes: if kB changes, X-Client-State changes, and the user will be assigned a new node (and user ID, I think).

We used to hit problems like this: Bug 646269 is the end of the sweater you'll unravel. See Comment 7 for the conclusion that describes the two fixes.

Sync assumes that one of the clients will eventually have every piece of data that's on the server, so it's always fine to reassign or force a reupload -- the same thing happens if the storage format changes.

Durability was not a design goal, so even though we've got closer to that, the iceberg is still visible.
>  I don't need to change collection IDs when kB changes. I've updated this code to use the user's
> ID as a salt instead, which means collection IDs shouldn't change when a user resets their password.

Another option would be to derive a salt from the collection's collectionKey.  This would keep the collection ID stable as long as you have successfully kept track of the collectionKey, but would reset the collection ID if you ever lose it due to circumstances like Comment 325.  This could make it easier to handle the "we've lost the keys to that collection and there's nothing we can do about it" case - you just generate a fresh key and start again, leaving garbage behind in the old collection ID for eventual collection.

This will almost give you the same behaviour as sync, starting afresh whenever your encryption keys change, except at the individual collection level.
> In 1b, you can't do this because you'll erase keys that were used to encrypt data on the server.

*nod*.  I don't have a good solution to suggest but I'm glad to have a clear view of where the problem is :-)

Sync would solve this case by just going ahead and erasing the keys, uploading new ones, and letting the other device figure it out when it comes back online.  In other words, it would eliminate (1b) at the cost of introducing my state (5) from Comment 324.  The tokenserver's re-assign-you-on-key-change behaviour helps it to do this cleanly.
I spoke at length with :rnewman on IRC. In light of :rfkelly's comments about being able to detect what state we're in, he points out that if the key is "newer" than the one we have, we can detect that by asking FxA if our kB is still current. If it is, then we are in the case where we have the newest kB. The only path to recovery here is for the device that has the existing-kB keyring to come back online, which is pretty rare, so it's OK if we have to reupload everything in that case. So what he suggests is that if we grab a keyring and it doesn't have the same kB as we have, AND FxA says our kB is valid, then we delete everything off the Kinto server and upload a new keyring.

Additionally, we have to notice when we are on the machine that just had its keyring replaced and is then brought up to date. In this case, we need to resync everything.

I'm going to try to implement this. To detect that the keyring was replaced, I'm going to have to use some kind of identifier on the keyring (and then it will be obvious because the old identifier is different from the new one). I could use the "default" key on the keyring, but that seems like it's being too clever.
:rfkelly, can you recommend an FxA API that I can use to check whether kB is still valid or if the password got reset on another device? Or is there some documentation that I failed to read? :)
Flags: needinfo?(rfkelly)
There is no direct "is this kB still valid?" API, but you can check whether the browser's sessionToken is still valid by calling this:

  https://dxr.mozilla.org/mozilla-central/rev/66a77b9bfe5dcacd50eccf85de7c0e7e15ce0ffd/services/fxaccounts/FxAccounts.jsm#828

If your sessionToken is still valid, then you can be sure that the corresponding keys are also still valid.  (barring bugs in the way that the browser manages them, that is).

Under what circumstances would you need to perform this check?  I'm hoping it's less often than "every sync" which would produce a huge amount of new traffic to the FxA servers.
Flags: needinfo?(rfkelly) → needinfo?(eglassercamp)
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review80514

The code itself looks fine, I'm just a little concerned about the behaviour and key management. It's not clear to me yet what limitations exist, and which of those limitations are considered acceptable for a first cut. Eg, https://wiki.mozilla.org/CloudServices/Sync/ExtensionStorage_Design_Doc mentions "*I'm not sure what the right fix here is.*" for this case - I'd expect we need to know the answer to that before landing? Or maybe even just telemetry to help measure how much of a problem it is in practice?

It would also be great if you could update that doc to state how the keys are managed on the local device. I'm trying to make sure that the per-collection keys aren't available locally after a password reset.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:270
(Diff revision 9)
>    const coll = db.collection(collectionId, {
>      idSchema: storageSyncIdSchema,
>      remoteTransformers: [new CollectionKeyEncryptionRemoteTransformer(extension.id)],
>    });
>    yield coll.db.open();
>    if (!cryptoCollectionPromise) {

I'm not sure how this is called, but there seems a risk this code (or getCollection) could race with closeExtensionCollection - that cryptoCollectionPromise could be cleared in between the check here and when the promise is added to extensionContexts?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:441
(Diff revision 9)
>      const newKeys = yield collectionKeys.ensureKeysFor(extIds);
>      const cryptoCollection = yield cryptoCollectionPromise;
>      const newRecord = {
>        id: STORAGE_SYNC_CRYPTO_KEYRING_RECORD_ID,
>        keys: newKeys.asWBO().cleartext,
> +      // Add a field for the current kB hash.

I'm trying to understand some of this better - it looks like this is being added to the local kinto DB, which means it is locally unencrypted?

::: toolkit/components/extensions/ExtensionStorageSync.jsm:474
(Diff revision 9)
> +   *
> +   * @returns sha256 of the user's kB as a hex string
> +   */
> +  getKBHash: Task.async(function* () {
> +    const signedInUser = yield this._fxaService.getSignedInUser();
> +    if (!signedInUser) {

I'd think we should throw here? I doubt we ever want to try and put undefined as kbHash in the crypto collection.

::: toolkit/components/extensions/ExtensionStorageSync.jsm:557
(Diff revision 9)
> +      // uploading our key won't erase keys that have already been
> +      // used on remote data. If this happens, hopefully the user will
> +      // relogin so that all devices have a consistent kB; this will
> +      // ensure that the most recent version on the server is
> +      // encrypted with the same kB that other devices have, and they
> +      // will sync the keyring successfully on subsequent syncs.

I still don't quite understand how the user gets out of this state. Assuming the last version written to the server is using a now stale kB, at some point, some client is going to need to overwrite this to get unstuck. How does that happen?

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:232
(Diff revision 9)
>          notEqual(body.keys.collections[extensionId], RANDOM_KEY.keyPairB64);
>        });
>      });
>    });
>  });
>  

it seems that we probably want tests that demonstrate how some of these use-cases interact - ie, simulating the recovery after a keychange.
Attachment #8787767 - Flags: review?(markh)
Another possibility might be to default services.sync.engine.extension-storage to false, and have a followup bug to address some of this before we default it back to true. That would be enough for it to land IMO and have people play with it. I'm off for the next few days, but that last patch is r=me with that or a similar change.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review81224

These new changes look fine, but IIUC they don't address any of the previous concerns.
Attachment #8787767 - Flags: review?(markh)
I've pushed another series that sketches out some of the proposal that :rnewman made to me last week. There are still a few very obvious "FIXME" comments for stuff that isn't implemented yet, and I haven't addressed :markh's most recent feedback, but hopefully this will give everyone a chance to tell me how I should split up the commits or otherwise offer early feedback. Right now there's a checkSyncKeyRing commit that adds stuff to make sure the keyring is reuploaded when kB changes, and there's a WIP commit that adds stuff to add a UUID to the keyring and detect changed UUIDs/outdated kBHashes and handle them appropriately. This won't work without some upstream changes to Kinto.js, which is also undergoing some changes at the moment.

:rfkelly, no, this wouldn't happen every sync. This would only happen on password resets, and then it would happen once per device. Hopefully that's rare enough that it wouldn't be a problem?
Flags: needinfo?(eglassercamp)
> This would only happen on password resets, and then it would happen once per device.
> Hopefully that's rare enough that it wouldn't be a problem?

That sounds fine to me.
Attachment #8797020 - Flags: review?(markh)
Summary: [kinto] Port over the code from the prototype → [kinto] Implement storage.sync
Comment on attachment 8773414 [details]
Bug 1253740 - Introduce extensionIdToCollectionId,

https://reviewboard.mozilla.org/r/66116/#review81508
Attachment #8773414 - Flags: review+
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review81512

::: toolkit/components/extensions/ExtensionStorageSync.jsm:179
(Diff revision 14)
> +function CollectionKeyEncryptionRemoteTransformer(extensionId) {
> +  EncryptionRemoteTransformer.call(this);
> +  this.extensionId = extensionId;
> +}
> +CollectionKeyEncryptionRemoteTransformer.prototype = {
> +  __proto__: EncryptionRemoteTransformer.prototype,
> +  getKeys: Task.async(function* () {
> +    // FIXME: cache the crypto record for the duration of a sync cycle?
> +    const collectionKeys = yield getKeyRing();
> +    if (!collectionKeys.hasKeysFor([this.extensionId])) {
> +      // This should never happen. Keys should be created (and
> +      // synced) at the beginning of the sync cycle.
> +      throw new Error(`tried to encrypt records for ${this.extensionId}, but key is not present`);
> +    }
> +    return collectionKeys.keyForCollection(this.extensionId);
> +  }),
> +};

Please use an ES6 class for this.
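
For reference, a rough sketch of the ES6-class form (the base class and `getKeyRing` are stubbed here for illustration; the real module's definitions differ):

```javascript
// Stub base class and keyring, for illustration only.
class EncryptionRemoteTransformer {}

async function getKeyRing() {
  return {
    hasKeysFor: ids => ids.every(id => id === "ext-1"),
    keyForCollection: id => `key-for-${id}`,
  };
}

class CollectionKeyEncryptionRemoteTransformer extends EncryptionRemoteTransformer {
  constructor(extensionId) {
    super();
    this.extensionId = extensionId;
  }

  async getKeys() {
    const collectionKeys = await getKeyRing();
    if (!collectionKeys.hasKeysFor([this.extensionId])) {
      // This should never happen: keys are created (and synced)
      // at the beginning of the sync cycle.
      throw new Error(
        `tried to encrypt records for ${this.extensionId}, but key is not present`);
    }
    return collectionKeys.keyForCollection(this.extensionId);
  }
}
```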

::: toolkit/components/extensions/ExtensionStorageSync.jsm:291
(Diff revision 14)
> +  _fxaService: fxAccounts,
>    listeners: new WeakMap(),
>  
> +  syncAll: Task.async(function* () {
> +    const extensions = collectionPromises.keys();
> +    const extIds = [];

`Array.from(extensions, extension => extension.id)`

::: toolkit/components/extensions/ExtensionStorageSync.jsm:302
(Diff revision 14)
> +    for (let [extension, collPromise] of collectionPromises.entries()) {
> +      const coll = yield collPromise;
> +      yield this.sync(extension, coll);
> +    }

It seems like we should be able to do most of this in parallel:

    let promises = Array.from(collectionPromises.entries(),
                              ([extension, collPromise]) => {
      return collPromise.then(coll => {
        return this.sync(extension, coll);
      });
    });
    
    yield Promise.all(promises);

::: toolkit/components/extensions/ExtensionStorageSync.jsm:329
(Diff revision 14)
> +      log.warn("Syncing failed", err);
> +      throw err;
> +    }
> +
> +    let changes = {};
> +    syncResults.created.map(record => {

Please don't use `.map()` when you don't need the resulting array. Either `.forEach()` or (preferably) a for-of loop would be better.

Same goes for the code below.

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:13
(Diff revision 14)
> +  keyToId,
> +  idToKey,
> +  CollectionKeyEncryptionRemoteTransformer

Please keep sorted, and add a comma at the end of the last line (ESLint will insist on this).

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:78
(Diff revision 14)
> +        ok(newKeys.hasKeysFor([extensionId]));
> +
> +        let posts = server.getPosts();
> +        equal(posts.length, 1);
> +        const post = posts[0];
> +        equal(post.headers.Authorization, "Bearer some-access-token");
> +        equal(post.headers["If-None-Match"], "*");
> +        equal(post.path, collectionRecordsPath("storage-sync-crypto") + "/keys");
> +
> +        let body = yield new KeyRingEncryptionRemoteTransformer().decode(post.body.data);
> +        ok(body.keys);
> +        ok(body.keys.default);
> +        ok(body.keys.collections[extensionId]);

Please add descriptions for all assertions.

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:332
(Diff revision 14)
> +      });
> +    });
> +  });
> +});
> +
> +function* withServer(f) {

ESLint requires that identifiers not be used earlier than they're declared.

::: toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:333
(Diff revision 14)
> +    });
> +  });
> +});
> +
> +function* withServer(f) {
> +  let s = new KintoServer();

Please no one-letter variable names.
Attachment #8781178 - Flags: review+
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review80514

I've updated the docs with the behavior of the most recent branch. Although this branch should handle more different situations gracefully, I think there is still at least one in which it can get stuck failing to sync forever. I have no idea what limitations are considered acceptable for a first cut, but based on the reviews I got for this bug, I assumed the answer was "no limitations are acceptable". :)

All data is stored "in the clear" on the device itself. Encryption happens during syncing. That means that yes, per-collection keys are available locally after a password reset. My understanding was that trying to keep data encrypted on the device itself didn't buy us much in the way of security.

> I'm not sure how this is called, but there seems a risk this code (or getCollection) could race with closeExtensionCollection - that cryptoCollectionPromise could be cleared in between the check here and when the promise is added to extensionContexts?

Hmm, let's see. This promise is created on the first time someone tries to get the collection for an extension, and the promise is stored in `collectionPromises`. We don't yield to anything, so I think it's not possible to come between the creation of the promise and its storage in `collectionPromises`. But I guess it's possible for the asynchronous code that fulfills the promise to race with the code that tears down `collectionPromises`. I think it might even be possible to have the last cleanup happen before the asynchronous code opens the crypto collection in the first place. I'm not sure how best to fix this. I was originally trying to keep these collections open as long as possible as an optimization, but maybe it's a premature optimization and I should just open/close them when I need them? Or is there some clever way I can share global resources like this and still ensure they get cleaned up correctly?

> I'm trying to understand some of this better - it looks like this is being added to the local kinto DB, which means it is locally unencrypted?

Yes, that's correct.

> I'd think we should throw here? I doubt we ever want to try and put undefined as kbHash in the crypto collection.

Good catch!

> I still don't quite understand how the user gets out of this state. Assuming the last version written to the server is using a now stale kB, at some point, some client is going to need to overwrite this to get unstuck. How does that happen?

The most recent patch series includes an implementation of the design that :rnewman gave me, where as soon as we try to sync and pull down a keyring written using a stale kB, we give up on the data on the server, wipe the entire Kinto installation, and start over, reuploading any data we have. I put that in the next patch, but I'm open to rearranging the patch series according to what you think is best.

> it seems that we probably want tests that demonstrate how some of these use-cases interact - ie, simulating the recovery after a keychange.

The next patch adds some tests to ensure that we can start over, and that other devices handle it correctly when start-over happens underneath them. Are there other tests you'd like to see?
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

https://reviewboard.mozilla.org/r/71650/#review81512

> `Array.from(extensions, extension => extension.id)`

Thanks, I didn't know about `Array.from(foo, someFunction)`

> Please keep sorted, and add a comma at the end of the last line (ESLint will insist on this).

Sorry, I didn't realize that eslint was run on test files too.
(In reply to Ethan Glasser-Camp (:glasserc) from comment #354)
> > I'm not sure how this is called, but there seems a risk this code (or getCollection) could race with closeExtensionCollection - that cryptoCollectionPromise could be cleared in between the check here and when the promise is added to extensionContexts?
> 
> Hmm, let's see. This promise is created on the first time someone tries to
> get the collection for an extension, and the promise is stored in
> `collectionPromises`. We don't yield to anything, so I think it's not
> possible to come between the creation of the promise and its storage in
> `collectionPromises`. But I guess it's possible for the asynchronous code
> that fulfills the promise to race with the code that tears down
> `collectionPromises`.

Yeah. closeExtensionCollection has:

>  // If there's nobody left, clean up the cryptoCollectionPromise.
>  if (collectionPromises.size == 0) {
>    const cryptoCollection = yield cryptoCollectionPromise;
>    yield cryptoCollection.db.close();
>    cryptoCollectionPromise = null;

so there are 2 yields between checking the size is zero and setting the promise to null. ISTM there's a chance that collectionPromises.size will transition to non-zero during these, which might cause bad things to happen. The simplest approach might be to simply not clean up the crypto collection in this case - allow it to remain alive until the browser is restarted - it seems an edge case with fairly minimal overhead when it does happen. I guess it's possible that might cause tests to fail reporting leaks, in which case you might be able to listen for a shutdown notification and clean up then.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review82110

I forgot that webextensions.storage.sync.enabled defaults to false, so I think this is fine to land, allowing some of these edge-cases to be shaken out before the pref is flipped to true. I spent some time trying to test it, but sadly failed - which will also become much simpler once it lands. I think the new changes to the tests are good, thanks!
Attachment #8787767 - Flags: review?(markh) → review+
Comment on attachment 8797020 [details]
Bug 1253740 - Handle password resets more correctly,

https://reviewboard.mozilla.org/r/82634/#review82398

This looks fine too. Sorry for the delay in these reviews, but it's taken quite some time to get my head around things.

::: services/sync/modules/engines/extension-storage.js:264
(Diff revision 2)
> +    });
> +  }
> +
> +  // Generator and discriminator for KB-is-outdated exceptions.
> +  static throwOutdatedKB(shouldBe, is) {
> +    throw "kB hash on record is outdated: should be " + shouldBe + ", is " + is

We should probably |throw new Error(...)| - I know Sync still has a number of places where strings are thrown, but we are trying to move away from that.
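
For example (a standalone sketch, not the patch itself), throwing an Error instance gives callers a stack trace and a `.message` property, neither of which a bare thrown string provides:

```javascript
// Sketch of the suggested fix: throw an Error object rather than a
// plain string.
function throwOutdatedKB(shouldBe, is) {
  throw new Error(`kB hash on record is outdated: should be ${shouldBe}, is ${is}`);
}

try {
  throwOutdatedKB("abc123", "def456");
} catch (e) {
  // e instanceof Error holds, and e.stack points at the throw site;
  // with |throw "..."| the catch receives a bare string instead.
}
```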
Attachment #8797020 - Flags: review?(markh) → review+
Component: WebExtensions: Untriaged → General
Priority: -- → P2
To try to address the re-entrancy concerns in https://reviewboard.mozilla.org/r/76460/#comment104158, I thought it might be possible to get rid of the `cryptoCollection` variable and just open it for the duration of a single sync (and then close it). This approach ran into problems because the remote transformers that do encryption for a single collection are instantiated before we do any syncs, and they need access to the crypto collection.

So I fell back to having a global. I thought I could have a global that gets "turned on" at the beginning of a sync process, and then "turned off" again at the end. However, it seemed like I could get into trouble if two sync processes ran at the same time.

So I fell back again to having a global crypto collection that essentially gets reference counted. I extracted this reference-counting/open-and-closing behavior into a global `cryptoCollection` object. I'm not really sure that this "solves" any of the re-entrancy concerns but it feels cleaner.

I've just pushed this series for review. :kmag, could you please let me know if you prefer this or would rather I go back to the old version?
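
As a rough standalone sketch (assumptions only; the real `cryptoCollection` object in the patch differs), the reference-counting scheme looks something like this:

```javascript
// Minimal illustration of a reference-counted async resource: opened
// on first acquire, closed when the last holder releases it.
class RefCountedResource {
  constructor(openFn, closeFn) {
    this._openFn = openFn;
    this._closeFn = closeFn;
    this._count = 0;
    this._promise = null;
  }

  acquire() {
    if (this._count++ === 0) {
      this._promise = Promise.resolve(this._openFn());
    }
    return this._promise;
  }

  async release() {
    if (--this._count === 0) {
      // Clear the cached promise *before* awaiting close, so a
      // concurrent acquire() can't observe a half-torn-down state.
      const promise = this._promise;
      this._promise = null;
      const resource = await promise;
      await this._closeFn(resource);
    }
  }
}
```

(This sketch assumes every release() is paired with a prior acquire(); unbalanced calls would drive the count negative.)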
Flags: needinfo?(kmaglione+bmo)
New version that also DRYs up some of the test code to make things a little more semantic and higher-level. Also, fix a bunch of eslint errors. There are still a handful of fixes and tweaks I need to make.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review78974

> I don't understand what's going on here and how the user gets out of this state? ie, what happens when we find a record uploaded with a different password - but it is the old password? Much like Sync itself, I expect you really want to arrange that (a) all records use the same key and (b) this implies re-uploading all records when that key changes.

I've updated this documentation to match the algorithm proposed by :rnewman, which is what is currently implemented.
Comment on attachment 8787767 [details]
Bug 1253740 - Define checkSyncKeyRing() which reuploads keys when passwords change,

https://reviewboard.mozilla.org/r/76460/#review80514

> Hmm, let's see. This promise is created on the first time someone tries to get the collection for an extension, and the promise is stored in `collectionPromises`. We don't yield to anything, so I think it's not possible to come between the creation of the promise and its storage in `collectionPromises`. But I guess it's possible for the asynchronous code that fulfills the promise to race with the code that tears down `collectionPromises`. I think it might even be possible to have the last cleanup happen before the asynchronous code opens the crypto collection in the first place. I'm not sure how best to fix this. I was originally trying to keep these collections open as long as possible as an optimization, but maybe it's a premature optimization and I should just open/close them when I need them? Or is there some clever way I can share global resources like this and still ensure they get cleaned up correctly?

The new version of this code manually reference-counts the crypto collection. I think this particular problem got solved by switching the order of the statements around a little bit. I now clear the global variable before yielding to the promise and closing the collection.
Newest, most correct version of this series, which DRYs up some of the "try the request and retry with a new token if we got a 401" logic in ExtensionStorageSync, as well as addressing a final few points that were raised during review. Final review would be great.
No longer depends on: 1277612
Comment on attachment 8773416 [details]
Bug 1253740 - Hash extension ID to obfuscate installed add-ons,

Redirecting the review to Kris
Attachment #8773416 - Flags: review+ → review?(kmaglione+bmo)
Comment on attachment 8781178 [details]
Bug 1253740 - Add code that syncs and tests,

Redirecting review to Kris.
Attachment #8781178 - Flags: review+ → review?(kmaglione+bmo)
Attachment #8757379 - Flags: review?(kmaglione+bmo)
Attachment #8773414 - Flags: review?(kmaglione+bmo)
Attachment #8757379 - Flags: review?(kmaglione+bmo)
Comment on attachment 8757379 [details]
Bug 1253740 - Implement storage.sync,

https://reviewboard.mozilla.org/r/55838/#review88640

lgtm
Attachment #8757379 - Flags: review?(kmaglione+bmo) → review+
The try builds seem to have failed for reasons unrelated to my change. I'm marking this checkin-needed.
Flags: needinfo?(kmaglione+bmo)
Keywords: checkin-needed
Pushed by ryanvm@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/8f100caf82bf
Implement storage.sync, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/eb537ef54d55
Introduce extension-storage engine with a sanity test, r=markh
https://hg.mozilla.org/integration/autoland/rev/28cc5db83380
Add crypto, including tests, r=rnewman
https://hg.mozilla.org/integration/autoland/rev/f890f42b44c4
Add code that syncs and tests, r=bsilverberg,kmag,markh
https://hg.mozilla.org/integration/autoland/rev/5cf17eb2fefe
Introduce extensionIdToCollectionId, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/97a6ee1fddfc
Hash extension ID to obfuscate installed add-ons, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/486a200fd6f0
Define checkSyncKeyRing() which reuploads keys when passwords change, r=markh
https://hg.mozilla.org/integration/autoland/rev/5e234fe1099c
Handle password resets more correctly, r=markh
Keywords: checkin-needed
Oops! Sorry about that, friends. I thought it was an intermittent failure unrelated to what I had changed. Turns out I had added something to the webextension test "head=" property when what I meant was for it to be in the "support-files=" property. Rebased to master and fixed in https://reviewboard.mozilla.org/r/66112/diff/18-19/. Redoing `try`...
I was going to port over this change to async/await, but based on :mconley's suggestion on https://bugzilla.mozilla.org/show_bug.cgi?id=1314176#c6, I'm going to leave it as is for now.

I know it's suspect for me to say this again, but the failed build seemed to be related to build infrastructure, and restarting the Win8 build made it go away.

I've seen an intermittent failure on my own computer running the tests where Sqlite can't open a file. I'm not sure what that's about but it hasn't shown up on the build infrastructure, so I'm assuming it's something specific to my setup.
Keywords: checkin-needed
Pushed by cbook@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/396333163897
Implement storage.sync, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/39e08d903b48
Introduce extension-storage engine with a sanity test, r=markh
https://hg.mozilla.org/integration/autoland/rev/77be2faa619f
Add crypto, including tests, r=rnewman
https://hg.mozilla.org/integration/autoland/rev/d7e4e2a7c0a6
Add code that syncs and tests, r=bsilverberg,kmag,markh
https://hg.mozilla.org/integration/autoland/rev/155b13cc953d
Introduce extensionIdToCollectionId, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/b8bd9da79a5e
Hash extension ID to obfuscate installed add-ons, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/1042e602ba84
Define checkSyncKeyRing() which reuploads keys when passwords change, r=markh
https://hg.mozilla.org/integration/autoland/rev/a73da05c87ba
Handle password resets more correctly, r=markh
Keywords: checkin-needed
Backed out for failing xpcshell /test_ext_storage.js on Android and Linux:

https://hg.mozilla.org/integration/autoland/rev/1114da4aefff8549c75274c2822c5cfc766edfce
https://hg.mozilla.org/integration/autoland/rev/64f9cf45538081ef4461eee114e372a94f603e48
https://hg.mozilla.org/integration/autoland/rev/d1c480f715a7dc444df6d26f81fd8d5580344a83
https://hg.mozilla.org/integration/autoland/rev/1284bbd58ca15fef59ec6e63ac6660a0b87ac4fe
https://hg.mozilla.org/integration/autoland/rev/587333d3f52cbd0188880285d8f7c40e236bece0
https://hg.mozilla.org/integration/autoland/rev/cf4388acc62047224bfd4f154ecce7f7f3807e38
https://hg.mozilla.org/integration/autoland/rev/677187bc8588a5f11ec6f95e40bc5b98b7ee15fd
https://hg.mozilla.org/integration/autoland/rev/289db9779bc846cd11adfb3ceedeec9e44899299

Push with failure: https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=a73da05c87ba4479ab9b9f1c75893469eb8c8521
Failure log Android: https://treeherder.mozilla.org/logviewer.html#?job_id=6013700&repo=autoland
Failure log Linux: https://treeherder.mozilla.org/logviewer.html#?job_id=6022085&repo=autoland

Android
=======

[task 2016-11-02T12:53:35.852312Z] 12:53:35     INFO -  TEST-PASS | toolkit/components/extensions/test/xpcshell/test_ext_storage.js | test_config_flag_needed - [test_config_flag_needed : 101] true == true
[task 2016-11-02T12:53:35.852383Z] 12:53:35     INFO -  "Extension loaded"
[task 2016-11-02T12:53:35.852434Z] 12:53:35     INFO -  (xpcshell/head.js) | test run_next_test 1 finished (2)
[task 2016-11-02T12:53:35.852570Z] 12:53:35     INFO -  PROCESS | toolkit/components/extensions/test/xpcshell/test_ext_storage.js | JavaScript strict warning: chrome://global/content/bindings/browser.xml, line 389: ReferenceError: reference to undefined property tabBrowser.localName
[task 2016-11-02T12:53:35.852699Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (warn) [JavaScript Warning: "ReferenceError: reference to undefined property tabBrowser.localName" {file: "chrome://global/content/bindings/browser.xml" line: 389}]"
[task 2016-11-02T12:53:35.852818Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "princ: moz-extension://0e486b63-3229-460f-8bbc-ccd872dc6e36^addonId=%7Beaf2e578-e4d6-4fa2-9f16-7e8ba058e271%7D" {file: "chrome://extensions/content/ext-c-storage.js" line: 22}]
[task 2016-11-02T12:53:35.852909Z] 12:53:35     INFO -  sanitize@chrome://extensions/content/ext-c-storage.js:22:5
[task 2016-11-02T12:53:35.852982Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.set@chrome://extensions/content/ext-c-storage.js:53:19
[task 2016-11-02T12:53:35.853102Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.853420Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.853751Z] 12:53:35     INFO -  @moz-extension://0e486b63-3229-460f-8bbc-ccd872dc6e36/%7Bfa4f90c2-d743-4f85-9ee6-4e4b1dca94c6%7D.js:10:57
[task 2016-11-02T12:53:35.854085Z] 12:53:35     INFO -  background@moz-extension://0e486b63-3229-460f-8bbc-ccd872dc6e36/%7Bfa4f90c2-d743-4f85-9ee6-4e4b1dca94c6%7D.js:9:5
[task 2016-11-02T12:53:35.854423Z] 12:53:35     INFO -  @moz-extension://0e486b63-3229-460f-8bbc-ccd872dc6e36/%7Bfa4f90c2-d743-4f85-9ee6-4e4b1dca94c6%7D.js:1:11
[task 2016-11-02T12:53:35.854596Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.854864Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.855046Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.855250Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.855690Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "Failed to load module resource://services-sync/engines/extension-storage.js." {file: "resource://gre/modules/XPCOMUtils.jsm" line: 279}]
[task 2016-11-02T12:53:35.856056Z] 12:53:35     INFO -  XPCU_moduleLambda@resource://gre/modules/XPCOMUtils.jsm:279:9
[task 2016-11-02T12:53:35.856405Z] 12:53:35     INFO -  XPCU_defineLazyGetter/<.get@resource://gre/modules/XPCOMUtils.jsm:198:21
[task 2016-11-02T12:53:35.856735Z] 12:53:35     INFO -  @resource://gre/modules/ExtensionStorageSync.jsm:317:1
[task 2016-11-02T12:53:35.857046Z] 12:53:35     INFO -  XPCU_moduleLambda@resource://gre/modules/XPCOMUtils.jsm:273:9
[task 2016-11-02T12:53:35.857394Z] 12:53:35     INFO -  XPCU_defineLazyGetter/<.get@resource://gre/modules/XPCOMUtils.jsm:198:21
[task 2016-11-02T12:53:35.857727Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.get@chrome://extensions/content/ext-storage.js:36:11
[task 2016-11-02T12:53:35.858001Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.858304Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.858624Z] 12:53:35     INFO -  call@resource://gre/modules/Extension.jsm:528:51
[task 2016-11-02T12:53:35.858868Z] 12:53:35     INFO -  receiveMessage@resource://gre/modules/Extension.jsm:452:9
[task 2016-11-02T12:53:35.859149Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.860171Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.860238Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.860291Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.860442Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "Failed to load module resource://gre/modules/ExtensionStorageSync.jsm." {file: "resource://gre/modules/XPCOMUtils.jsm" line: 279}]
[task 2016-11-02T12:53:35.860548Z] 12:53:35     INFO -  XPCU_moduleLambda@resource://gre/modules/XPCOMUtils.jsm:279:9
[task 2016-11-02T12:53:35.861350Z] 12:53:35     INFO -  XPCU_defineLazyGetter/<.get@resource://gre/modules/XPCOMUtils.jsm:198:21
[task 2016-11-02T12:53:35.861443Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.get@chrome://extensions/content/ext-storage.js:36:11
[task 2016-11-02T12:53:35.861526Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.861763Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.861860Z] 12:53:35     INFO -  call@resource://gre/modules/Extension.jsm:528:51
[task 2016-11-02T12:53:35.861923Z] 12:53:35     INFO -  receiveMessage@resource://gre/modules/Extension.jsm:452:9
[task 2016-11-02T12:53:35.863345Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.864003Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.864611Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.864859Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.865652Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "[Exception... "Component returned failure code: 0x80040111 (NS_ERROR_NOT_AVAILABLE) [nsIXPCComponents_Utils.import]"  nsresult: "0x80040111 (NS_ERROR_NOT_AVAILABLE)"  location: "JS frame :: resource://gre/modules/XPCOMUtils.jsm :: XPCU_moduleLambda :: line 273"  data: no]"]
[task 2016-11-02T12:53:35.865852Z] 12:53:35     INFO -  XPCU_moduleLambda@resource://gre/modules/XPCOMUtils.jsm:273:9
[task 2016-11-02T12:53:35.866279Z] 12:53:35     INFO -  XPCU_defineLazyGetter/<.get@resource://gre/modules/XPCOMUtils.jsm:198:21
[task 2016-11-02T12:53:35.866585Z] 12:53:35     INFO -  @resource://gre/modules/ExtensionStorageSync.jsm:317:1
[task 2016-11-02T12:53:35.866795Z] 12:53:35     INFO -  XPCU_moduleLambda@resource://gre/modules/XPCOMUtils.jsm:273:9
[task 2016-11-02T12:53:35.867026Z] 12:53:35     INFO -  XPCU_defineLazyGetter/<.get@resource://gre/modules/XPCOMUtils.jsm:198:21
[task 2016-11-02T12:53:35.868023Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.get@chrome://extensions/content/ext-storage.js:36:11
[task 2016-11-02T12:53:35.868099Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.868162Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.868222Z] 12:53:35     INFO -  call@resource://gre/modules/Extension.jsm:528:51
[task 2016-11-02T12:53:35.868286Z] 12:53:35     INFO -  receiveMessage@resource://gre/modules/Extension.jsm:452:9
[task 2016-11-02T12:53:35.868426Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.868694Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.868886Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.869175Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.869458Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "ExtensionStorageSync is not defined" {file: "chrome://extensions/content/ext-storage.js" line: 39}]
[task 2016-11-02T12:53:35.869681Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.set@chrome://extensions/content/ext-storage.js:39:11
[task 2016-11-02T12:53:35.869954Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.870148Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.870397Z] 12:53:35     INFO -  call@resource://gre/modules/Extension.jsm:528:51
[task 2016-11-02T12:53:35.870637Z] 12:53:35     INFO -  receiveMessage@resource://gre/modules/Extension.jsm:452:9
[task 2016-11-02T12:53:35.870938Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.871265Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.871555Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.871823Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.872186Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "ExtensionStorageSync is not defined" {file: "chrome://extensions/content/ext-storage.js" line: 42}]
[task 2016-11-02T12:53:35.872648Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.remove@chrome://extensions/content/ext-storage.js:42:11
[task 2016-11-02T12:53:35.872780Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.873106Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.873226Z] 12:53:35     INFO -  call@resource://gre/modules/Extension.jsm:528:51
[task 2016-11-02T12:53:35.873429Z] 12:53:35     INFO -  receiveMessage@resource://gre/modules/Extension.jsm:452:9
[task 2016-11-02T12:53:35.873639Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.873933Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.874218Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.874391Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.874760Z] 12:53:35     INFO -  "CONSOLE_MESSAGE: (error) [JavaScript Error: "ExtensionStorageSync is not defined" {file: "chrome://extensions/content/ext-storage.js" line: 45}]
[task 2016-11-02T12:53:35.875203Z] 12:53:35     INFO -  storageApiFactory/<.storage.sync.clear@chrome://extensions/content/ext-storage.js:45:11
[task 2016-11-02T12:53:35.875339Z] 12:53:35     INFO -  callAsyncFunction@resource://gre/modules/ExtensionUtils.jsm:1735:41
[task 2016-11-02T12:53:35.875564Z] 12:53:35     INFO -  inject/stub@resource://gre/modules/Schemas.jsm:1580:16
[task 2016-11-02T12:53:35.875867Z] 12:53:35     INFO -  call@resource://gre/modules/Extension.jsm:528:51
[task 2016-11-02T12:53:35.876114Z] 12:53:35     INFO -  receiveMessage@resource://gre/modules/Extension.jsm:452:9
[task 2016-11-02T12:53:35.876351Z] 12:53:35     INFO -  _do_main@/mnt/sdcard/tests/xpc/head.js:210:5
[task 2016-11-02T12:53:35.876614Z] 12:53:35     INFO -  _execute_test@/mnt/sdcard/tests/xpc/head.js:545:5
[task 2016-11-02T12:53:35.876794Z] 12:53:35     INFO -  @-e:1:1
[task 2016-11-02T12:53:35.876974Z] 12:53:35     INFO -  "
[task 2016-11-02T12:53:35.877399Z] 12:53:35  WARNING -  TEST-UNEXPECTED-FAIL | toolkit/components/extensions/test/xpcshell/test_ext_storage.js | test_config_flag_needed - [test_config_flag_needed : 86] storage.sync.get is behind a flag - Expected: Please set webextensions.storage.sync.enabled to true in about:config, Actual: An unexpected error occurred - false == true
[task 2016-11-02T12:53:35.877852Z] 12:53:35     INFO -  resource://testing-common/ExtensionXPCShellUtils.jsm:attachListeners/<:86
[task 2016-11-02T12:53:35.877992Z] 12:53:35     INFO -  resource://gre/modules/ExtensionUtils.jsm:runSafeSyncWithoutClone:60
[task 2016-11-02T12:53:35.878221Z] 12:53:35     INFO -  resource://gre/modules/ExtensionUtils.jsm:emit/promises<:705
[task 2016-11-02T12:53:35.878535Z] 12:53:35     INFO -  resource://gre/modules/ExtensionUtils.jsm:emit:704
[task 2016-11-02T12:53:35.878879Z] 12:53:35     INFO -  resource://gre/modules/Extension.jsm:emit:1475
[task 2016-11-02T12:53:35.879115Z] 12:53:35     INFO -  chrome://extensions/content/ext-test.js:testApiFactory/<.test.assertEq:71
[task 2016-11-02T12:53:35.879329Z] 12:53:35     INFO -  resource://gre/modules/ExtensionUtils.jsm:callFunctionNoReturn:1729
[task 2016-11-02T12:53:35.879505Z] 12:53:35     INFO -  resource://gre/modules/Schemas.jsm:inject/stub:1586
[task 2016-11-02T12:53:35.879751Z] 12:53:35     INFO -  resource://gre/modules/Extension.jsm:call:528
[task 2016-11-02T12:53:35.880171Z] 12:53:35     INFO -  resource://gre/modules/Extension.jsm:receiveMessage:452
[task 2016-11-02T12:53:35.880260Z] 12:53:35     INFO -  /mnt/sdcard/tests/xpc/head.js:_do_main:210
[task 2016-11-02T12:53:35.880523Z] 12:53:35     INFO -  /mnt/sdcard/tests/xpc/head.js:_execute_test:545


Linux
=====

[task 2016-11-02T15:01:06.527068Z] 15:01:06     INFO -  TEST-PASS | toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js | test_extension_id_to_collection_id - [test_extension_id_to_collection_id : 446] "6584b0153336fb274912b31a3225c15a92b703cdc3adfe1917c1aa43122a52b8" == "6584b0153336fb274912b31a3225c15a92b703cdc3adfe1917c1aa43122a52b8"
[task 2016-11-02T15:01:06.528639Z] 15:01:06     INFO -  (xpcshell/head.js) | test run_next_test 1 finished (2)
[task 2016-11-02T15:01:06.530309Z] 15:01:06     INFO -  (xpcshell/head.js) | test run_next_test 2 pending (2)
[task 2016-11-02T15:01:06.532977Z] 15:01:06     INFO -  (xpcshell/head.js) | test test_extension_id_to_collection_id finished (2)
[task 2016-11-02T15:01:06.534834Z] 15:01:06     INFO -  toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js | Starting ensureKeysFor_posts_new_keys
[task 2016-11-02T15:01:06.536762Z] 15:01:06     INFO -  (xpcshell/head.js) | test ensureKeysFor_posts_new_keys pending (2)
[task 2016-11-02T15:01:06.540801Z] 15:01:06     INFO -  PROCESS | 11586 | JavaScript strict warning: resource://services-common/kinto-http-client.js, line 131: ReferenceError: reference to undefined property desc.initializer
[task 2016-11-02T15:01:06.542681Z] 15:01:06     INFO -  (xpcshell/head.js) | test run_next_test 2 finished (2)
[task 2016-11-02T15:01:06.544583Z] 15:01:06     INFO -  "CONSOLE_MESSAGE: (warn) [JavaScript Warning: "ReferenceError: reference to undefined property desc.initializer" {file: "resource://services-common/kinto-http-client.js" line: 131}]"
[task 2016-11-02T15:01:06.546521Z] 15:01:06     INFO -  Unexpected exception Error: Could not open connection to /tmp/xpc-profile-165C_w/storage-sync.sqlite: 2153971713 at resource://gre/modules/Sqlite.jsm:945
[task 2016-11-02T15:01:06.548331Z] 15:01:06     INFO -  openConnection/</<@resource://gre/modules/Sqlite.jsm:945:16
[task 2016-11-02T15:01:06.551173Z] 15:01:06     INFO -  _do_main@/home/worker/workspace/build/tests/xpcshell/head.js:210:5
[task 2016-11-02T15:01:06.553224Z] 15:01:06     INFO -  _execute_test@/home/worker/workspace/build/tests/xpcshell/head.js:545:5
[task 2016-11-02T15:01:06.554841Z] 15:01:06     INFO -  @-e:1:1
[task 2016-11-02T15:01:06.556538Z] 15:01:06     INFO -  open@resource://services-common/kinto-offline-client.js:194:12
[task 2016-11-02T15:01:06.559417Z] 15:01:06     INFO -  this.cryptoCollection.incrementUses<@resource://gre/modules/ExtensionStorageSync.jsm:203:38
[task 2016-11-02T15:01:06.561256Z] 15:01:06     INFO -  this.cryptoCollection.getKeyRing<@resource://gre/modules/ExtensionStorageSync.jsm:258:35
[task 2016-11-02T15:01:06.563020Z] 15:01:06     INFO -  this.ExtensionStorageSync.ensureKeysFor<@resource://gre/modules/ExtensionStorageSync.jsm:585:34
[task 2016-11-02T15:01:06.565206Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys/</<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:460:27
[task 2016-11-02T15:01:06.567420Z] 15:01:06     INFO -  withSignedInUser@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:332:12
[task 2016-11-02T15:01:06.571541Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys/<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:453:12
[task 2016-11-02T15:01:06.573472Z] 15:01:06     INFO -  withContextAndServer/</<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:310:14
[task 2016-11-02T15:01:06.575301Z] 15:01:06     INFO -  withServer@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:298:12
[task 2016-11-02T15:01:06.577181Z] 15:01:06     INFO -  withContextAndServer/<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:309:12
[task 2016-11-02T15:01:06.579185Z] 15:01:06     INFO -  withContext@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/head_sync.js:43:12
[task 2016-11-02T15:01:06.582030Z] 15:01:06     INFO -  withSyncContext@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/head_sync.js:63:12
[task 2016-11-02T15:01:06.585312Z] 15:01:06     INFO -  withContextAndServer@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:308:10
[task 2016-11-02T15:01:06.587308Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:452:10
[task 2016-11-02T15:01:06.589669Z] 15:01:06     INFO -  _run_next_test@/home/worker/workspace/build/tests/xpcshell/head.js:1566:9
[task 2016-11-02T15:01:06.592987Z] 15:01:06     INFO -  do_execute_soon/<.run@/home/worker/workspace/build/tests/xpcshell/head.js:713:9
[task 2016-11-02T15:01:06.594799Z] 15:01:06     INFO -  _do_main@/home/worker/workspace/build/tests/xpcshell/head.js:210:5
[task 2016-11-02T15:01:06.596575Z] 15:01:06     INFO -  _execute_test@/home/worker/workspace/build/tests/xpcshell/head.js:545:5
[task 2016-11-02T15:01:06.598361Z] 15:01:06     INFO -  @-e:1:1
[task 2016-11-02T15:01:06.600479Z] 15:01:06     INFO -  exiting test
[task 2016-11-02T15:01:06.603043Z] 15:01:06  WARNING -  TEST-UNEXPECTED-FAIL | toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js | ensureKeysFor_posts_new_keys - [ensureKeysFor_posts_new_keys : 221] A promise chain failed to handle a rejection: Error: Could not open connection to /tmp/xpc-profile-165C_w/storage-sync.sqlite: 2153971713 - rejection date: Wed Nov 02 2016 15:01:06 GMT+0000 (UTC) - stack: openConnection/</<@resource://gre/modules/Sqlite.jsm:945:16
[task 2016-11-02T15:01:06.605354Z] 15:01:06     INFO -  _do_main@/home/worker/workspace/build/tests/xpcshell/head.js:210:5
[task 2016-11-02T15:01:06.607437Z] 15:01:06     INFO -  _execute_test@/home/worker/workspace/build/tests/xpcshell/head.js:545:5
[task 2016-11-02T15:01:06.609411Z] 15:01:06     INFO -  @-e:1:1
[task 2016-11-02T15:01:06.611398Z] 15:01:06     INFO -  open@resource://services-common/kinto-offline-client.js:194:12
[task 2016-11-02T15:01:06.613525Z] 15:01:06     INFO -  this.cryptoCollection.incrementUses<@resource://gre/modules/ExtensionStorageSync.jsm:203:38
[task 2016-11-02T15:01:06.616238Z] 15:01:06     INFO -  this.cryptoCollection.getKeyRing<@resource://gre/modules/ExtensionStorageSync.jsm:258:35
[task 2016-11-02T15:01:06.618075Z] 15:01:06     INFO -  this.ExtensionStorageSync.ensureKeysFor<@resource://gre/modules/ExtensionStorageSync.jsm:585:34
[task 2016-11-02T15:01:06.620056Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys/</<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:460:27
[task 2016-11-02T15:01:06.625267Z] 15:01:06     INFO -  withSignedInUser@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:332:12
[task 2016-11-02T15:01:06.627482Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys/<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:453:12
[task 2016-11-02T15:01:06.629321Z] 15:01:06     INFO -  withContextAndServer/</<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:310:14
[task 2016-11-02T15:01:06.631114Z] 15:01:06     INFO -  withServer@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:298:12
[task 2016-11-02T15:01:06.633064Z] 15:01:06     INFO -  withContextAndServer/<@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:309:12
[task 2016-11-02T15:01:06.634909Z] 15:01:06     INFO -  withContext@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/head_sync.js:43:12
[task 2016-11-02T15:01:06.636710Z] 15:01:06     INFO -  withSyncContext@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/head_sync.js:63:12
[task 2016-11-02T15:01:06.638775Z] 15:01:06     INFO -  withContextAndServer@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:308:10
[task 2016-11-02T15:01:06.640957Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:452:10
[task 2016-11-02T15:01:06.643048Z] 15:01:06     INFO -  _run_next_test@/home/worker/workspace/build/tests/xpcshell/head.js:1566:9
[task 2016-11-02T15:01:06.645118Z] 15:01:06     INFO -  do_execute_soon/<.run@/home/worker/workspace/build/tests/xpcshell/head.js:713:9
[task 2016-11-02T15:01:06.649781Z] 15:01:06     INFO -  _do_main@/home/worker/workspace/build/tests/xpcshell/head.js:210:5
[task 2016-11-02T15:01:06.652024Z] 15:01:06     INFO -  _execute_test@/home/worker/workspace/build/tests/xpcshell/head.js:545:5
[task 2016-11-02T15:01:06.653698Z] 15:01:06     INFO -  @-e:1:1
[task 2016-11-02T15:01:06.655413Z] 15:01:06     INFO -  closeExtensionCollection<@resource://gre/modules/ExtensionStorageSync.jsm:357:9
[task 2016-11-02T15:01:06.657689Z] 15:01:06     INFO -  cleanUpForContext@resource://gre/modules/ExtensionStorageSync.jsm:380:12
[task 2016-11-02T15:01:06.661104Z] 15:01:06     INFO -  getCollection/<.close@resource://gre/modules/ExtensionStorageSync.jsm:783:22
[task 2016-11-02T15:01:06.663379Z] 15:01:06     INFO -  unload@resource://gre/modules/ExtensionUtils.jsm:494:7
[task 2016-11-02T15:01:06.667124Z] 15:01:06     INFO -  withContext@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/head_sync.js:45:11
[task 2016-11-02T15:01:06.669594Z] 15:01:06     INFO -  withSyncContext@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/head_sync.js:63:12
[task 2016-11-02T15:01:06.672161Z] 15:01:06     INFO -  withContextAndServer@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:308:10
[task 2016-11-02T15:01:06.675008Z] 15:01:06     INFO -  ensureKeysFor_posts_new_keys@/home/worker/workspace/build/tests/xpcshell/tests/toolkit/components/extensions/test/xpcshell/test_ext_storage_sync.js:452:10
[task 2016-11-02T15:01:06.678721Z] 15:01:06     INFO -  TaskImpl_run@resource://gre/modules/Task.jsm:323:42
[task 2016-11-02T15:01:06.681093Z] 15:01:06     INFO -  Handler.prototype.process@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:940:21
[task 2016-11-02T15:01:06.683680Z] 15:01:06     INFO -  this.PromiseWalker.walkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:816:7
[task 2016-11-02T15:01:06.689608Z] 15:01:06     INFO -  Promise*this.PromiseWalker.scheduleWalkerLoop@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:747:11
[task 2016-11-02T15:01:06.692100Z] 15:01:06     INFO -  this.PromiseWalker.schedulePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:779:7
[task 2016-11-02T15:01:06.695575Z] 15:01:06     INFO -  this.PromiseWalker.completePromise@resource://gre/modules/Promise.jsm -> resource://gre/modules/Promise-backend.js:714:7
[task 2016-11-02T15:01:06.697711Z] 15:01:06     INFO -  _do_main@/home/worker/workspace/build/tests/xpcshell/head.js:210:5
[task 2016-11-02T15:01:06.699778Z] 15:01:06     INFO -  _execute_test@/home/worker/workspace/build/tests/xpcshell/head.js:545:5
[task 2016-11-02T15:01:06.704816Z] 15:01:06     INFO -  @-e:1:1
[task 2016-11-02T15:01:06.707056Z] 15:01:06     INFO -   - false == true
[task 2016-11-02T15:01:06.709305Z] 15:01:06     INFO -  resource://testing-common/PromiseTestUtils.jsm:assertNoUncaughtRejections:221
[task 2016-11-02T15:01:06.711570Z] 15:01:06     INFO -  /home/worker/workspace/build/tests/xpcshell/head.js:_execute_test:546
[task 2016-11-02T15:01:06.713761Z] 15:01:06     INFO -  -e:null:1
[task 2016-11-02T15:01:06.716120Z] 15:01:06     INFO -  exiting test
Flags: needinfo?(eglassercamp)
Comment on attachment 8809119 [details]
Bug 1253740 - Try to build and function even on Android,

https://reviewboard.mozilla.org/r/91760/#review91714

I guess this is fine...
Attachment #8809119 - Flags: review?(kmaglione+bmo) → review+
All right! The fundamental problem is that Android doesn't ship services/sync, which ExtensionStorageSync.jsm depends on, and FxAccounts.jsm isn't present on Android either. The latest patch works around that by making the relevant code conditional on Android: chrome.storage.sync will be present on Android, but it won't actually sync. I've punted that work to bug 1316442; for now I hope to get this merged without anything breaking.
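To illustrate the approach (this is a hedged sketch, not the landed patch): the API object stays present on every platform, but the sync entry point degrades to a no-op where the sync dependencies are missing. The function names (makeStorageSyncApi, memoryBackend) and the shape of the backend are illustrative assumptions; real code would key off AppConstants.platform instead of a boolean.

```javascript
// Sketch only: how chrome.storage.sync can remain usable for local get/set
// on a platform (like Android) that lacks the services/sync machinery.
// All names here are hypothetical, not the actual module's API.
function makeStorageSyncApi(isAndroid, backend) {
  const api = {
    // Plain get/set keep working everywhere, backed by local storage.
    get: (keys) => Promise.resolve(backend.get(keys)),
    set: (items) => Promise.resolve(backend.set(items)),
  };
  if (isAndroid) {
    // Syncing is stubbed out until the platform grows the dependencies
    // (tracked separately in bug 1316442).
    api.syncAll = () => Promise.resolve(); // no-op: nothing to sync against
  } else {
    api.syncAll = () => backend.syncWithServer();
  }
  return api;
}

// Minimal in-memory backend, used only for demonstration.
function memoryBackend() {
  const store = new Map();
  return {
    get: (keys) =>
      Object.fromEntries(
        keys.filter((k) => store.has(k)).map((k) => [k, store.get(k)])
      ),
    set: (items) => {
      for (const [k, v] of Object.entries(items)) {
        store.set(k, v);
      }
    },
    syncWithServer: () => Promise.resolve("synced"),
  };
}
```

The point of the design is that extensions calling storage.sync.get/set on Android see the same API surface and failure-free local behaviour; only the background synchronization is silently absent.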
Flags: needinfo?(eglassercamp)
Keywords: checkin-needed
Autoland couldn't rebase this for landing.
Keywords: checkin-needed
Rebased and pushed to MozReview. While doing so, I noticed a mochitest (test_ext_storage_content.html) that should have been updated at the same time as the xpcshell test_ext_storage.js, so I did that too.
Keywords: checkin-needed
Autoland couldn't rebase again. Please make sure to rebase it against the autoland repo.
Keywords: checkin-needed
I have rebased it against the current state of autoland, and the subsequent try build looks OK so far.
Keywords: checkin-needed
Pushed by ryanvm@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/b98fa03d8c15
Implement storage.sync, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/653788e146f2
Introduce extension-storage engine with a sanity test, r=markh
https://hg.mozilla.org/integration/autoland/rev/4fbff333994f
Add crypto, including tests, r=rnewman
https://hg.mozilla.org/integration/autoland/rev/96da5e6944b8
Add code that syncs and tests, r=bsilverberg,kmag,markh
https://hg.mozilla.org/integration/autoland/rev/d3d89d4e2d87
Introduce extensionIdToCollectionId, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/5b5338a2baeb
Hash extension ID to obfuscate installed add-ons, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/e1312ab53299
Define checkSyncKeyRing() which reuploads keys when passwords change, r=markh
https://hg.mozilla.org/integration/autoland/rev/1b13fe394b66
Handle password resets more correctly, r=markh
https://hg.mozilla.org/integration/autoland/rev/903890f218dd
Try to build and function even on Android, r=kmag
Keywords: checkin-needed
This conflicted with changeset 4e31a2cb26bd, which landed sometime between my launching the try build and this getting merged. Rebased and fixed the error.
Flags: needinfo?(eglassercamp)
Keywords: checkin-needed
Pushed by maglione.k@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/bbfdfc4befb4
Implement storage.sync, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/6e838e559321
Introduce extension-storage engine with a sanity test, r=markh
https://hg.mozilla.org/integration/autoland/rev/bd3541139b99
Add crypto, including tests, r=rnewman
https://hg.mozilla.org/integration/autoland/rev/b1405d5fda46
Add code that syncs and tests, r=bsilverberg,kmag,markh
https://hg.mozilla.org/integration/autoland/rev/608fb9db68b9
Introduce extensionIdToCollectionId, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/d4b969307f46
Hash extension ID to obfuscate installed add-ons, r=bsilverberg,kmag
https://hg.mozilla.org/integration/autoland/rev/aa6abbd36dd1
Define checkSyncKeyRing() which reuploads keys when passwords change, r=markh
https://hg.mozilla.org/integration/autoland/rev/1c73de6813cd
Handle password resets more correctly, r=markh
https://hg.mozilla.org/integration/autoland/rev/acd74a4affe1
Try to build and function even on Android, r=kmag
Keywords: checkin-needed
Depends on: 1319742
Depends on: 1320324
See Also: → 1331179
You need to log in before you can comment on or make changes to this bug.