Closed Bug 1368209 Opened 7 years ago Closed 7 years ago

Break up `Engine::_processIncoming`

Categories

(Firefox :: Sync, enhancement, P1)

Tracking


RESOLVED FIXED
Firefox 58
Tracking Status
firefox58 --- fixed

People

(Reporter: lina, Assigned: lina)

References

(Depends on 1 open bug, Blocks 2 open bugs)

Details

Attachments

(3 files)

Between the massive record handler that's no longer needed now that we have download batching, the leftover mobile branches (removed in bug 1366081), and the re-fetching and applying of records that failed during the last sync, `Engine::_processIncoming` was at one time sensible, but is now incomprehensible.

Let's break it up into smaller chunks, so that we understand what it does.
This doesn't pass all tests yet, but the integration tests in
`test_telemetry` work, so I'm putting this up as a first cut. I don't think
the two-way diff view is going to be helpful here...it's best to look
at the "before" and "after" independently.

This also removes the second `.get` fork ("Mobile: check if we got the
maximum that we requested"), which seems like it's doing the wrong thing.
You pointed out on IRC that we're fetching IDs for the same collection
again, with the same download limit, but a different sort order. On
Desktop, this suggests we do a lot of busy work that doesn't tell us
anything we don't already know...and, thanks to that download limit, we
still do a partial sync for history.

I think this means we can also remove `toFetch` and just use
`previousFailed`. I didn't do that in this patch because at least
`test_processIncoming_error` relies on it, and maybe others.

The re-buffering in `recordHandler` is totally unnecessary, but I didn't
want to make this patch bigger than it already was. We can do this in
part 2.

The names and return values could also use some work. I tried to break
this up into roughly discrete stages, but between the recovery
handler and exceptions everywhere, it's a bit annoying. I really miss
Rust's `Result` type.
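For contrast, here's a minimal sketch of the kind of Result-style convention I mean — purely hypothetical, not part of the patch — where each stage returns an explicit success-or-error value instead of throwing past the caller:

```javascript
// Hypothetical sketch only, not in the patch. A Result-like convention
// in JS: wrap each stage so it returns { ok: true, value } on success or
// { ok: false, error } on failure.
function ok(value) { return { ok: true, value }; }
function err(error) { return { ok: false, error }; }

// Run one stage, converting a thrown exception into an error result.
function tryStage(fn, input) {
  try {
    return ok(fn(input));
  } catch (error) {
    return err(error);
  }
}
```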

Anyway, please have a look, and let me know what you think and whether
this is an improvement. Part of the rewrite was just me trying to
understand how this function works.

Review commit: https://reviewboard.mozilla.org/r/143534/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/143534/
Attachment #8871999 - Flags: feedback?(eoger)
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review147274

::: services/sync/modules/engines.js:1067
(Diff revision 1)
>    /**
>     * Process incoming records.
>     * In the most awful and untestable way possible.
>     * This now accepts something that makes testing vaguely less impossible.
>     */
>    _processIncoming(newitems) {

`newitems` isn't used anymore, except in tests. Might be worth changing `fetchAndApply{New, PreviousFailed}` to use it, or maybe have them override `itemSource` instead.

::: services/sync/modules/engines.js:1076
(Diff revision 1)
>      let count = {applied: 0, failed: 0, newFailed: 0, reconciled: 0};
> -    let handled = [];
> -    let applyBatch = [];
> -    let failed = [];
>      let failedInPreviousSync = this.previousFailed;
>      let fetchBatch = Utils.arrayUnion(this.toFetch, failedInPreviousSync);

`toFetch` can go, but it's still used by tests. There may be value in keeping it, but not persisting to disk.

::: services/sync/modules/engines.js:1174
(Diff revision 1)
> +    newitems.ids   = ids;
> +
> +    let records = [];
> +    newitems.recordHandler = record => records.push(record);
> +
> +    let resp = Async.promiseSpinningly(newitems.get());

I think this could also use `getBatched`, but I used `get` to match what was there before. OTOH, that makes the `recordHandler` semantics different, so it's probably worth changing, anyway.

::: services/sync/modules/engines.js:1202
(Diff revision 1)
> -      if (self.lastModified == null || item.modified > self.lastModified)
> -        self.lastModified = item.modified;
> +      if (this.lastModified == null || item.modified > this.lastModified) {
> +        this.lastModified = item.modified;
> +      }
>  
>        // Track the collection for the WBO.
> -      item.collection = self.name;
> +      item.collection = this.name;

I'm not sure if this is necessary anymore. `CryptoWrapper` objects should know their collection.

::: services/sync/modules/engines.js:1217
(Diff revision 1)
> -            }
> -          }
>  
> -          switch (strategy) {
> -            case null:
> -              // Retry succeeded! No further handling.
> +        case SyncEngine.kFilterResult.abort:
> +          this._noteApplyFailure();
> +          aborting = error.cause;

Just throw here, we don't need any cleanup logic to run before aborting.

::: services/sync/modules/engines.js:1237
(Diff revision 1)
> -      }
> +    }
>  
> -      let shouldApply;
> +    // At this point, we've filtered out duplicates and records that failed to
> +    // decrypt, and re-threw exceptions from reconciliation. Apply the records
> +    // in chunks.
> +    for (let records of Utils.chunkArray(recordsToApply, this.applyIncomingBatchSize)) {

The default `applyIncomingBatchSize` is 1 (o_O), which is what the bookmarks engine uses. Should we consider bumping that?

Alternatively, as we start using async storage APIs (which I think is now...mostly everything), do we even need this extra chunking?

::: services/sync/modules/engines.js:1257
(Diff revision 1)
> -      self._store._sleep(0);
> -    };
>  
> -    // Only bother getting data from the server if there's new things
> -    if (this.lastModified == null || this.lastModified > this.lastSync) {
> -      let resp = Async.promiseSpinningly(newitems.getBatched());
> +    // Persist the IDs of any records that failed to apply. We'll try to fetch
> +    // and apply them again on the next sync.
> +    if (failedIDs.length) {

Move this into a separate function, and then we can remove this use of `aborting` and just throw above, too.

::: services/sync/modules/engines.js:1280
(Diff revision 1)
> +   *
> +   * This function filters out records that fail to decrypt, records that should
> +   * be deleted on the server without applying, records that fail to reconcile,
> +   * and reconciled duplicates.
> +   */
> +  _tryFilter(item) {

There are probably better names for these functions. The fact that they have so many side effects is a bit spooky, but that's par for the course when it comes to Sync. :-)

::: services/sync/modules/engines.js:1327
(Diff revision 1)
> +   */
> +  _tryDecryptOrRecover(item) {
> +    let result;
> +    try {
> +      if (this._tryDecrypt(item)) {
> +        result = SyncEngine.kFilterResult.apply;

I wish I could just return here, but if I don't also return outside the try-catch block, ESLint complains that the function doesn't have a consistent return value.
Attachment #8871999 - Flags: feedback?(markh)
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review147370

I think this is heading towards being an improvement, but it probably still requires some refactoring to make it a massive improvement :) In particular, `_tryApply` has introduced some confusion (or, maybe better said, "isn't as clear as it could be"). I agree with your comments about some of the batching being strange, which we should see if we can rationalize - Ed is also working on that - which leads to my next point: I'm not sure it makes sense for both of these patches to be in flight at the same time, as we run the risk of introducing bugs as they are rebased against each other.

IIUC, Ed is looking at a "part 1" to his async patch which attempts to clarify some of the batching semantics. I'm not sure where he is with that, but ISTM it might make more sense to help him get that "part 1" landed before going too much further with this. (I think his "part 2" patch probably doesn't have the same problem - he'll still need to rebase against this, but that should remain mechanical.)

::: services/common/utils.js:635
(Diff revision 1)
>      converter.onStopRequest(null, null, null);
>  
>      return result;
>    },
> +
> +  *chunkArray(array, chunkLength) {

We probably want docs here. I also find generators difficult to name - on one hand, I think something like "generateArrayChunks()" would be better, but OTOH that's obviously redundant given it's clearly a generator. (Just musing rather than saying it needs to change.)
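For reference, a documented implementation along these lines might look like the following — a sketch of the patch's `Utils.chunkArray`, not necessarily the exact landed code:

```javascript
/**
 * Yields successive slices of `array`, each at most `chunkLength` items
 * long. The final chunk may be shorter if the array length isn't an
 * exact multiple of `chunkLength`.
 */
function* chunkArray(array, chunkLength) {
  for (let i = 0; i < array.length; i += chunkLength) {
    yield array.slice(i, i + chunkLength);
  }
}
```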

::: services/sync/modules/engines.js:1067
(Diff revision 1)
>    /**
>     * Process incoming records.
>     * In the most awful and untestable way possible.
>     * This now accepts something that makes testing vaguely less impossible.
>     */
>    _processIncoming(newitems) {

this has confused me before too - consider tweaking the comment above to make that more obvious.

::: services/sync/modules/engines.js:1076
(Diff revision 1)
>      let count = {applied: 0, failed: 0, newFailed: 0, reconciled: 0};
> -    let handled = [];
> -    let applyBatch = [];
> -    let failed = [];
>      let failedInPreviousSync = this.previousFailed;
>      let fetchBatch = Utils.arrayUnion(this.toFetch, failedInPreviousSync);

toFetch confuses me

::: services/sync/modules/engines.js:1083
(Diff revision 1)
>      this.previousFailed = [];
>  
> -    // Used (via exceptions) to allow the record handler/reconciliation/etc.
> -    // methods to signal that they would like processing of incoming records to
> -    // cease.
> -    let aborting = undefined;
> +    // Fetch new records from the server, if the engine's last modified time is
> +    // newer than our last sync time, or we've never synced before.
> +    if (this._hasNewRecords()) {
> +      let { applied, failed, reconciled } = this._fetchAndApplyNew();

I'd have expected you to pass |newitems| here (or maybe just remove the param from this function, although I expect that might make testing tricky)

::: services/sync/modules/engines.js:1174
(Diff revision 1)
> +    newitems.ids   = ids;
> +
> +    let records = [];
> +    newitems.recordHandler = record => records.push(record);
> +
> +    let resp = Async.promiseSpinningly(newitems.get());

yeah, I think it is worth changing, and will probably make Ed's async patch simpler.

::: services/sync/modules/engines.js:1198
(Diff revision 1)
> +    let recordsToApply = [], failedIDs = [], reconciled = 0;
> +    let aborting;
> +    for (let item of items) {
>        // Grab a later last modified if possible
> -      if (self.lastModified == null || item.modified > self.lastModified)
> -        self.lastModified = item.modified;
> +      if (this.lastModified == null || item.modified > this.lastModified) {
> +        this.lastModified = item.modified;

Although your patch doesn't introduce it, ISTM that adjusting lastModified here *might* be wrong, particularly in the isShutdownException case - specifically, we might have bumped this, then hit a shutdown trying to apply the items, and fail to write them to the failed list, so the next sync will not find them there, and will not re-fetch them, as we've already bumped lastModified.

Not sure though...

::: services/sync/modules/engines.js:1202
(Diff revision 1)
> -      if (self.lastModified == null || item.modified > self.lastModified)
> -        self.lastModified = item.modified;
> +      if (this.lastModified == null || item.modified > this.lastModified) {
> +        this.lastModified = item.modified;
> +      }
>  
>        // Track the collection for the WBO.
> -      item.collection = self.name;
> +      item.collection = this.name;

yeah, agree it's a smell.

::: services/sync/modules/engines.js:1280
(Diff revision 1)
> +   *
> +   * This function filters out records that fail to decrypt, records that should
> +   * be deleted on the server without applying, records that fail to reconcile,
> +   * and reconciled duplicates.
> +   */
> +  _tryFilter(item) {

It also seems a little confusing re what this function does - it seems to both take some actions *and* return what actions should be taken by the caller - e.g., at face value, SyncEngine.kFilterResult.apply means "caller should apply" whereas SyncEngine.kFilterResult.reconcile means "I reconciled".

::: services/sync/modules/engines.js:1283
(Diff revision 1)
> +   * and reconciled duplicates.
> +   */
> +  _tryFilter(item) {
> +    let result = this._tryDecryptOrRecover(item);
> +    if (result != SyncEngine.kFilterResult.apply) {
> +      return { result, error: null };

This seems confusing - we have an error case that is returning error: null. It seems it might be better to have `_tryDecryptOrRecover` just throw? Also, the "or" in the name seems misleading too - maybe something like `_tryDecryptWithRecovery` or something?

::: services/sync/modules/engines.js:1369
(Diff revision 1)
> +        return true;
> +      } catch (ex) {
> +        if (!Utils.isHMACMismatch(ex)) {
> +          throw ex;
> -      }
> +        }
> +        let strategy = this.handleHMACMismatch(item, true);

I'm a little confused why both this function and the one above appear to be doing some degree of recovery?
Attachment #8871999 - Flags: feedback?(markh) → feedback+
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review147646

This is a good first attempt at breaking up this very long _processIncoming function :)
Like Mark said, I think _tryFilter could gain a little clarity around the error handling.
I am okay with blocking my async patch on this one.

::: services/sync/modules/engines.js:1067
(Diff revision 1)
>    /**
>     * Process incoming records.
>     * In the most awful and untestable way possible.
>     * This now accepts something that makes testing vaguely less impossible.
>     */
>    _processIncoming(newitems) {

I'd rather have a simpler way to mock itemSource in tests :)

::: services/sync/modules/engines.js:1067
(Diff revision 1)
>    /**
>     * Process incoming records.
>     * In the most awful and untestable way possible.
>     * This now accepts something that makes testing vaguely less impossible.
>     */
>    _processIncoming(newitems) {

Way clearer method :thumbsup:

::: services/sync/modules/engines.js:1076
(Diff revision 1)
>      let count = {applied: 0, failed: 0, newFailed: 0, reconciled: 0};
> -    let handled = [];
> -    let applyBatch = [];
> -    let failed = [];
>      let failedInPreviousSync = this.previousFailed;
>      let fetchBatch = Utils.arrayUnion(this.toFetch, failedInPreviousSync);

Kill it if possible

::: services/sync/modules/engines.js:1135
(Diff revision 1)
> +   */
> +  _fetchAndApplyNew() {
> +    let newitems = this.itemSource();
> +
> +    if (this.downloadLimit) {
> +      newitems.limit = this.downloadLimit;

Are we fine with the fact that this can result in partial syncs for the history engine? It's not a problem per se - the current code already does that - although it was 10k items (5+5); with your changes it would be only 5k.

::: services/sync/modules/engines.js:1161
(Diff revision 1)
> +   * Fetches and applies all records that failed to apply during the last sync.
> +   */
> +  _fetchAndApplyPreviousFailed(ids) {
> +    let newitems = this.itemSource();
> +
> +    // Reuse the original query, but get rid of the restricting params

Is this comment still correct?

::: services/sync/modules/engines.js:1194
(Diff revision 1)
> +   * Returns a triple containing the number of applied, failed, and reconciled
> +   * records.
> +   */
> +  _tryApply(items) {
> +    let recordsToApply = [], failedIDs = [], reconciled = 0;
> +    let aborting;

Could we name that variable |error| instead?

::: services/sync/modules/engines.js:1248
(Diff revision 1)
>            throw ex;
>          }
> +        // Catch any error that escapes from applyIncomingBatch. At present
> +        // those will all be abort events.
> +        this._log.warn("Got exception, aborting processIncoming", ex);
> +        aborting = ex;

Maybe this loop could be in a helper method, we could throw there and move away from |aborting|, but I know that tcsc is not very fond of having too many methods so I'll leave that to your discretion.

::: services/sync/modules/engines.js:1327
(Diff revision 1)
> +   */
> +  _tryDecryptOrRecover(item) {
> +    let result;
> +    try {
> +      if (this._tryDecrypt(item)) {
> +        result = SyncEngine.kFilterResult.apply;

If you add a default case to the switch statement at line 1330, that ESLint rule will pass.
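Something along these lines satisfies consistent-return once the switch has a default case — this is an illustrative stand-in, with the result constants and decrypt/recovery helpers passed in as parameters rather than the engine's actual internals:

```javascript
// Stand-in values; the real engine uses SyncEngine.kFilterResult and its
// own decrypt/HMAC-recovery helpers.
const kFilterResult = { apply: "apply", duplicate: "duplicate", abort: "abort" };

function tryDecryptOrRecover(item, tryDecrypt, handleHMACMismatch) {
  try {
    if (tryDecrypt(item)) {
      return kFilterResult.apply;
    }
    return kFilterResult.duplicate;
  } catch (ex) {
    switch (handleHMACMismatch(item)) {
      case "retry":
        // Retry succeeded; apply as normal.
        return kFilterResult.apply;
      case "ignore":
        return kFilterResult.duplicate;
      default:
        // With a default case, every code path explicitly returns, which
        // is what ESLint's consistent-return rule checks for.
        return kFilterResult.abort;
    }
  }
}
```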
Attachment #8871999 - Flags: feedback?(eoger) → feedback+
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review148124

::: services/sync/modules/engines.js:1265
(Diff revision 1)
> -    }
> -
> -    // Mobile: check if we got the maximum that we requested; get the rest if so.
> -    if (handled.length == newitems.limit) {
> -      let guidColl = new Collection(this.engineURL, null, this.service);
>  

I mentioned this briefly in IRC, but (a) I believe this block will also be entered in some cases for history and (b) I believe that zero tests enter this block.

If both of the above are true, I sadly think it would be prudent to add tests for this block *before* trying to land this patch to ensure that the semantics remain the same.
Priority: P1 → P3
Priority: P3 → P2
Assignee: kit → nobody
Status: ASSIGNED → NEW
Priority: P2 → P3
Assignee: nobody → kit
Status: NEW → ASSIGNED
Priority: P3 → P1
Comment on attachment 8924472 [details]
Bug 1368209 - Refactor `Engine::_processIncoming` into three stages.

https://reviewboard.mozilla.org/r/195754/#review200870

::: commit-message-620fa:8
(Diff revision 1)
> +* In the first stage, we fetch changed records, newest first, up to the
> +  download limit. We keep track of the oldest record modified time we
> +  see.
> +* Once we've fetched all records, we reconcile, noting records that
> +  fail to decrypt or reconcile for the next sync. We then ask the store
> +  to apply all remaining records. Previously, `applyIncomingBatchSize`

Note to self: remove remaining mentions of `applyIncomingBatchSize` from the engines, and jettison the `_STORE_BATCH_SIZE` constants.
Blocks: 1366067
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review201064

LGTM. It's good to fill out our test server's support also.

::: services/sync/tests/unit/head_helpers.js:661
(Diff revision 2)
>      _(`Actual structure for ${rootGuid}`, JSON.stringify(actual));
>      throw new Assert.constructor.AssertionError({ actual, expected, message });
>    }
>  }
> +
> +function shuffle(array) {

Don't we already have one of these somewhere? I guess just in test_sync_utils...
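For what it's worth, the helper is just a standard in-place Fisher-Yates shuffle; a minimal sketch (assuming the same in-place, array-returning semantics):

```javascript
// In-place Fisher-Yates shuffle: walk backwards, swapping each element
// with a randomly chosen element at or before it.
function shuffle(array) {
  for (let i = array.length - 1; i > 0; i--) {
    let j = Math.floor(Math.random() * (i + 1));
    [array[i], array[j]] = [array[j], array[i]];
  }
  return array;
}
```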
Attachment #8871999 - Flags: review?(tchiovoloni) → review+
Comment on attachment 8924471 [details]
Bug 1368209 - Add a test for fetching backlogged history records.

https://reviewboard.mozilla.org/r/195752/#review201062

Oof, shame we never looked into this before. Oh well, good thing we are getting to it now.
Attachment #8924471 - Flags: review?(tchiovoloni) → review+
Comment on attachment 8924472 [details]
Bug 1368209 - Refactor `Engine::_processIncoming` into three stages.

https://reviewboard.mozilla.org/r/195754/#review201076

Awesome! Let's make sure TPS also works with that change.
Attachment #8924472 - Flags: review?(eoger) → review+
Comment on attachment 8924472 [details]
Bug 1368209 - Refactor `Engine::_processIncoming` into three stages.

https://reviewboard.mozilla.org/r/195754/#review201074

I think this is a very good patch! It removes everything about that function that I understood and thought was dumb or bad, and manages to make the things I previously didn't understand much clearer. I have a few nits with the code, but nothing serious.

I think the approach Android and iOS use here, e.g. a high water mark (which is AIUI effectively a persisted version of this patch's oldestModified timestamp) is simpler, but I also think this approach sounds like it would be more robust. I haven't thought about the difference very much though, and it's possible I don't properly understand the iOS/Android sync HWM.

Anyway, please make sure it passes a full TPS run before landing.

::: services/sync/modules-testing/rotaryengine.js:103
(Diff revision 1)
>  
>  
>  this.RotaryEngine = function RotaryEngine(service) {
>    SyncEngine.call(this, "Rotary", service);
>    // Ensure that the engine starts with a clean slate.
> -  this.toFetch        = [];
> +  this.backlog        = [];

I'm not sure it's worth renaming this. toFetch is reasonably clear, and if we name it backlog then the folder we save them into will have a different name than the variable.

I suspect we don't want to add complexity for handling a rename of that folder also.

If you feel very strongly about this, I suppose using a constant for the folder name would probably be good enough, although I still feel that it would be a little confusing that the name was different in the profile.

::: services/sync/modules/bookmark_repair.js:199
(Diff revision 1)
>      for (let id of validationInfo.problems.serverMissing) {
>        engine.addForWeakUpload(id);
>      }
> -    let toFetch = engine.toFetch.concat(validationInfo.problems.clientMissing,
> +    let newBacklog = engine.backlog.concat(validationInfo.problems.clientMissing,
> -                                        validationInfo.problems.serverDeleted);
> +                                           validationInfo.problems.serverDeleted);
> -    engine.toFetch = Array.from(new Set(toFetch));
> +    engine.backlog = Array.from(new Set(newBacklog));

Uh, hm. This was already this way, but is it okay that we're stomping on the existing data here, instead of appending?

::: services/sync/modules/engines.js:889
(Diff revision 1)
>  
> -  get toFetch() {
> -    return this._toFetch;
> +  get backlog() {
> +    return this._backlog;
>    },
> -  set toFetch(val) {
> +  set backlog(val) {
>      // Coerce the array to a string for more efficient comparison.

I doubt this is more efficient than most implementations of a function like `arrayEquals`.

The common case will be the arrays have different lengths, and so you could bail out immediately. Even in the case where they are the same length, doing it like this forces traversal of both arrays to build up strings, and then compares those strings. Compared to just traversing the array once and comparing each item.

I mean, this isn't particularly hot code (I hope), so it doesn't matter beyond the comment being wrong.

So, even though you didn't write it, while you're here, can you make the comment not claim this is for perf?
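For concreteness, the element-wise comparison described above could look like this — a sketch, not code from the patch:

```javascript
// Bail out immediately on a length mismatch (the common case), then
// compare items in a single pass - no string coercion needed.
function arrayEquals(a, b) {
  return a.length === b.length && a.every((item, i) => item === b[i]);
}
```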

::: services/sync/modules/record.js:621
(Diff revision 1)
>    _rebuildURL: function Coll__rebuildURL() {
>      // XXX should consider what happens if it's not a URL...
>      this.uri.QueryInterface(Ci.nsIURL);
>  
>      let args = [];
> -    if (this.older)
> +    if (this.older) {

Can you either add braces to the rest of these, or not add them here (preferably the first, but I care more that it's consistent than how it's consistent).

::: services/sync/tests/unit/head_http_server.js:334
(Diff revision 1)
>            throw new Error("Unknown sort order");
>          }
> -        // If the client didn't request a sort order, sort newest first,
> -        // since `test_history_engine` currently depends on this.
> -        data.sort((a, b) => b.modified - a.modified);
> +        // If the client didn't request a sort order, shuffle the records
> +        // to ensure that tests don't accidentally depend on the default
> +        // order.
> +        shuffle(data);

This is clever. IIRC golang does this for its hashtable iterators.

One exceedingly small nit, though: I think we care more whether the code being tested depends on the order than whether the tests themselves do, so maybe s/ensure that tests/ensure that we/ in the comment.

::: toolkit/components/places/PlacesSyncUtils.jsm:28
(Diff revision 1)
>   * records. The calls are similar to those in `Bookmarks.jsm` and
>   * `nsINavBookmarksService`, with special handling for smart bookmarks,
>   * tags, keywords, synced annotations, and missing parents.
>   */
> -var PlacesSyncUtils = {};
> +var PlacesSyncUtils = {
> +  /**

It's surprising to me that Firefox doesn't have a more general set of utility JS array functions that could house things like this, the shuffle from the previous patch, and a number of Sync's util.js functions (arrayUnion and friends, for example -- although probably less idiosyncratic implementations of those).

Maybe someday someone will get fed up and add toolkit/modules/ArrayUtils.jsm or something. Oh well.
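As an example of the kind of less idiosyncratic implementation I mean — hypothetical, not Sync's actual `Utils.arrayUnion`:

```javascript
// Order-preserving union via Set: concatenate the inputs, then let Set
// deduplicate them in a single pass.
function arrayUnion(...arrays) {
  return [...new Set([].concat(...arrays))];
}
```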
Attachment #8924472 - Flags: review?(tchiovoloni) → review+
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review201064

> Don't we already have one of these somewhere? I guess just in test_sync_utils...

I dug some more, and found https://searchfox.org/mozilla-central/rev/423b2522c48e1d654e30ffc337164d677f934ec3/testing/modules/TestUtils.jsm, which looks perfect for our needs. Thanks!
Comment on attachment 8871999 [details]
Bug 1368209 - Add support for `sortindex` and `older` to the mock Sync server.

https://reviewboard.mozilla.org/r/143534/#review147370

> Although your patch doesn't introduce it, ISTM that adjusting lastModified here *might* be wrong, particularly in the isShutdownException case - specifically, we might have bumped this, then hit a shutdown trying to apply the items, and fail to write them to the failed list, so the next sync will not find them there, and will not re-fetch them, as we've already bumped lastModified.
> 
> Not sure though...

Now that I understand this function better, I think this is still correct. `lastModified` is in-memory only, and we won't fast-forward `lastSync` until we've added all IDs to the backlog and queued a write. Bug 1366067 should make this more robust by registering a shutdown blocker, to ensure the write actually finishes. There was an issue where we'd read and clear the previous failed IDs at the beginning of the sync, but I moved this to just before the third stage.
Comment on attachment 8924472 [details]
Bug 1368209 - Refactor `Engine::_processIncoming` into three stages.

https://reviewboard.mozilla.org/r/195754/#review201074

Thanks for the reviews!

> I'm not sure it's worth renaming this. toFetch is reasonably clear, and if we name it backlog then the folder we save them into will have a different names than the variable.
> 
> I suspect we don't want to add complexity for handling a rename of that folder also.
> 
> If you feel very strongly about this, I suppose using a constant for the folder name would probably be good enough, although I still feel that it would be a little confusing that the name was different in the profile.

Not at all strongly. You bring up a good point about the folder name being different, and I certainly don't want to add logic to rename it. Reverted.

> Uh, hm. This was already this way, but is it okay that we're stomping on the existing data here, instead of appending?

I think we append: the line above does `engine.toFetch.concat(...)`.

> I doubt this is more efficient than most implementations of a function like `arrayEquals`.
> 
> The common case will be the arrays have different lengths, and so you could bail out immediately. Even in the case where they are the same length, doing it like this forces traversal of both arrays to build up strings, and then compares those strings. Compared to just traversing the array once and comparing each item.
> 
> I mean, this isn't particularly hot code (I hope), so it doesn't matter beyond the comment being wrong.
> 
> So, even though you didn't write it, while you're here, can you make the comment not claim this is for perf?

Agreed. I renamed `backlog` back to `toFetch`, so this part of the code doesn't change at all, and I'll remove it entirely in bug 1366067. Soon!
Pushed by kcambridge@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/4b8e56844ae9
Add support for `sortindex` and `older` to the mock Sync server. r=tcsc
https://hg.mozilla.org/integration/autoland/rev/b88c681ccdc1
Add a test for fetching backlogged history records. r=tcsc
https://hg.mozilla.org/integration/autoland/rev/1b868efa368f
Refactor `Engine::_processIncoming` into three stages. r=eoger,tcsc
It turns out the record shuffling I added to the test server, to make sure we weren't depending on record order, caught a test that depended on the order. :-) Fixed and verified in chaos mode: https://treeherder.mozilla.org/#/jobs?repo=try&revision=33cc174821b8c5018edafc2ee35f9da4d7714ffd
Flags: needinfo?(kit)
Pushed by kcambridge@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/c60dd6b7be17
Add support for `sortindex` and `older` to the mock Sync server. r=tcsc
https://hg.mozilla.org/integration/autoland/rev/88e0245c3c1e
Add a test for fetching backlogged history records. r=tcsc
https://hg.mozilla.org/integration/autoland/rev/bf2c707a1a16
Refactor `Engine::_processIncoming` into three stages. r=eoger,tcsc
Blocks: 1414435
Blocks: 1414436
Blocks: 1414437
Blocks: 1414438
Depends on: 1494848