Closed Bug 735102 Opened 12 years ago Closed 10 years ago

Figure out how to truncate and/or paginate results

Categories

(Cloud Services Graveyard :: Server: Sync, defect)

x86_64
Linux
defect
Not set
normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: rfkelly, Assigned: rfkelly)

References

Details

(Whiteboard: [qa?][sync2.1])

Attachments

(3 files, 2 obsolete files)

We need a spec-compliant way for the server to give up on returning all the items in a collection, to avoid the current situation where it tries (and fails) to return many hundreds of thousands of records.

Ops have indicated that doing *something* to handle this case is a very strong operational need.

Simplest thing that could possibly work: Truncate.  We just truncate results at a certain point and return a flag indicating that this happened.  Clients are left to figure out how to deal with this on their own.  If they can't do anything useful in this situation, they're out of luck.

More complicated: Paginate.  We truncate results at a certain point and return an opaque 'offset' token.  Clients can re-issue their request with this additional parameter to retrieve the next batch of results.  Internally this is implemented by enforcing a total ordering over the items for any query, and returning a compound key giving the unique "next item" in this ordering.  Requires care from both client and server to ensure that items aren't lost due to updates while the client is paging through.
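
For example (a sketch with illustrative names), ordering items by (modified, id) turns each page into a plain range query driven by the compound key:

    SELECT * FROM items
    WHERE collection = :collection
      AND (modified, id) > (:last_modified, :last_id)
    ORDER BY modified, id
    LIMIT :page_size

(The row-value comparison may need to be expanded into AND/OR form for the database to actually use an index; more on that below.)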

Thoughts?

(I know we *eliminated* a lot of ideas in discussion on services-dev, but we didn't actually come up with something useful to do in this situation)
Blocks: 720964
Whiteboard: [qa?]
Whiteboard: [qa?] → [qa?][sync2.1]
Since this is marked sync2.1, I am removing it as a blocker for sync2.0
No longer blocks: 720964
The RESTful way to do this is for the server to advertise a URL that when queried will return the next "page."

I think I'd be happy with just having the server provide the query string with the appropriate values to a GET /storage/<collection> request. 

Unfortunately, we don't have a full "offset" query parameter (any more), only timestamp and index sorting. There could be multiple records with the same values, so paging would fall apart here. Can we bring back a "reliable" "offset" parameter?
As stated on IRC, I'm not too worried about correctness guarantees in the case where the server is mutated between requests, because properly behaving clients will issue X-I-U-S for everything. If the offset is constant as long as the server isn't mutated, I'm happy.
Previous services-dev discussion on this topic, for completeness:

  https://mail.mozilla.org/pipermail/services-dev/2012-February/001002.html
It's been too long since I last talked about the operational implications of this, but I don't think using a simple LIMIT/OFFSET approach will be sufficient for truly huge collections.

Supposing the client sends GET /storage/collection?limit=100&offset=10000, this will translate to a DB query like:

    SELECT * FROM wbo WHERE collection=:collectionid AND userid=:userid ORDER BY <whatever> LIMIT 100 OFFSET 10000

The problem IIRC is that we can't use an index to efficiently jump to offset 10000; the database actually has to sort all those rows and page through them to get to the results you want.

This may be acceptable if we're happy to declare a "don't store too many items in your collection" constraint.

Alternately, with a server-generated offset cursor, we can embed "next value" fields so that it results in a query more like:

    SELECT * FROM wbo WHERE collection=:collectionid AND userid=:userid AND modified > <recently-seen-value> ORDER BY <whatever> LIMIT 100

Then we can use the indexed columns to get to the desired results more efficiently.  Obviously the server will have to be very careful if the limit happens to split at rows with the same timestamp, sortindex etc.  But I think it should be doable.

So I'm thinking:

  * if you don't specify ?limit on your GET, then the behaviour stays as it currently is
  * if you do specify ?limit on your GET, then the server will construct and return an offset token
      * in a header?
      * in the JSON response body?
  * subsequent requests can pass both ?limit and ?offset with their GET to page through results
  * the client is responsible for sending X-If-Unmodified-Since to ensure that the offset is still valid
      * should we double-check this on the server side?
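
Concretely, the exchange might look like this (assuming, for illustration, that the token comes back in an X-Next-Offset response header):

    GET /storage/history?limit=100
    --> 200 OK, 100 items, X-Next-Offset: <opaque token>

    GET /storage/history?limit=100&offset=<opaque token>
    X-If-Unmodified-Since: <timestamp from the first response>
    --> 200 OK, next items, plus X-Next-Offset if more remain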
Adding this link for my own reference, has some good discussion of this style of paging:

 http://use-the-index-luke.com/sql/partial-results/fetch-next-page
(In reply to Ryan Kelly [:rfkelly] from comment #5)
> The problem IIRC is that we can't use an index to efficiently jump to offset
> 10000, the database actually has to sort all those rows and page through
> them to get to the results you want.

Well, that's a bummer.

> Alternately, with a server-generated offset cursor, we can embed "next
> value" fields so that it results in a query more like:
> 
>     SELECT * FROM wbo  WHERE collection=:collectionid AND userid=:userid AND
> modified > <recently-seen-value> ORDER BY <whatever> LIMIT 100
> 
> Then we can use the indexed columns to get to the desired results more
> efficiently.  Obviously the server will have to be very careful if the limit
> happens to split at rows with the same timestamp, sortindex etc.  But I
> think it should be doable.

So there would still be a problem if a large number (possibly every) BSO in the collection had identical timestamps or sortindexes. I don't suppose the DB has a primary key that could break ties and allow offsetting into identical data sets?

 
> So I thinking:
> 
>   * if you don't specify ?limit on your GET, then the behaviour stays as it
> currently is

I think it would be nice for a client to know that returned data is partial. In theory, data is streamed over the wire all the way through and bits have been released by the web head before the database timeout hits. It would be nice for the web head to be able to communicate this somehow. Trailing HTTP header, perhaps? (I can't believe I'm actually advocating using trailers.)

>   * if you do specify ?limit on your GET, then the server will construct and
> return an offset token
>       * in a header?
>       * in the JSON response body?

Either would be fine. How would this work with newline-delimited responses? FWIW, Firefox Sync will switch to newline-delimited for everything so it can start invoking processing as soon as the first \n is received.

>   * subsequent requests can pass both ?limit and ?offset with their GET to
> page through results

Sounds like a plan!

>   * the client is responsible for sending X-If-Unmodified-Since to ensure
> that the offset is still valid
>       * should we double-check this on the server side?

I plan on doing this anyway. I suppose you /could/ embed this value in the token so you don't have to persist temporary data on the server. I don't think it is necessary for the server to validate it though - X-I-U-S should be sufficient.
(In reply to Gregory Szorc [:gps] from comment #7)
> > Alternately, with a server-generated offset cursor, we can embed "next
> > value" fields so that it results in a query more like:
> > 
> >     SELECT * FROM wbo  WHERE collection=:collectionid AND userid=:userid AND
> > modified > <recently-seen-value> ORDER BY <whatever> LIMIT 100
> > 
> > Then we can use the indexed columns to get to the desired results more
> > efficiently.  Obviously the server will have to be very careful if the limit
> > happens to split at rows with the same timestamp, sortindex etc.  But I
> > think it should be doable.
> 
> So there would still be a problem if a large number (possibly every) BSO in
> the collection had identical timestamps or sortindexes.

This is not possible, because you can only upload X many BSOs at a time.  Currently X is 100, so there will be no more than 100 items in a collection with the same modified time.

> I don't suppose the
> DB has a primary key that could break ties and allow offsetting into
> identical data sets?

It does not.  But I was thinking of adding the item id as a secondary order-by field, like this:

  SELECT * FROM wbo WHERE [blah] ORDER BY modified, id

And then including the last-seen id in the offset token as well, so that your offset-including query becomes:

  SELECT * FROM wbo WHERE [blah] AND modified >= [last-seen-ts] AND (id > [last-seen-id] OR modified > [last-seen-ts]+1) ORDER BY modified, id

Or something like that.
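
Or, written out in the textbook "seek" form (same intent, without the +1 fudge):

  SELECT * FROM wbo WHERE [blah]
    AND (modified > [last-seen-ts]
         OR (modified = [last-seen-ts] AND id > [last-seen-id]))
  ORDER BY modified, id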

> >   * if you don't specify ?limit on your GET, then the behaviour stays as it
> > currently is
> 
> I think it would be nice for a client to know that returned data is partial.

By "stays as it currently is" I meant "you either get the full result set, or an error response".

> In theory, data is streamed over the wire all the way through and bits have
> been released by the web head before the database timeout hits.

In practice, the entire response is built up in memory before being returned to the client.  Bug 694221.

To be clear, the timeout is not currently generated in the database.  It's generated by the loadbalancer if it does not get a timely response from the downstream app.  So I don't think there's the possibility of timing out an in-progress response.

> It would be
> nice for the web head to be able to communicate this somehow. Trailing HTTP
> header, perhaps? (I can't believe I'm actually advocating using trailers.)

The server sends an X-Num-Records header telling you how many results to expect, which can be used to detect truncated responses.  Again though, this partially defeats streaming because it requires the server to know how many records will be sent.

It would be good to enable full-on streaming of rows.  Since we couldn't send a content-length header in this scenario anyway, chunked-encoding with X-Num-Records in a trailer may be quite a sensible solution.
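
For illustration, such a response would look roughly like this on the wire (chunk-size framing elided):

  HTTP/1.1 200 OK
  Content-Type: application/newlines
  Transfer-Encoding: chunked
  Trailer: X-Num-Records

  {record}
  {record}
  ...
  0
  X-Num-Records: 12345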

> >   * if you do specify ?limit on your GET, then the server will construct and
> > return an offset token
> >       * in a header?
> >       * in the JSON response body?
> 
> Either would be fine. How would this work with newline-delimited responses?
> FWIW, Firefox Sync will switch to newline-delimited for everything so it can
> start invoking processing as soon as the first \n is received.

Oh, right, newlines.  I was thinking of POST where we already use a JSON body to send multiple pieces of data.  It will have to be a header.

> >   * the client is responsible for sending X-If-Unmodified-Since to ensure
> > that the offset is still valid
> >       * should we double-check this on the server side?
> 
> I plan on doing this anyway. I suppose you /could/ embed this value in the
> token so you don't have to persist temporary data on the server. I don't
> think it is necessary for the server to validate it though - X-I-U-S should
> be sufficient.

I tend to agree that validating it server-side is overkill.  If the client expects consistent results from multiple requests without X-I-U-S, that's the client's problem.
(In reply to Ryan Kelly [:rfkelly] from comment #8)
> > >   * the client is responsible for sending X-If-Unmodified-Since to ensure
> > > that the offset is still valid
> > >       * should we double-check this on the server side?
> > 
> > I plan on doing this anyway. I suppose you /could/ embed this value in the
> > token so you don't have to persist temporary data on the server. I don't
> > think it is necessary for the server to validate it though - X-I-U-S should
> > be sufficient.
> 
> I tend to agree that validating it server-side is overkill.  If the client
> expects consistent results from multiple requests without X-I-U-S, that's
> the client's problem.

Or this could be as simple as sending a 400 Bad Request if you specify offset without X-I-U-S.
(In reply to Ryan Kelly [:rfkelly] from comment #8)
> > So there would still be a problem if a large number (possibly every) BSO in
> > the collection had identical timestamps or sortindexes.
> 
> This is not possible, because you can only upload X many BSOs at a time. 
> Currently X is 100, so there will be no more than 100 items in a collection
> with the same modified time.

I forgot about this. Nice!

I think everything else in your comments makes perfect sense and trust you to do the right thing.

I think the changes required to support true streaming can be pushed back to version 2.1 of the protocol. I think the big streaming wins involve the pipe between the web heads and the client, not inside the data center.
(In reply to Ryan Kelly [:rfkelly] from comment #8)
> This is not possible, because you can only upload X many BSOs at a time. 
> Currently X is 100, so there will be no more than 100 items in a collection
> with the same modified time.

To point out an edge case that probably is NOT relevant, but remains interesting: if 10 clients simultaneously post 100 objects, they would all get the same timestamp, yes?

> > I don't suppose the
> > DB has a primary key that could break ties and allow offsetting into
> > identical data sets?
> 
> It does not.  But I was thinking of adding the item id as a secondary
> order-by field, like this:
> 
>   SELECT * FROM wbo WHERE [blah] ORDER BY modified, id
> 
> And then including the last-seen id in the offset token as well, so that
> your offset-including query becomes:
> 
>   SELECT * FROM wbo WHERE [blah] AND modified >= [last-seen-ts] AND (id >
> [last-seen-id] OR modified > [last-seen-ts]+1) ORDER BY modified, id
> 
> Or something like that.

Primary key is automatically and secretly appended to indexes, but it's unclear if ORDER BY can access it for an "index sort" or not.

> In practice, the entire response is built up in memory before being returned
> to the client.  Bug 694221.

This will cause problems for some clients behind NAT, as ..

> To be clear, the timeout is not currently generated in the database.  It's
> generated by the loadbalancer if it does not get a timely response from the
> downstream app.  So I don't think there's the possibility of timing out an
> in-progress response.

.. we have very long timeouts set in the backends (560 seconds before gunicorn aborts, 575 seconds before nginx proxy aborts, 600 seconds before zeus aborts), and various network devices between us and them that are not under our control may simply presume the connection is dead if we are not streaming responses.
(In reply to Richard Soderberg [:atoll] from comment #11)
> (In reply to Ryan Kelly [:rfkelly] from comment #8)
> > This is not possible, because you can only upload X many BSOs at a time. 
> > Currently X is 100, so there will be no more than 100 items in a collection
> > with the same modified time.
> 
> To point out an edge case that probably is NOT relevant, but remains
> interesting: if 10 clients simultaneously post 100 objects, they would all
> get the same timestamp, yes?

Currently yes, but I'm in the process of killing that with Bug 764214.  By the time this goes live there will be strictly sequential writes to each collection.

> > > I don't suppose the
> > > DB has a primary key that could break ties and allow offsetting into
> > > identical data sets?
> > 
> > It does not.  But I was thinking of adding the item id as a secondary
> > order-by field, like this:
> > 
> >   SELECT * FROM wbo WHERE [blah] ORDER BY modified, id
> > 
> Primary key is automatically and secretly appended to indexes, but it's
> unclear if ORDER BY can access it for an "index sort" or not.

That will be very interesting if it can!  But even if not, I am imagining this sorting to still be pretty efficient due to the small-number-of-ids-for-each-timestamp thing mentioned above.

IOW, I'm imagining that "ORDER BY indexed_column, unindexed_column" can still take advantage of the index to do the bulk of the work.  But my imagination doesn't always align with reality on these matters...
ORDER BY indexed, unindexed dumps the entire un-limited result set into a temporary table, sorts it, and then calculates your LIMIT.

If this was a true queue, and we could only scan it from oldest to newest, the only way to implement pagination on arbitrary sorts would be to scan every row from oldest to newest, ordered by and offset by key.

I operated a MySQL table once with 300 million time-ordered rows and ~20 daemons with arbitrarily-strange filters that couldn't be expressed in SQL. We implemented it using this algorithm, approximately:


@result_set = [ ];
$seconds_per_batch = 60;
$data_count = 0;
$stop_after_rows = 1000;
$stop_at_epoch = time.epoch + 120;
$requested_offset = ______ or 0;

$max_modified = (select max(modified) where u=?, c=?);

$current_offset = $requested_offset;
$future_offset = 0;

while $future_offset < $max_modified
      and $next_offset := (select modified where u=?, c=?,
                           modified > $current_offset
                           order by modified limit 1) {
  # Bite off at most a few seconds of the timeline per batch.
  $future_offset = min($next_offset + $seconds_per_batch, $max_modified);
  foreach $data_row := (select data, modified where u=?, c=?,
                        modified between $next_offset and $future_offset
                        order by ______) {
    push @result_set, $data_row;
    $data_count++;
    $future_offset = $data_row.modified;
  }
  # Resume the next batch after the last row actually processed.
  $current_offset = $future_offset;
  break if $data_count >= $stop_after_rows;
  break if time.epoch > $stop_at_epoch;
}

return @result_set, $future_offset;


In short, this permits filtering the entire dataset in indexed order, no matter how few or how many rows are present, even if only 2 rows out of a million match. $seconds_per_batch ensures that we only process a few seconds of database at a time, $next_offset := (select ... limit 1) ensures that we skip large gaps using an index-optimized query, and $stop_at_epoch ensures that the process stops cleanly within a reasonable time.

This algorithm scaled for us up to ~300 million rows before we hit 7 disk block reads per primary-key lookup, at which point we added pruning. Sync has fewer rows per table, and there's no harm in doing lots of fast indexed selects to enable true pagination.

Buried in this is the assumption that we can say to the client data_rows = [ empty list ], next_offset = [ some value ], because we couldn't find anything in the time allotted, but the client should continue paginating anyways. So it's critical that a pagination API define "pagination required, here's your new offset" without using the shortcut of "no rows returned".
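
Translating that loop into Python for concreteness (a sketch only: the wbo(userid, collection, modified, data) table, the DB-API cursor, and the `keep` callback standing in for our arbitrary non-SQL filters are all assumptions):

import time

def paged_fetch(cur, userid, collection, requested_offset=0,
                seconds_per_batch=60, stop_after_rows=1000,
                time_budget=120, keep=lambda row: True):
    results = []
    deadline = time.time() + time_budget

    cur.execute("SELECT MAX(modified) FROM wbo"
                " WHERE userid=? AND collection=?", (userid, collection))
    max_modified = cur.fetchone()[0] or 0

    current_offset = requested_offset
    future_offset = 0

    while future_offset < max_modified:
        # Index-friendly skip over empty stretches of the timeline.
        cur.execute("SELECT modified FROM wbo WHERE userid=? AND collection=?"
                    " AND modified > ? ORDER BY modified LIMIT 1",
                    (userid, collection, current_offset))
        row = cur.fetchone()
        if row is None:
            break
        next_offset = row[0]
        # Bite off at most a few seconds of data per batch.
        future_offset = min(next_offset + seconds_per_batch, max_modified)
        cur.execute("SELECT data, modified FROM wbo WHERE userid=?"
                    " AND collection=? AND modified BETWEEN ? AND ?"
                    " ORDER BY modified",
                    (userid, collection, next_offset, future_offset))
        for data, modified in cur.fetchall():
            if keep((data, modified)):
                results.append(data)
            future_offset = modified
        current_offset = future_offset
        if len(results) >= stop_after_rows or time.time() > deadline:
            break

    # An empty result set with an advanced offset is a valid outcome;
    # the caller must keep paginating until told otherwise.
    return results, future_offset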
(In reply to Ryan Kelly [:rfkelly] from comment #12)
> IOW, I'm imagining that "ORDER BY indexed_column, unindexed_column" can
> still take advantage of the index to do the bulk of the work.  But my
> imagination doesn't always align with reality on these matters...

Yep, my intuition above is probably not accurate according to: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html

But MySQL does appear to do the right thing here, using the hidden primary key fields from the index.  Compare:

  mysql> EXPLAIN SELECT * FROM wbo0 where username=1 and collection=1 and modified > 12 ORDER BY modified, id;
  +-------------+-------+...+-----------------+...+-------------+
  | select_type | table |...| key             |...| Extra       |
  +-------------+-------+...+-----------------+...+-------------+
  | SIMPLE      | wbo0  |...| usr_col_mod_idx |...| Using where |
  +-------------+-------+...+-----------------+...+-------------+

Versus adding a field that's not in the index:

  mysql> EXPLAIN SELECT * FROM wbo0 where username=1 and collection=1 and modified > 12 ORDER BY modified, ttl;
  +-------------+-------+...+---------+...+-----------------------------+
  | select_type | table |...| key     |...| Extra                       |
  +-------------+-------+...+---------+...+-----------------------------+
  | SIMPLE      | wbo0  |...| PRIMARY |...| Using where; Using filesort |
  +-------------+-------+...+---------+...+-----------------------------+


We may need to turn the index on (userid, collection, sortindex) into one on (userid, collection, sortindex, modified) to make this operate well when results are sorted by sortindex.
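
Sketched as DDL (index name illustrative):

  ALTER TABLE wbo0
      ADD INDEX usr_col_srt_mod_idx (username, collection, sortindex, modified);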
Some additional notes from IRC:
<atoll> rfkelly: one thing i forgot to mention - if you separate 2 matching rows by a million non-matching rows, LIMIT 2 is useless
<atoll> and that's how i ended up with BETWEEN $curr and $future_timestamp
<atoll> it's the only way to ensure you have any hope of returning in constant-ish time
Blocks: 775395
Attaching my first attempt at formalizing the client-facing API for this.  

What's nice about making the "offset" value an opaque string is that we can try out different things on the server without affecting the client.  For example, the JS test server can just use integer offsets, while the MySQL backend can use some fancy paging like :atoll describes above.
Assignee: nobody → rfkelly
Attachment #645170 - Flags: review?(gps)
Comment on attachment 645170 [details] [diff] [review]
docs patch describing server-driven paging API

Review of attachment 645170 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good to me.

::: source/storage/apis-2.0.rst
@@ +649,5 @@
> +    next_offset = r.headers.get("X-Next-Offset")
> +
> +    while next_offset:
> +        headers = {"X-If-Unmodified-Since": last_modified}
> +        r = server.get("/collection?limit=100&offset=" + next_offset, headers)

Should probably percent encode in this example.
Attachment #645170 - Flags: review?(gps) → review+
> @@ +649,5 @@
> > +    next_offset = r.headers.get("X-Next-Offset")
> > +
> > +    while next_offset:
> > +        headers = {"X-If-Unmodified-Since": last_modified}
> > +        r = server.get("/collection?limit=100&offset=" + next_offset, headers)
> 
> Should probably percent encode in this example.

In the interests of making this as easy as possible to get right, what if we mandate that the server can only produce X-Next-Offset values that are urlsafe strings?
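
For example (purely illustrative; nothing here mandates a particular encoding), a backend could pack its compound key like so:

    import base64

    def encode_offset(last_ts, last_id):
        # base64url output uses only [A-Za-z0-9_-], so the token can be
        # dropped into a query string without percent-encoding.
        raw = ("%s:%s" % (last_ts, last_id)).encode("utf-8")
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")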
Attaching first implementation of this idea.

This implementation uses integer offsets and passes them straight through to an "OFFSET X" clause in the SQL query.  As discussed in Comment 5, this will work but is not optimal.

I think it makes sense to land this initial version so that the client has something to develop against and to shake out any API problems.  As long as the offset tokens are treated as opaque strings, we should be able to transparently upgrade the server to a more efficient scheme like the one in Comment 13 without breaking anything from the client's POV.
Attachment #645615 - Flags: review?(telliott)
A couple concerns on this patch:

1) 
+        try:
+            if full:
+                res = storage.get_items(user_id, collection_name, **filters)
+            else:
+                res = storage.get_item_ids(user_id, collection_name, **filters)
+        except ValueError:
+            raise HTTPBadRequest("Invalid query parameter.")

The addition of this try block seems to be a ridealong, as I can't see where it's relevant in the rest of the patch. I'm also not sure that we should be making query parameter assessments this low in the stack, so I may be missing something

2) With the new limit+1 to determine offsets, do we now need to do an explicit check that limit > 0?

3) We'll need to do a ton of loadtesting on the new sorting to allow consistent offsets. It may be more efficient to give all the items different timestamps or something.
(In reply to Toby Elliott [:telliott] from comment #21)
> +        try:
> +            if full:
> +                res = storage.get_items(user_id, collection_name, **filters)
> +            else:
> +                res = storage.get_item_ids(user_id, collection_name,
> **filters)
> +        except ValueError:
> +            raise HTTPBadRequest("Invalid query parameter.")
> 
> The addition of this try block seems to be a ridealong, as I can't see where
> it's relevant in the rest of the patch. I'm also not sure that we should be
> making query parameter assessments this low in the stack, so I may be
> missing something

It's not a ridealong, it was added to handle badly-formed offset tokens.  But it is certainly inelegant.

The difficulty is that "offset" is an opaque token, and different backends may put different things in there.  So the controller can't sanity-check the query parameter by e.g. converting it to an integer.  Instead it just passes through whatever value of "offset" was provided by the user, lets the backend try to parse and use it, and catches the ValueError if that token turns out to be badly-formed.

We could take a few different approaches here.  For example, we could raise a specific InvalidOffsetToken error so that it's clearer what is going on.  Or we could add a "parse_offset_token" method to each backend, which the controller explicitly calls while it is validating the query parameters.
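
A minimal sketch of the custom-exception option (names assumed), layered over the controller code quoted above:

  class InvalidOffsetError(ValueError):
      """Raised by a backend when an offset token cannot be parsed."""

  # In the controller:
  try:
      if full:
          res = storage.get_items(user_id, collection_name, **filters)
      else:
          res = storage.get_item_ids(user_id, collection_name, **filters)
  except InvalidOffsetError:
      raise HTTPBadRequest("Invalid offset token.")

Subclassing ValueError keeps any existing "except ValueError" handlers working while letting the controller produce a more specific error message.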


> 2) With the new limit+1 to determine offsets, do we now need to do an
> explicit check that limit > 0?

Yes, we should probably be doing this anyway.

> 3) We'll need to do a ton of loadtesting on the new sorting to allow
> consistent offsets. It may be more efficient to give all the items different
> timestamps or something.

Indeed.  I am confident that MySQL will do the right thing here, but we should carefully verify it.
(In reply to Ryan Kelly [:rfkelly] from comment #22)
> It's not a ridealong, it was added to handle badly-formed offset tokens. 
> But it is certainly inelegant.
> 

My concern is that we're adding an exception catch that the patch doesn't throw anywhere, which means that either we're piggybacking on something else, or it should have been there already.


> The difficulty is that "offset" is an opaque token, and different backends
> may put different things in there.  So the controller can't sanity-check the
> query parameter by e.g. converting it to an integer.  Instead it just passes
> through whatever value of "offset" was provided by the user, lets the
> backend try to parse and use it, and catches the ValueError if that token
> turns out to be badly-formed.

Processing of an opaque token feels like it ought to have a different exception, since it could actually be our fault. "Invalid Query parameter" is a pretty unhelpful error and could lead folks down the wrong path. This one should probably have its own "Malformed offset token" explanation.
(In reply to Toby Elliott [:telliott] from comment #23)
> My concern is that we're adding an exception catch that the patch doesn't
> throw anywhere, which means that either we're piggybacking on something
> else, or it should have been there already.

Right, the ValueError was being thrown from the newly-added calls to int(offset). 

> Processing of an opaque token feels like it ought to have a different
> exception, since it could actually be our fault. "Invalid Query parameter"
> is a pretty unhelpful error and could lead folks down the wrong path. This
> one should probably have its own "Malformed offset token" explanation.

I've updated the patch with a custom exception type and corresponding custom error message, as well as an explicit check that "limit" is a positive integer.
Attachment #645615 - Attachment is obsolete: true
Attachment #645615 - Flags: review?(telliott)
Attachment #647835 - Flags: review?(telliott)
Comment on attachment 647835 [details] [diff] [review]
updated patch for initial implementation of server-generated offset

r+ with the trivial note that if we're calling it an offset token in all the docs we should call it an offset token in the error message.
Attachment #647835 - Flags: review?(telliott) → review+
Committed with that change in https://github.com/mozilla-services/server-syncstorage/commit/84d38ed3678d9c573ac46f8e8fac778b556f39e8

I will leave the bug open while we explore the more efficient implementation options suggested above.
Blocks: 1007987
Attached patch sync15-index-driven-offset.diff (obsolete) — Splinter Review
Here's an index-driven implementation of the offset-token idea, suitable for sync1.5.

All fetches are ordered by either (modified, id) or (sortindex, id).  The offset token is an "integer:string" pair giving the next item in the sequence.  When we see one, we add the equivalent of "(modified, id) <= (offset token)" to the query so that it picks up where the previous query left off.

This also includes the index fix from Bug 1007987.
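
For a newest-first fetch, the generated query is roughly (a sketch with assumed table/column names; the direction flips for ascending sorts):

  SELECT id, modified, sortindex, payload FROM bso
  WHERE userid = :userid AND collection = :collectionid
    AND (modified < :token_ts
         OR (modified = :token_ts AND id <= :token_id))
  ORDER BY modified DESC, id DESC
  LIMIT :limit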

Some notes:

  * I changed some "params.pop()" to "params.get()" so that we can see what the params were after the query has been executed; this is necessary to determine which column should be used when encoding next_offset.

  * the existing tests cover the use of limit/offset so I haven't added any extras, but if there are any particularly concerning edge-cases then I could do so.

  * We could add an extra sanity-check to the token, to prevent a (modified,id) offset being incorrectly used with a sort=index query and vice-versa.  This seems unlikely in practice though.
Attachment #8440527 - Flags: review?(telliott)
Per Bug 1025735, MySQL is no longer automagically doing the right thing when we order by the item id as a secondary column.  Here's an alternative that should be almost as good but doesn't depend on ordering over id: basically it uses an upper bound on the timestamp to exclude the bulk of the previously-seen results, then a smaller numeric offset relative to that position.

We can get away with this because we know that at most 100 items will have an identical timestamp.
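
The resulting query is then roughly (again with assumed names; the token encodes both the timestamp bound and the small residual offset):

  SELECT id, modified, sortindex, payload FROM bso
  WHERE userid = :userid AND collection = :collectionid
    AND modified <= :token_ts
  ORDER BY modified DESC
  LIMIT :limit OFFSET :small_offset

Since at most 100 rows can share a timestamp, :small_offset is bounded, so the rows skipped by OFFSET stay cheap regardless of collection size.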
Attachment #8440527 - Attachment is obsolete: true
Attachment #8440527 - Flags: review?(telliott)
Attachment #8441175 - Flags: review?(telliott)
Comment on attachment 8441175 [details] [diff] [review]
sync15-index-driven-offset.diff

Review of attachment 8441175 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good.
Attachment #8441175 - Flags: review?(telliott) → review+
https://github.com/mozilla-services/server-syncstorage/commit/b178b8c3dd40f61eb92d1d28792c504929bece94

This should be good enough for our purposes, closing out this now-ancient bug.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Is this to be part of a future deployment? Or something I can just verify and close?
It's likely to be off by default on our deployments, so I think you can just verify.
Product: Cloud Services → Cloud Services Graveyard