Closed Bug 1401990 Opened 3 years ago Closed 3 years ago

Use max_request_bytes instead of max_post_bytes on desktop if both are provided.


(Firefox :: Sync, enhancement, P1)

Firefox 58
Tracking Status
firefox58 --- fixed

(Reporter: tcsc, Assigned: tcsc)

(1 file)

This is sort of a followup for bug 1401707, which (among other things) adds code where the server sends max_request_bytes every time, even if batching support is on.

At the moment, we (desktop; IIRC this doesn't apply to the other clients) treat max_post_bytes as if it were max_request_bytes. They're different but related[0]: doing the right thing for max_post_bytes is also enough for max_request_bytes, as long as max_post_bytes is <= max_request_bytes.

I considered opening another bug for removing the old-style config support and properly supporting max_post_bytes, since eventually (probably in a few months at a minimum) the batching API is unlikely to ever be turned off. But this is probably enough, especially since (AIUI) the server cares more about max_request_bytes than max_post_bytes -- that value is enforced (with some slop) by nginx.
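As a rough sketch of the precedence proposed here -- a hypothetical helper, not the actual record.js code; the 260 KiB default is modeled on the old desktop fallback:

```javascript
// Hypothetical sketch, not the actual services/sync/modules/record.js code.
// Prefer max_request_bytes when the server sends it (that's the limit nginx
// actually enforces); otherwise fall back to max_post_bytes, then to a
// conservative default.
function effectiveRequestLimit(serverConfig) {
  const getConfig = (name, fallback) =>
    name in serverConfig ? serverConfig[name] : fallback;
  return getConfig("max_request_bytes",
                   getConfig("max_post_bytes", 260 * 1024));
}
```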

[0]: Since I've done this math out, I feel compelled to write it here, even though it's not 100% relevant to the bug. They're related in that the maximum "overhead" size for a record is about 162 bytes[1]: something like this is the maximum-size version of the parts of the JSON object that are pure overhead: `[{"id":"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA","modified":999999999999999934463.99,"sortindex":123456789,"payload":"","ttl":123456789}]`. So with max_post_records == 100 (which, AFAICT, is the current value), the maximum amount of overhead we could have is 16200 bytes -- or really 16101, since all records except the last add only 1 byte of array overhead (a comma), and the last adds 2 (the enclosing brackets). Sadly, that's more than the desktop client assumes the slop can be; although now that batching is enabled everywhere, this doesn't really matter (it would only matter if we *were* using max_request_bytes, and even then probably not really...).

[1]: It's worth noting that the maximum realistic value is smaller: while the value I've used for modified is the theoretical maximum float we can format with 2 decimal places from JS without it converting to scientific notation, that's a date many, many, *many* millennia in the future, and so completely outside the realm of realistic dates. The realistic value is around 150 bytes, but the distinction doesn't seem that important.
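The arithmetic in [0] can be double-checked with a short script; the skeleton string is copied from that footnote, and the 100-record total is 160 bytes of skeleton per record plus n-1 commas plus the 2 enclosing brackets:

```javascript
// Per-record overhead skeleton from footnote [0]: a 64-char id, the widest
// "modified" float we'd format, max-width sortindex/ttl, and an empty payload.
const skeleton = `[{"id":"${"A".repeat(64)}","modified":999999999999999934463.99,` +
                 `"sortindex":123456789,"payload":"","ttl":123456789}]`;
console.log(skeleton.length); // 162 (includes the enclosing brackets)

// For n records in one array: 160 bytes of skeleton each, a comma after all
// but the last, plus the 2 enclosing brackets.
const perRecord = skeleton.length - 2; // strip the brackets => 160
const n = 100; // max_post_records
const totalOverhead = perRecord * n + (n - 1) + 2;
console.log(totalOverhead); // 16101
```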
Assignee: nobody → tchiovoloni
Priority: -- → P1
Comment on attachment 8912911 [details]
Bug 1401990 - Use max_request_bytes instead of max_post_bytes if sync server provides both

I opted to rename what we call the property to match as well, since not doing so would be horrifyingly confusing.

::: services/sync/modules/record.js:823
(Diff revision 2)
>        max_record_payload_bytes: getConfig("max_record_payload_bytes", 256 * 1024),
>      };
> -    if (config.max_post_bytes <= config.max_record_payload_bytes) {
> -      this._log.warn("Server configuration max_post_bytes is too low for max_record_payload_bytes", config);
> +    if (config.max_request_bytes <= config.max_record_payload_bytes) {
> +      this._log.warn("Server configuration max_request_bytes is too low for max_record_payload_bytes",
> +                     this._service.serverConfiguration);

I saw this in TPS, and it occurred to me that what we'd want to see in the log is the actual server config. What we parsed it into isn't indicative of very much, since we don't know where the values came from.

Also, if trace logs are on, we log the real one below.
Comment on attachment 8912911 [details]
Bug 1401990 - Use max_request_bytes instead of max_post_bytes if sync server provides both

LGTM, thanks - let's make sure it's tested though ;)
Attachment #8912911 - Flags: review?(markh) → review+
Comment on attachment 8912911 [details]
Bug 1401990 - Use max_request_bytes instead of max_post_bytes if sync server provides both

Following the discussion on Wednesday (where rfk suggested we would really need to follow the max_post_bytes limit to do this), I'm moving the r+ to r?. This code is completely different anyway.

This might seem (and be) pretty complex in comparison to the previous code (and it was very nearly worse[0]), but I think it's probably easier to follow, less prone to confusion, and easier to extend should (god forbid) we need to enforce additional limits, such as one for memcache-only collections.

It follows all the server limits strictly, and also has the beneficial property of not really caring what they are -- it will respect all of them that it can, even if that means another limit is somewhat weird or meaningless as a result. It also manages to keep effectively the same code for the batching and non-batching cases, as the old version did.

The only major thing it does wrong is that it assumes all server byte limits are exclusive limits, since a) some are/were (and it wasn't clear to me if it was only the one in, or more), b) it's always safer to do this, and c) it's not clear to me what the deployment timeline on that patch is. In an ideal world we wouldn't do this. It does (correctly) treat record limits as inclusive, however.

I've also updated the tests to make them more accurately test what they claim to, and to be more thorough about ensuring all the limits are enforced.

[0]: It's still tempting to me to try to unify these limits with an abstraction (and then store them in an array on PostQueue), but maybe the android code is just rubbing off on me :p
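For reference, the pair-of-limits idea from the diff hunks can be sketched roughly like this. The class and the canNeverAdd/addRecord names follow the hunks; canAddRecord and everything else here are assumptions, and byte limits are treated as exclusive while record limits are inclusive, per the comment above:

```javascript
// Rough sketch of a (byte, count) limit pair for a PostQueue; not the
// actual patch code. canNeverAdd/addRecord are named per the diff hunks,
// canAddRecord and the field names are assumed for illustration.
class LimitTracker {
  constructor(maxBytes, maxRecords) {
    this.maxBytes = maxBytes;
    this.maxRecords = maxRecords;
    this.curBytes = 0;
    this.curRecords = 0;
  }

  clear() {
    this.curBytes = 0;
    this.curRecords = 0;
  }

  // Byte limits exclusive, record limits inclusive (see comment above).
  canAddRecord(recordSize) {
    return this.curRecords + 1 <= this.maxRecords &&
           this.curBytes + recordSize < this.maxBytes;
  }

  // True if the record could never fit, even in an empty queue.
  canNeverAdd(recordSize) {
    return recordSize >= this.maxBytes;
  }

  addRecord(recordSize) {
    this.curBytes += recordSize;
    this.curRecords += 1;
  }
}
```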
Attachment #8912911 - Flags: review+ → review?(markh)
Comment on attachment 8912911 [details]
Bug 1401990 - Use max_request_bytes instead of max_post_bytes if sync server provides both

Looks great, thanks!

::: services/sync/modules/record.js:800
(Diff revision 3)
> -      return defaultVal;
> -    };
> +};
> -    // On a server that does not support the batch API, we expect the /info/configuration
> -    // endpoint to provide "max_record_payload_bytes" and "max_request_bytes" and limits.
> -    // On a server that supports the batching API, we expect "max_record_payload_bytes"
> +// These are limits for requests provided by the server.
> +//
> +// All are optional, but the only ones we're expected to respect if missing

this comment should be reworded to make it clearer - you can't respect missing values. Something like "the only ones we use/synthesize/something defaults for" or something similar?

::: services/sync/modules/record.js:813
(Diff revision 3)
> -      max_post_bytes: getConfig("max_post_bytes",
> -        getConfig("max_request_bytes", 260 * 1024)),
> -
> -      max_post_records: getConfig("max_post_records", Infinity),
> -
> -      max_batch_bytes: getConfig("max_total_bytes", Infinity),
> +// of and records we can send in a single post as well as in the whole batch.
> +// Note that the byte limits for these there are just with respect to the
> +// *payload* data, e.g. the data appearing in the payload property (a
> +// string) of the object.
> +//
> +// Note that these limits should be sensable, but the code (no longer) makes

s/sensable/sensible/, and remove the (no longer) part - there's no real point mentioning the sad history of this code ;)

::: services/sync/modules/record.js:841
(Diff revision 3)
> -      this._log.warn("Server configuration max_post_bytes is too low for max_record_payload_bytes", config);
> -      // Assume 4k of extra is enough. See also getMaxRecordPayloadSize in service.js
> -      config.max_record_payload_bytes = config.max_post_bytes - 4096;
> +  // as max_post_records, but for batches).
> +  max_total_records: Infinity,
> +});
> +
> +
> +// Manages a pair if (byte, count) limits for a PostQueue, such as


::: services/sync/modules/record.js:868
(Diff revision 3)
> +
> +  canNeverAdd(recordSize) {
> +    return recordSize >= this.maxBytes;
> +  }
> +
> +  addRecord(recordSize) {

this might be better named "recordRecordAdded" or similar as it's not actually doing an "add"?

::: services/sync/modules/record.js:923
(Diff revision 3)
> +  // Tracks the count and combined payload size for the records we've queued
> +  // so far but are yet to POST.
> +  this.postLimits = new LimitTracker(config.max_post_bytes, config.max_post_records);
> +
> +  // As above, but for the batch size. Includes the count for not-yet-committed batches.

I don't think that second sentence adds anything and is slightly confusing (ie, it implies we may have many not-yet-commited batches). I'd suggest dropping it.

::: services/sync/tests/unit/test_postqueue.js:16
(Diff revision 3)
>    }
>  }
> +// Note: This is 14 bytes. The limits that are used to test the limiting
> +// behavior are chosen with this in mind, this, but where possible this is
> +// used to be explicit where the number

this comment doesn't make sense to me
Attachment #8912911 - Flags: review?(markh) → review+
Pushed by
Use max_request_bytes instead of max_post_bytes if sync server provides both r=markh
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 58