Closed Bug 546715 Opened 11 years ago Closed 9 years ago
Go to 10ths of a second for synchronization
Realistically, it seems like 10ths of a second should be plenty, and exposing 100ths of a second does have some potential, if fairly unlikely, security issues. If the client needs the %.2f format, I can simply make it .x0. Can we do this?
Flags: blocking-weave1.2? → blocking-weave1.3?
Target Milestone: 1.2 → 1.3
I think if we're worried about race conditions measured in tenths of a second, we've probably won already. Mardak, your thoughts?
The server supports an X-If-Unmodified-Since header that the client doesn't use. However, the time granularity still limits what it can do. For example, with a granularity of one day: client A uploads on day 1, client B fetches new items on day 1 and, on its own upload, sends X-If-Unmodified-Since: <last fetch time, day 1>, which fails because the original data was uploaded on day 1. So only one write is allowed per unit of granularity, if that's what we want. That would complicate sync, since we do multiple posts/deletes per sync.

Do we want X-If-Unmodified-Since to cancel the action or just inform the client that something was modified? If it's just informative, it could hint that data should be refetched with newer=<value from If-Unmodified-Since> and older=<time of the response>. Otherwise, the client would need to break out of its upload, download items again, reconcile, etc. before continuing.
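To make the failure mode concrete, here's a minimal Python sketch of a conditional write guarded by the header described above. The function name and in-memory collection layout are made up for illustration; this is not the actual Weave server code.

```python
import time

def now():
    # Server clock; precision here is exactly what this bug is about.
    return time.time()

def conditional_write(collection, new_items, if_unmodified_since):
    """Reject the write if the collection was modified after the client's
    last fetch, per X-If-Unmodified-Since semantics.

    Note: if timestamps were truncated to a coarse granularity (e.g. whole
    days), an unrelated write in the same time unit would make this check
    fail too -- that is the "one write per granularity unit" problem.
    """
    if collection["last_modified"] > if_unmodified_since:
        # 412 Precondition Failed: someone wrote since the client fetched.
        return 412, collection["last_modified"]
    collection["items"].update(new_items)
    collection["last_modified"] = now()
    return 200, collection["last_modified"]
```

In the day-granularity example, client B's upload hits the 412 branch even though client A's write didn't conflict with anything client B changed.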
This one's an API change, needs more discussion, and should be part of 2.0
Target Milestone: 1.3 → 2.0
Punting this to Future, we need to discuss this further before it gets onto the engineering roadmap.
Assignee: telliott → nobody
Target Milestone: 2.0 → Future
Would really be better if we could make this seconds...
Any plans on getting this done soon?
Soon is a relative term, but I'd like to think it's on the 2.0 roadmap.
+2 to "X-If-Unmodified-Since", because it's good practice regardless, and I think it will also prevent a known issue where "upload new record" and "reset sync" trigger MySQL deadlocks. I'm okay with the slightly increased load from the client aborting its upload and reconciling, as long as we fuzz the delay between those two steps.
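A minimal sketch of the fuzzed delay mentioned above; the function name and default intervals are assumptions, not anything specified in this bug.

```python
import random

def fuzzed_retry_delay(base_seconds=5.0, fuzz_seconds=10.0):
    """Return a randomized delay between aborting an upload (after a
    precondition failure) and re-fetching for reconciliation, so clients
    that collided at the same instant don't all hit the server again
    simultaneously."""
    return base_seconds + random.uniform(0.0, fuzz_seconds)
```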
Let's spin that part off to another bug. The larger "when can we switch to a new API rev" question is post-Fx4, and then we'll have to think long and hard about how that'll work.
Something I jokingly suggested to Toby at lunch a little while back, which got a positive reception. I realized I hadn't appended it to a bug, so here goes; this is mostly for posterity.

It seems, from that conversation at least, that the current concern from the server side is about using excessive space per timestamp: 5,000 WBOs * 1M users at 8 bytes each = 37GB of timestamps. I didn't get the impression that privacy was a big issue. This doesn't seem to be captured in this bug, so I might be way off base.

The obvious answer to the server space concern is to truncate the value, losing precision. The client, however, cares about granularity in order to do its work; coarse timestamps put more load on reconciliation and introduce ambiguity. (Imagine an extreme world where timestamps had a granularity measured in hours. Which prefs bundle should we take?)

My suggestion was to truncate at the top end of the value, not reduce precision. From what I can see, WBO lastModified is a bigint; dropping to an int saves 4 bytes per WBO. 2^32 unsigned is 4,294,967,295, which gives a timespan of 497 days at 1/100sec precision, which we could conceivably pin to a useful epoch. 1/10sec gives a span of 13 years. At the cost of complicating access, another byte or two could be added in an additional column to blow this up into a worry-free size. Yet another alternative is to share a single bigint between, say, timestamp (6 bytes) and collection (2 bytes), or timestamp and sortindex.

Just somethin' to think about!
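The back-of-envelope numbers above check out; a quick verification (assuming the 37GB figure was computed with 8-byte bigints and GiB-style units):

```python
SECONDS_PER_DAY = 86_400
MAX_U32 = 2**32 - 1  # 4,294,967,295

# An unsigned 32-bit counter of 1/100s ticks covers ~497 days...
days_at_hundredths = MAX_U32 / 100 / SECONDS_PER_DAY

# ...and 1/10s ticks cover ~13.6 years.
years_at_tenths = MAX_U32 / 10 / SECONDS_PER_DAY / 365

# Dropping the bigint (8 bytes) to an int (4 bytes) across
# 5,000 WBOs * 1M users saves roughly half of the ~37GB total.
saved_gib = 5_000 * 1_000_000 * 4 / 2**30
total_gib = 5_000 * 1_000_000 * 8 / 2**30
```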
Average size per wbo body is ~550 bytes. Minimum disk space allocated per innodb record is 768 bytes. Using 8 bytes for the timestamp is unlikely to have any particular impact, vs 4 bytes currently. I don't see any discussion in *this* bug about the security interest in lowering precision to tenths or even whole seconds. :mcoates, is there a separate bug for that or is this it?
We're going to stay with hundredths of seconds, but look at other ways to compress this data.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX