Closed Bug 687108 Opened 13 years ago Closed 12 years ago

First info/quota call lies

Categories

(Cloud Services Graveyard :: Server: Sync, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: philikon, Assigned: rfkelly)

References

Details

(Whiteboard: [qa+][sync 2.0])

Attachments

(6 files)

The View Quota dialog on the client makes two server calls: info/quota and info/collections.

When viewing quota on a sizable profile of an active internet user (several megabytes of history data) for the first time, info/quota returns very quickly and typically displays a number below 1 MB. This number is obviously a lie. info/collections eventually completes too and displays believable data.

Viewing quota again after that displays the right data for the info/quota call.
jrconlin, rnewman, liuche, and I can all reproduce it. These accounts are of various ages. I can't reproduce it with an old test account that has very little data in it, so I'm guessing the fact that these accounts have a lot of data and/or are in regular use is a factor.
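
For reference, a minimal sketch of the two requests behind the dialog (Python; SERVER and USER are placeholders, authentication is elided, and the per-collection sizes come from /info/collection_usage as noted later in this bug):

    # Sketch of the two calls behind the View Quota dialog.
    # SERVER/USER are placeholders; authentication is elided.
    import requests

    SERVER = "https://sync.example.com"  # assumption: your Sync 1.1 node
    USER = "atoll"                       # assumption: your username

    # /info/quota returns [usage-in-KB, quota-in-KB-or-null].
    usage, quota = requests.get(
        "%s/1.1/%s/info/quota" % (SERVER, USER)).json()

    # /info/collection_usage returns {collection: size-in-KB}.
    per_collection = requests.get(
        "%s/1.1/%s/info/collection_usage" % (SERVER, USER)).json()

    # The bug: on the first call, `usage` can be far below the real total.
    print("info/quota total: %s KB" % usage)
    print("sum of collections: %s KB" % sum(per_collection.values()))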
Whiteboard: [qa+]
Assignee: nobody → rkelly
Whiteboard: [qa+] → [qa+][sync 2.0]
Possibly fixed by the patch in Bug 745059, but we need to figure out how to confirm this.
(In reply to Ryan Kelly [:rfkelly] from comment #7)
> Possibly fixed by the patch in Bug 745059, but we need to figure out how to
> confirm this.

That bug seems to say: "user_id:size key in memcache isn't updated when new items are uploaded". Given username "atoll" with internal userid "123", what would the exact memcache key name be? We can just query it as a client actively surfs and syncs new history entries.
The memcached key would be "123:size".
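
For example, you can watch that key with a standard memcached client while the test client syncs (a sketch; the host/port are assumptions):

    # Sketch: inspect the per-user size key while the client syncs.
    # Host/port are assumptions; "123" is the internal userid from above.
    import memcache  # python-memcached

    mc = memcache.Client(["127.0.0.1:11211"])
    print(mc.get("123:size"))  # should grow as new history entries upload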
Per Bug 748164, there should now be a fix for this live in production.  You might still get one more lie out of it as the old memcached key is updated with fresh data, but after that it should track usage much more closely.

(It might still lie a little, since this number is only an estimate, but the lies shouldn't be quite so egregious...)
Sadly, I am still observing this bug in the wild.
I think this bug may finally have gone away - I've been trying to catch it in the wild for a few months now, and I haven't seen it reappear.  Most likely a result of the fix from Bug 748164 plus the new durability provided by Couchbase.

:rnewman, :philikon, :jrconlin, :liuche - can you please do a quick "View Quota" and see if the bug has disappeared for you?  The total listed at the top of the "View Quota" dialog should now roughly match the individual size breakdowns listed below it.

If I can get a couple of independent confirmations, then I'll close out this bug.
Flags: needinfo?(rnewman)
Still happens.
Thanks rnewman.  This is better behaviour than the initial report, though: it's now *overestimating* your usage, where the initial report had it severely *underestimating* it.

Overestimation is an unfortunate side-effect of how we count usage in memcache.  Every incoming write is added to your total as if it were a new item, so any items that are written more than once will contribute to overestimating your usage.  This keeps the overhead of size tracking to a minimum.

There is some logic that adjusts for this by refreshing from the database if you get too close to the max quota, but that is not triggered during ordinary activity.  It is triggered by the "View Quota" dialog though, which explains why it is correctly reported on the second attempt.
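
In pseudocode, the accounting scheme looks roughly like this (an illustrative Python sketch only; the names, threshold, and db helper are hypothetical, not the actual server code):

    # Illustrative sketch of the size accounting described above.
    # QUOTA_KB, REFRESH_THRESHOLD and db.query_total_size are hypothetical.
    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])
    QUOTA_KB = 25 * 1024      # assumption: example quota
    REFRESH_THRESHOLD = 0.9   # assumption: "too close to the max quota"

    def record_write(userid, item_size_kb):
        # Every incoming write is counted as if it were a new item, so
        # re-uploaded items inflate the estimate (hence overestimation).
        key = "%s:size" % userid
        if mc.incr(key, int(item_size_kb)) is None:
            mc.add(key, int(item_size_kb))  # seed the key on first write

    def get_usage(userid, db, from_quota_dialog=False):
        key = "%s:size" % userid
        size = mc.get(key) or 0
        if from_quota_dialog or size > QUOTA_KB * REFRESH_THRESHOLD:
            # Refresh the exact total from the database; the stale
            # estimate may still be served this time, which is why the
            # *second* "View Quota" attempt reports the right number.
            mc.set(key, db.query_total_size(userid))
        return size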

So the good news is that this now seems to be working as expected!  It's still lying, but it's lying in the right direction.

I think any further fix would have to originate in the client, to better handle the difference between "estimated total usage" as reported by /info/quota and "exact detailed usage" as reported by /info/collection_usage.

Do we care enough about this issue to pursue additional work here?  FWIW this same behaviour will still be present in Sync 2.0.
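
For illustration, such a client-side mitigation might look like this (a hypothetical Python sketch; the real Sync client is JavaScript, and the tolerance is an assumption):

    # Hypothetical client-side mitigation: treat /info/quota as an
    # estimate and prefer the exact /info/collection_usage sum when
    # the two diverge by more than some tolerance.
    def usage_to_display(estimated_kb, per_collection_kb):
        exact_kb = sum(per_collection_kb.values())
        tolerance = 0.1 * max(exact_kb, 1)  # assumption: 10% slack
        if abs(estimated_kb - exact_kb) > tolerance:
            return exact_kb
        return estimated_kb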
I'm closing this out - the server is now working as documented and as intended.  Any client-side mitigation of the remaining weirdness can be done in a separate bug.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Product: Cloud Services → Cloud Services Graveyard