Used storage space for saved cookies and site data is abnormal in some circumstances
Categories
(Core :: Storage: localStorage & sessionStorage, defect, P3)
People
(Reporter: standard8, Assigned: leggert)
References
(Depends on 1 open bug, Blocks 1 open bug)
Attachments
(2 files)
I'm on Mac, using the latest Nightly build. I was investigating another bug at the time; here's roughly what I did:
- Used Time Machine to restore my main profile from a previous day to a new directory. I only restored the ~/Library/Application Support/Firefox/Profiles/<profile folder> section, not the cache.
- Added the new profile directory to profiles.ini
- Started Firefox with the new profile.
When I went into preferences, at some stage, I noticed that the cookies and site data were at about 17 exabytes. This was obviously not quite right.
I later restarted and it then showed a reasonable 189MB.
Comment 1•6 years ago
FWIW, integer overflow of -1 for 64-bit is apparently 18,446,744,073,709,551,615, which if divided by 1024 * 1024 * 1024 (from bytes to GB) is that number. So I suspect that's what's going on... somewhere.
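Gijs's arithmetic can be checked directly: a 64-bit counter that underflows by 1 wraps to UINT64_MAX, and dividing by 1024^3 yields the ~17-billion-GB (≈17 EB) figure from comment 0. A minimal illustrative sketch, not Firefox code:

```cpp
#include <cstdint>

// Illustrative only: a uint64_t that underflows by 1 wraps to
// UINT64_MAX = 18,446,744,073,709,551,615 bytes.
inline uint64_t WrappedUsageBytes() {
  return static_cast<uint64_t>(-1);
}

// Bytes -> GB via integer division, as a preferences-style display would do.
inline uint64_t BytesToGB(uint64_t aBytes) {
  return aBytes / (1024ULL * 1024ULL * 1024ULL);
}
```

BytesToGB(WrappedUsageBytes()) is 17,179,869,183; rounding up in the UI gives the reported 17,179,869,184 GB (exactly 2^34).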
Comment 2•6 years ago
(In reply to :Gijs (he/him) from comment #1)
FWIW, integer overflow of -1 for 64-bit is apparently 18,446,744,073,709,551,615, which if divided by 1024 * 1024 * 1024 (from bytes to GB) is that number. So I suspect that's what's going on... somewhere.
Jan Varga said the same thing :) He might have an idea of what can cause this.
Comment 3•6 years ago
Yeah, I'll try to fix this before enabling LSNG on release.
Comment 6•6 years ago
Just one question here: is this likely to affect cookie functionality in any way? For instance, I have a weird bug where I'm always logged out of Matrix when Nightly restarts. Just checking whether these could be related or not.
Comment 7•6 years ago
I don't think it's related. I suspect this is only a usage calculation problem where we end up with a negative big int which is then interpreted as an insanely large positive number.
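The conversion Jan describes is well-defined in C++ (modulo 2^64), so a negative value always lands near UINT64_MAX. A hypothetical sketch of that reinterpretation (not the actual Gecko code):

```cpp
#include <cstdint>

// Hypothetical sketch: usage accounting drifts negative in signed
// arithmetic, then the value is handed to code expecting an unsigned
// byte count. Conversion is modulo 2^64, so e.g. -150 becomes 2^64 - 150.
inline uint64_t ReportedUsage(int64_t aComputedUsage) {
  return static_cast<uint64_t>(aComputedUsage);
}
```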
Comment 9•4 years ago
(In reply to Jan Varga [:janv] from comment #7)
I don't think it's related. I suspect this is only a usage calculation problem where we end up with a negative big int which is then interpreted as an insanely large positive number.
Per some of the debugging in bug 1743017 the quota usage per origin is reporting negative numbers; that seems like it could potentially mislead other code about how much space is still available for that origin?
Although it's good to have reproducible steps such as in comment 0, I can't imagine everyone seeing this has gone through the steps in that comment; there must be different ways of hitting this problem...
Comment 10•4 years ago
Jens, can this be prioritized? It's causing other side effects than just "funny" numbers, cf. bug 1749985.
Comment 12•4 years ago
Jan, do we have some QM_TRY in place to surface these failures, so we have a feeling for how often this happens in the wild?
Comment 13•3 years ago
I see exactly the same issue on openSUSE Tumbleweed, FF 101.0.1.
Absolute disk usage:
docb@X1E:~> du -sh .mozilla/firefox
1,6G .mozilla/firefox
I clearly did not juggle profiles around...
Updated•3 years ago
Comment 14•3 years ago
The severity field for this bug is relatively low, S3. However, the bug has 4 duplicates.
:jjalkanen, could you consider increasing the bug severity?
For more information, please visit auto_nag documentation.
Comment 16•2 years ago
The code which deals with usage calculation needs to be checked for whether some kind of overflow can happen. For example, we can get a negative overall usage which then gets converted from a signed integer to an unsigned integer, resulting in an insanely large reported total usage.
Comment 20•1 year ago
Quoting the message to make it easier for people to find this issue through search engines:
Cookies and Site Data
Your stored cookies, site data, and cache are currently using 17,179,869,184 GB of disk space.
Comment 23•1 month ago
Just saw this (Slack, in my case): looking at site data, Slack appears to be using 2^34 - 7 GB (...177 GB).
Comment 24•12 days ago
Diagnosis and proposed minimal fix.
Root cause
The Preferences total is siteDataUsage + cacheSize. siteDataUsage is summed in SiteDataManager.sys.mjs from nsIQuotaUsageResult::usage per origin (Services.qms.getUsage()). The reported 17,179,869,184 GB and the variant 2^34 - 7 GB (comment 23) are ~UINT64_MAX / 1024^3, i.e. an unsigned wrap on a uint64_t.
The wrap originates in dom/quota/OriginInfo.cpp (and the GroupInfo counterpart). OriginInfo::mUsage, GroupInfo::mUsage, QuotaManager::mTemporaryStorageUsage and the per-client usages are all uint64_t and are decreased by raw mUsage -= aSize in:
- OriginInfo::LockedDecreaseUsage (line 123)
- OriginInfo::LockedResetUsageForClient (line 145)
- OriginInfo::LockedTruncateUsages (line 200)
- OriginInfo::LockedPersist (line 181)
- GroupInfo::LockedAdjustUsageForRemovedOriginInfo (lines 53, 60)
Each is guarded only by AssertNoUnderflow, which is a MOZ_ASSERT and therefore debug-only. In release builds, any drift between the cached client-usage counters and on-disk reality wraps the counter to UINT64_MAX - delta. A concrete drift path: dom/cache/FileUtils.cpp::RemoveNsIFile decreases the counters by the file's actual GetFileSize(); if that exceeds what mClientUsages[DOMCACHE] recorded for that file, the subtraction wraps. The Time Machine / partial-profile-restore scenario in comment 0 is exactly the kind of state that makes the on-disk size diverge from the recorded usage.
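A sketch of that drift path (the name mirrors the diagnosis above, but this is illustrative code, not the actual Gecko implementation): in a release build the unguarded subtraction simply wraps.

```cpp
#include <cstdint>

// Illustrative mirror of OriginInfo-style bookkeeping: the cached counter
// recorded 4 KB for a file, but the file on disk (e.g. after a partial
// profile restore) is actually 12 KB. AssertNoUnderflow would fire in a
// debug build; release builds perform the raw subtraction, which wraps
// modulo 2^64.
inline uint64_t LockedDecreaseUsageSketch(uint64_t aUsage, uint64_t aSize) {
  return aUsage - aSize;
}
```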
The wrapped value is then persisted into the metadata-v2 file (ActorsParent.cpp:3648 Write64(mOriginUsage)) and re-read on next startup (ActorsParent.cpp:3743), which is why the bogus total is sticky across restarts until the origin is cleared. This matches comment 16.
Proposed minimal fix
Make the bookkeeping subtractions saturate at zero. Add a helper next to the existing AddCapped in dom/quota/UsageInfo.h:
inline void SubFloor(uint64_t& aValue, uint64_t aDelta) {
aValue = aDelta > aValue ? 0 : aValue - aDelta;
}
…and use it for mUsage, mGroupInfo->mUsage, quotaManager->mTemporaryStorageUsage and the per-client usage in the five sites above. The AssertNoUnderflow calls on those subtractions can be removed.
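The helper's behavior, shown standalone (the body is copied from the snippet above; the call values are simplified stand-ins for the drifted-counter scenario):

```cpp
#include <cstdint>

// Saturating subtraction, as proposed: clamps at zero instead of wrapping
// to ~UINT64_MAX when the delta exceeds the recorded value.
inline void SubFloor(uint64_t& aValue, uint64_t aDelta) {
  aValue = aDelta > aValue ? 0 : aValue - aDelta;
}
```

With a recorded usage of 4096, SubFloor(usage, 12288) leaves 0 rather than a wrapped huge value, while the normal case SubFloor(usage, 1024) still yields 3072.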
Why this is the right granularity:
- The additive side is already saturated (AddCapped); subtraction is the only remaining path that can produce a huge uint64_t.
- All call sites are cold (file removal, client reset, eviction, origin teardown), so the extra branch is irrelevant for perf.
- It stops the bleeding: corrupt values can no longer be created in memory and persisted into metadata-v2 / the L1 cache.
Existing profiles that already have a wrapped mOriginUsage on disk will remain wrong until that origin is cleared. Self-healing for those would need a one-time check after reading the metadata (e.g. if mOriginUsage exceeds Σ mClientUsages[i].valueOr(0) by more than a sane bound, recompute it), but that is independent of and larger than the fix above.
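A sketch of what such a one-time repair could look like. All names here are hypothetical stand-ins, not the actual FullOriginMetadata fields:

```cpp
#include <array>
#include <cstdint>
#include <optional>

// Hypothetical metadata holder: a cached total plus per-client usages.
struct OriginMetadataSketch {
  uint64_t mOriginUsage = 0;
  std::array<std::optional<uint64_t>, 4> mClientUsages;

  // One-time check after reading metadata from disk: a wrapped total is
  // astronomically larger than any real sum of client usages, so fall
  // back to the recomputed sum when the total exceeds it.
  void FixUsageIfInconsistent() {
    uint64_t clientSum = 0;
    for (const auto& clientUsage : mClientUsages) {
      clientSum += clientUsage.value_or(0);
    }
    if (mOriginUsage > clientSum) {
      mOriginUsage = clientSum;
    }
  }
};
```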
This is an automated analysis result. If this result is incorrect please add a needinfo and feel free to correct the error.
Comment 27•6 days ago
Replace unguarded uint64_t decrements in OriginInfo/GroupInfo with
detail::SubFloor() (saturating, mirrors existing AddCapped) to prevent
wrap-around to ~UINT64_MAX on counter drift. Add
FullOriginMetadata::FixUsagesIfInconsistent() called at both metadata
read paths to repair already-corrupted values in existing profiles.
Updated•6 days ago