Closed Bug 604897 Opened 9 years ago Closed 9 years ago

Unable to cache small entries when _CACHE_00[1-3]_ is full


(Core :: Networking: Cache, defect, major)




Tracking Status
blocking2.0 --- betaN+


(Reporter: michal, Assigned: michal)



(Keywords: regression, Whiteboard: [softblocker])


(1 file, 2 obsolete files)

This is a follow-up to bug 588804.

We stop caching entries of a certain size when the corresponding cache block file is full. We only evict entries due to insufficient space when the total cache limit is exceeded.

I'm not sure about the proper fix right now, but I suspect it (together with a fix for bug 593198) requires quite a big change and can't be done before the FF4 release. As an intermediate solution, we could do the following:

- Increase the capacity of the cache block files (maybe just _CACHE_001_).
- When a cache block file is full, allow the cache entry to be stored in another block file and/or as a separate file. For this we IMO need to land bug 597224 first, otherwise cache performance could suffer a lot.

Any opinions?
CCing Taras since I believe he has done work on the cache files.
IMO we should allow any block file to grow until the combined size of all cache entries reaches the max size of the disk cache. Whether we use block files, separate files, or a mix is an implementation detail and should not impose anything on the user.

I realize that in practice the number of records may be limited by bit patterns in nsDiskCacheMap, but IMO we can afford a new integer or two in that structure. And while we're at it (messing around with those packed values), we might fix bug 605101 as well.
> - Increase the capacity of the cache block files (maybe just _CACHE_001_).
> - When some cache block file is full, allow to store the cache entry 
> in another block file and/or as a separate file.

Yes, we definitely want one or both of these. I agree that #2 would require bug 597224 (which is high on my review queue).

I think we want this for FF4: no longer caching new small files is a serious problem and would significantly hurt our cache hit rate.

Taras pointed out that OSX apparently does special Good Things with defragmenting files under 20 MB (though I've forgotten what, exactly: Taras?). So we may want to keep multiple 20 MB cache block files rather than one larger file.
blocking2.0: --- → ?
Blocking as this could have a significant impact on our cache hit rate.
Assignee: nobody → michal.novotny
blocking2.0: ? → betaN+
Attached patch patch v1 (obsolete) — Splinter Review
If the appropriate cache block file is full, the next one with a bigger block size is tried. If the data can't be stored in any of the block files, a separate file is used.

I've also changed the number of blocks in the cache block files. With this patch the capacity of all three files is the same:

_CACHE_001_ blocksize=256  blockcount=131072 => max.size=33570816
_CACHE_002_ blocksize=1024 blockcount=32768  => max.size=33558528 
_CACHE_003_ blocksize=4096 blockcount=8192   => max.size=33555456
Attachment #489498 - Flags: review?(jduell.mcbugs)
Depends on: 597224
Keywords: regression
Whiteboard: softblocker
Comment on attachment 489498 [details] [diff] [review]
patch v1

softblocker?  I actually think this is a horrible, block-worthy bug.  Not fixing essentially means caching will quickly stop working for the majority of files.  So I'd *really* like to see this fixed, probably more than any other outstanding 2.0 necko bug.

I've been meaning to get to this review, but am swamped with my move, so I'm giving this to Honza for now.  Honza, if you don't feel comfortable with it, give to Bjarne.
Attachment #489498 - Flags: review?(jduell.mcbugs) → review?(honzab.moz)
Whiteboard: softblocker → [softblocker]
Comment on attachment 502378 [details] [diff] [review]
patch v1 (by michal) merged for review by review board

Looks good.  Just a few nits:

> nsDiskCacheBlockFile::AllocateBlocks(PRInt32 numBlocks)

return i * 32 + bit;  maybe this also needs a typecast?

> #define BLOCK_SIZE_FOR_INDEX(index)  ((index) ? (256 << (2 * ((index) - 1))) : 0)
>+#define BITMAP_SIZE_FOR_INDEX(index)  ((index) ? (131072 >> (2 * ((index) - 1))) : 0)

Where does the number 131072 come from?  Maybe add a comment explaining how you arrive at this number and how the calculations work.  Maybe also share the (2 * ((index) - 1)) code between the two macros?

> PRUint32
> nsDiskCacheMap::CalculateFileIndex(PRUint32 size)
> {
>-    if (size <=  1024)  return 1;
>-    if (size <=  4096)  return 2;
>+    if (size <=   768)  return 1;
>+    if (size <=  3072)  return 2;
>     if (size <= 16384)  return 3;
>     return 0;  
> }

Also, why must these numbers be changed to exactly these values?  We really need a summary of the calculations to keep track of these magic numbers.

Attachment #502378 - Flags: review+
Attachment #489498 - Flags: review?(honzab.moz)
Duplicate of this bug: 626191
Attached patch patch v2Splinter Review
Honza, please take a quick look at the changes to see if it is OK now.
Attachment #489498 - Attachment is obsolete: true
Attachment #502378 - Attachment is obsolete: true
Attachment #504688 - Flags: review?(honzab.moz)
Comment on attachment 504688 [details] [diff] [review]
patch v2

Attachment #504688 - Flags: review?(honzab.moz) → review+
Keywords: checkin-needed
Closed: 9 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → mozilla2.0b10