Closed Bug 512849 Opened 15 years ago Closed 11 years ago

implement modern cache

Categories: Core :: Networking: Cache (defect)
Tracking: RESOLVED DUPLICATE of bug 877301
People: Reporter: vlad; Assignee: Unassigned
References: Blocks 1 open bug

Our cache is extremely crufty and has some significant issues that we would really like to resolve.  Chief among these:

1) Writing to the cache is currently synchronous with resource loading (in 16k chunks).  IO is painful, and we should do any cache IO on a separate thread with a work queue.  This is especially noticeable on mobile platforms and other places where IO is slow, where the cache has a significant adverse effect on pageload time.

(bsmedberg pointed out an interesting optimization: if we ever get backed up too far, due to a slow disk or whatever, just throw away the data that's still in the queue, mark those entries as bogus, and ignore further writes to them. A rough sketch of this appears after point 3.)

2) The cache API is overly complex for what it needs to do, leading to significant code maintenance and optimization issues.

3) The cache has both limits on the number of elements and collision issues.
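
To make point 1 and bsmedberg's queue-drop idea concrete, here is a minimal sketch (not Gecko code; the class name CacheWriteQueue, the byte limit, and the "doomed" set are all made up for illustration) of a write queue that hands all disk IO to a dedicated thread and, when the backlog grows past a limit, throws away the queued data and ignores further writes to those entries:

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_set>
#include <vector>

class CacheWriteQueue {
 public:
  explicit CacheWriteQueue(size_t maxQueuedBytes)
      : mMaxQueuedBytes(maxQueuedBytes),
        mIoThread(&CacheWriteQueue::IoLoop, this) {}

  ~CacheWriteQueue() {
    { std::lock_guard<std::mutex> lock(mLock); mShutdown = true; }
    mCond.notify_one();
    mIoThread.join();
  }

  // Called on the loading thread; never blocks on disk IO.
  void Write(const std::string& key, std::vector<char> chunk) {
    std::lock_guard<std::mutex> lock(mLock);
    if (mDoomed.count(key)) return;              // entry already given up on
    if (mQueuedBytes + chunk.size() > mMaxQueuedBytes) {
      // Disk can't keep up: drop everything still queued and doom those
      // entries so later chunks for them are ignored too.
      // (A real implementation would expire doomed keys eventually.)
      for (auto& item : mQueue) mDoomed.insert(item.key);
      mQueue.clear();
      mQueuedBytes = 0;
      mDoomed.insert(key);
      return;
    }
    mQueuedBytes += chunk.size();
    mQueue.push_back({key, std::move(chunk)});
    mCond.notify_one();
  }

 private:
  struct Item { std::string key; std::vector<char> data; };

  void IoLoop() {
    std::unique_lock<std::mutex> lock(mLock);
    while (!mShutdown) {
      mCond.wait(lock, [this] { return mShutdown || !mQueue.empty(); });
      while (!mQueue.empty()) {
        Item item = std::move(mQueue.front());
        mQueue.pop_front();
        mQueuedBytes -= item.data.size();
        lock.unlock();
        WriteToDisk(item);   // the only place that touches the disk
        lock.lock();
      }
    }
  }

  void WriteToDisk(const Item&) { /* append to the entry's cache file */ }

  std::mutex mLock;
  std::condition_variable mCond;
  std::deque<Item> mQueue;
  std::unordered_set<std::string> mDoomed;
  size_t mQueuedBytes = 0;
  const size_t mMaxQueuedBytes;
  bool mShutdown = false;
  std::thread mIoThread;
};

The loading thread only ever touches the in-memory queue, so a slow disk delays cache persistence rather than page loads.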

Some things that should probably not be in the first implementation, but might be interesting in the future, include things like being able to transparently compress cache entries; again, this is for mobile, where the CPU is often a lot faster than IO.
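
For the compression idea, something as simple as zlib's one-shot helpers would do as a starting point. The sketch below is illustrative only; a real cache would presumably stream the data and record the compressed flag and original size in the entry's metadata.

#include <vector>
#include <zlib.h>

// Compress a cache entry body; falls back to storing it uncompressed on error.
std::vector<unsigned char> CompressEntry(const std::vector<unsigned char>& body) {
  uLongf destLen = compressBound(static_cast<uLong>(body.size()));
  std::vector<unsigned char> out(destLen);
  if (compress(out.data(), &destLen, body.data(),
               static_cast<uLong>(body.size())) != Z_OK)
    return body;
  out.resize(destLen);
  return out;
}

// Decompress an entry; the original size would come from the entry metadata.
std::vector<unsigned char> DecompressEntry(const std::vector<unsigned char>& stored,
                                            uLong originalSize) {
  std::vector<unsigned char> out(originalSize);
  uLongf destLen = originalSize;
  if (uncompress(out.data(), &destLen, stored.data(),
                 static_cast<uLong>(stored.size())) != Z_OK)
    return stored;                   // treat as stored-uncompressed
  out.resize(destLen);
  return out;
}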

Looking at Chrome's disk cache might be an interesting exercise; documentation is at http://dev.chromium.org/developers/design-documents/disk-cache
Assignee: nobody → bjarne
Blocks: 175600
No longer blocks: 175600
Steve Souders has some thoughts about browser caching at http://www.stevesouders.com/blog/2010/04/26/call-to-improve-browser-caching/

A comment there mentions a better alternative to LRU, namely the Adaptive Replacement Cache (http://en.wikipedia.org/wiki/Adaptive_replacement_cache). If IBM is willing to let Mozilla use it, could that be something worth investigating?
Regarding ARC, see http://www.varlena.com/GeneralBits/96.php. We could perhaps use the PostgreSQL approach, though.
Idle thoughts: I've implemented a couple of intermediaries with caches using structured-log approaches. Basically, you preallocate the disk storage (and thus have no fragmentation issues) in one file, which you treat as a single ring buffer; all writes go to the tail pointer. When the ring wraps around, you read a decent number of blocks of data, sort it by usefulness, and write back some fixed percentage of it. Whatever doesn't make the cut creates space for new data, kind of like grading on a curve. It's a very different approach from global cache management; in some ways it creates extra IO, but it all happens in large contiguous operations, which have great bandwidth properties. (A rough sketch of the layout is at the end of this comment.)

bsmedberg's comment about throttling writes to a maximum rate matches my experience too; you probably want some kind of random drop rather than dropping just from the front or rear of the unwritten queue.

The results were good for an intermediary; I would have to think about how it might apply differently in a client context.
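
Here is a rough sketch of that ring-buffer / structured-log layout (purely illustrative; the names RingLogCache and hitCount, the keep-fraction parameter, and the stubbed disk primitives are all assumptions, not code from any shipping cache):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct Record {
  std::string key;
  std::vector<char> payload;
  uint32_t hitCount = 0;     // stand-in "usefulness" metric
};

class RingLogCache {
 public:
  RingLogCache(uint64_t fileSize, uint64_t reclaimChunk, double keepFraction)
      : mFileSize(fileSize),
        mReclaimChunk(reclaimChunk),
        mKeepFraction(keepFraction),
        mReclaimedUpTo(fileSize) {}   // the whole preallocated file starts free

  void Append(Record rec) {
    uint64_t need = rec.payload.size();
    // If the write would run into the not-yet-reclaimed region, reclaim first.
    if (mTail + need > mReclaimedUpTo) Reclaim();
    WriteAt(mTail % mFileSize, rec);  // (wrap handling at the file edge elided)
    mTail += need;
  }

 private:
  void Reclaim() {
    // Read a large contiguous chunk ahead of the tail in one sequential IO.
    std::vector<Record> victims = ReadRegion(mReclaimedUpTo % mFileSize, mReclaimChunk);
    // "Grade on a curve": keep only the most useful fixed fraction,
    // re-appending the survivors behind the tail; the rest become free space.
    std::sort(victims.begin(), victims.end(),
              [](const Record& a, const Record& b) { return a.hitCount > b.hitCount; });
    size_t keep = static_cast<size_t>(victims.size() * mKeepFraction);
    mReclaimedUpTo += mReclaimChunk;  // free the chunk before re-appending survivors
    for (size_t i = 0; i < keep; ++i) Append(std::move(victims[i]));
  }

  // Disk primitives elided; both operate on large contiguous ranges.
  void WriteAt(uint64_t /*offset*/, const Record&) {}
  std::vector<Record> ReadRegion(uint64_t /*offset*/, uint64_t /*len*/) { return {}; }

  const uint64_t mFileSize;
  const uint64_t mReclaimChunk;
  const double mKeepFraction;
  uint64_t mTail = 0;        // logical write position (monotonically increasing)
  uint64_t mReclaimedUpTo;   // logical offset up to which space is free
};

The point is that both the write path and the reclaim pass only ever issue large contiguous reads and writes, which is where the bandwidth win comes from.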
A cache is only useful for data that remains static for a period of time and is not changing from hour to hour or day to day.

For example, website logos and CSS files, which remain as they are for months, are ideal candidates to keep in the cache. A newspaper site that the user visits only to check for NEW content is a bad candidate, because it exists only to be constantly renewed.

Another bad candidate for caching is content that is viewed only once. A funny YouTube video is watched once, you laugh about it, and that's it; you seldom want to laugh a second time.

Currently most of the cache is filled with dynamic content like news, blogs and so on, which is of no use to have in the cache. OK, increasing the cache size also increases the space for static long-term content, but that is not a smart solution.

A cache size of 50 MB is perfect if we really have static content that is not changing.

A clever strategy is to keep only things in the cache that appear repeatedly; that is, the software has to do some statistical bookkeeping and keep the things in the cache that are used very often without changing.

For example, after the Google logo image has been read and verified three times without changes, it is clear that it should stay in the cache.
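
One way to make that heuristic concrete (an illustrative sketch only, not a proposal from this bug; the class PromotionFilter and the validator-based stability check are assumptions): an entry earns a place in the long-term cache only after it has been fetched with an unchanged validator a few times.

#include <string>
#include <unordered_map>

class PromotionFilter {
 public:
  explicit PromotionFilter(unsigned requiredStableHits = 3)
      : mRequiredStableHits(requiredStableHits) {}

  // Called after each fetch with the entry's validator (e.g. ETag or
  // Last-Modified). Returns true once the entry has proven itself stable
  // and deserves a long-term cache slot.
  bool RecordFetch(const std::string& url, const std::string& validator) {
    Stats& s = mStats[url];
    if (s.lastValidator != validator) {
      s.lastValidator = validator;   // content changed: start counting again
      s.stableHits = 1;
      return false;
    }
    ++s.stableHits;
    return s.stableHits >= mRequiredStableHits;
  }

 private:
  struct Stats {
    std::string lastValidator;
    unsigned stableHits = 0;
  };
  std::unordered_map<std::string, Stats> mStats;
  const unsigned mRequiredStableHits;
};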
(In reply to comment #5)

Quoting from there:
"PostgreSQL used ARC in its buffer manager for a brief time (version 8.0.0), but quickly replaced it with another algorithm, citing concerns over an IBM patent on ARC."
Mozilla probably would have the same concerns.
Assignee: bjarne → nobody
duping this to the cache rewrite meta-bug.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → DUPLICATE