Closed
Bug 595389
Opened 14 years ago
Closed 8 years ago
Disk Cache Block Files Should Not Grow Too Large
Categories
(Core :: Networking: Cache, defect)
Tracking
RESOLVED
WONTFIX
People
(Reporter: byronm, Unassigned)
References
(Blocks 1 open bug)
Details
We are introducing functionality to support more block files in Bug 593198. A side effect is that the block files themselves will be larger: while this solves the problem of having too many cache files in our single cache directory, it introduces the problem of having files in that directory that are too large. We need to implement functionality to spawn new block files when one grows too large. If, for instance, the 16K block file reaches a certain size, we would create another 16K block file and start writing to it. To be clear, this bug has nothing to do with how many buckets we should create; that question belongs to Bug 593198. This bug is just saying that, given our bucket setup, we should not let the buckets grow out of hand in a single file. So the question is: what should the cutoff size be for a block file, at which point we spawn a sibling block file to start writing to?
Comment 1•14 years ago
I don't think there's a reason to fear having large cache block files, at least within the 1 GB overall max cache size constraint we've got (if a single file were exceeding 32 bits in size, then perhaps). Taras can probably tell me if I'm wrong here.
Comment 2•14 years ago
Splitting block files into multiple files is at least an advantage on Mac, since files under 20 MB can all be defragmented. So it's definitely something we should do.
Status: RESOLVED → REOPENED
Resolution: INVALID → ---
Comment 3•8 years ago
Obsolete with the new cache code.
Status: REOPENED → RESOLVED
Closed: 14 years ago → 8 years ago
Resolution: --- → WONTFIX