Bug 988318 (Closed) — HTTP cache v2: enlarge chunk size
Opened 11 years ago; closed 11 years ago
Product: Core :: Networking: Cache (defect)
Status: RESOLVED FIXED
Target Milestone: mozilla31
Reporter: mayhemer; Assignee: mayhemer
Attachments: 1 file (patch, 703 bytes; review+ from michal)
Based on:
cache1 vs cache2: http://compare-talos.mattn.ca/?oldRevs=c0ec2e234457&newRev=a505e96204f6&server=graphs.mozilla.org&submit=true
cache1 vs cache2+512kB chunk: http://compare-talos.mattn.ca/?oldRevs=c0ec2e234457&newRev=56e8a11532b3&server=graphs.mozilla.org&submit=true
Updated•11 years ago
Hardware: ARM → All
Comment 1•11 years ago (Assignee)
256kB: http://compare-talos.mattn.ca/?oldRevs=c0ec2e234457&newRev=967920366e81&server=graphs.mozilla.org&submit=true
128kB: http://compare-talos.mattn.ca/?oldRevs=c0ec2e234457&newRev=25b80694cf10&server=graphs.mozilla.org&submit=true
For me the 512kB wins.
Michal?
Flags: needinfo?(michal.novotny)
Comment 2•11 years ago
Setting too large a chunk size will cause problems on devices with little memory when several large resources are being read from or written to the cache at the same time. Right now we release all chunks when the CacheFile is no longer used, but I think the correct behavior (which IMO we will sooner or later implement) is to release all chunks that are no longer used by the CacheFile. So right now there is not much difference in memory consumption between large and small chunk sizes. But if we kept only active chunks in memory, we would occupy much less memory.

Let's assume that 10 resources bigger than 512kB are being read from or written to the cache. We need to allocate 5MB with a 512kB chunk size, but only 160kB with a 16kB chunk size. I know we have a plan to use a fallible allocator for the chunk buffer, but this doesn't solve the problem that the cache will be completely unusable on devices with little memory.
Flags: needinfo?(michal.novotny)
Comment 3•11 years ago (Assignee)
(In reply to Michal Novotny (:michal) from comment #2)
> Setting too large chunk size will cause problems on devices with small
> memory in case more larger resources are being read/written from/to cache at
> the same time. Right now we release all chunks when the CacheFile is no
> longer used, but I think the correct behavior (which we will IMO sooner or
> later implement) is to release all chunks that are no longer used by
> CacheFile. So right now there is not too much difference in memory
> consumption between large and small chunk size. But in case we keep only
> active chunks in memory, we occupy much less memory.
> Let's assume that 10
> resources bigger than 512kB are being read from or written to cache. We need
> to allocate 5MB in case of 512kB chunk size but only 160kB in case of 16kB
> chunk size.
I don't quite get this. Does it mean we allocate the full chunk size even for a 1-byte file? Could we change this? Since we know the file size, we could allocate only size % chunksize for the last chunk when reading, and reallocate for writes.
Still, this doesn't solve the write problem - we will allocate the full chunk - unless the chunk buffering logic is made smarter, e.g. having 16kB sub-chunks in a 512kB chunk. But that sounds a bit like preloading several chunks.
> I know that we have a plan to use fallible allocator for buffer
> in chunk, but this doesn't solve the problem, that cache will be completely
> unusable on devices with small memory.
But resources may still come from the network, and from the user's point of view the device will keep working rather than crash or become extremely slow due to swapping. So it's needed anyway.
------------
Anyway, how do you suggest to solve this?
Flags: needinfo?(michal.novotny)
Comment 4•11 years ago
(In reply to Honza Bambas (:mayhemer) from comment #3)
> > Let's assume that 10
> > resources bigger than 512kB are being read from or written to cache. We need
> > to allocate 5MB in case of 512kB chunk size but only 160kB in case of 16kB
> > chunk size.
>
> I don't much get this. Does it mean we allocate full chunk size even for a
> 1 byte file? Could we change this, since we know the file size and allocate
> only size % chunksize for the last chunk when reading? And reallocate for
> writes?
When reading, we allocate exactly the amount we need. When writing, we start with 512 bytes and double the buffer up to the chunk size. In the example above I clearly stated that all entries are bigger than 512kB, so we always allocate the full chunk size.
> Still, this doesn't solve write problem - we will allocate the full chunk -
> unless the chunk buffering logic is made smarter like having 16kb sub-chunks
> in a 512kb chunk. But that sounds a bit like preloading several chunks.
As I wrote above, we grow the buffer in steps, but we always end up at the chunk size if the entry is large. We cannot easily divide it into smaller pieces, since we compute the hash from all the data in a chunk.
> > I know that we have a plan to use fallible allocator for buffer
> > in chunk, but this doesn't solve the problem, that cache will be completely
> > unusable on devices with small memory.
>
> But resources may still go from network and from the user's point of view
> the device will work and not crash/be extremely slow due to swaps etc. So
> it's needed anyway.
It will be slow and the system will swap. We will use a lot of memory before the allocations start to fail, so the system will swap and the process will probably be killed soon due to some infallible allocation at a completely different place in the code. We shouldn't waste memory just because we will use a fallible allocator.
> ------------
>
> Anyway, how do you suggest to solve this?
Maybe preloading more chunks would fix this too while we could still keep memory usage low when needed. I'm not against changing chunk size, but it definitely should not be based on a single talos test. Although I have no data, I think that chunk size larger than 64kB could be problematic.
Flags: needinfo?(michal.novotny)
Comment 5•11 years ago (Assignee)
(In reply to Michal Novotny (:michal) from comment #4)
> (In reply to Honza Bambas (:mayhemer) from comment #3)
> > > Let's assume that 10
> > > resources bigger than 512kB are being read from or written to cache. We need
> > > to allocate 5MB in case of 512kB chunk size but only 160kB in case of 16kB
> > > chunk size.
> >
> > I don't much get this. Does it mean we allocate full chunk size even for a
> > 1 byte file? Could we change this, since we know the file size and allocate
> > only size % chunksize for the last chunk when reading? And reallocate for
> > writes?
>
> When reading, we allocate exact sum that we need. When writing, we start
> with 512 bytes and double it up to chunk size.
Ah, didn't know about that.
> As I wrote above, we grow the buffer in steps, but we always end up with the
> chunk size if the entry is large. We cannot divide it to smaller pieces
> easily since we compute the hash from all data in chunk.
Does "all data" mean it also includes the zero padding?
> It will be slow and the system will swap. We will use a lot of memory before
> the allocations start to fail. So the system will swap and the process will
> be probably killed soon due to some infallible allocation at completely
> different place in the code. We shouldn't waste memory just because we will
> use fallible allocator.
>
>
> > ------------
> >
> > Anyway, how do you suggest to solve this?
>
> Maybe preloading more chunks would fix this too while we could still keep
> memory usage low when needed. I'm not against changing chunk size, but it
> definitely should not be based on a single talos test. Although I have no
> data, I think that chunk size larger than 64kB could be problematic.
OK, so you suggest doing preloads instead. I'm not against that, it just means a lot more work. Do you have an estimate of when this could be done? I could also take it; I know the chunk management code relatively well.
Comment 6•11 years ago (Assignee)
So, I ran some tests with an SSD drive and a microSD card, loading a page with some 500 image thumbnails, using 16kB and 256kB chunks plus the patches for bug 986179 and a few tweaks I will soon put up for review.
There is no big difference in real memory consumption between the chunk sizes. In fact, a larger chunk size makes the data get written to the media faster (16x less overhead), so it's actually a bit better with the larger chunk!
Comment 7•11 years ago (Assignee)
(In reply to Honza Bambas (:mayhemer) from comment #6)
> ... and the patches
> for bug 986179 + few tweak I will soon put for review.
https://bugzilla.mozilla.org/attachment.cgi?id=8401622&action=edit
Comment 8•11 years ago (Assignee)
Based on the memory test results - see comment 6.
Assignee: nobody → honzab.moz
Status: NEW → ASSIGNED
Attachment #8401837 -
Flags: review?(michal.novotny)
Updated•11 years ago
Attachment #8401837 -
Flags: review?(michal.novotny) → review+
Comment 9•11 years ago

Comment 10•11 years ago
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla31