Bug 975367
Opened 11 years ago
Closed 11 years ago
~70M AWSY regression on February 18
Categories
(Core :: Networking: Cache, defect)
Tracking
RESOLVED
WORKSFORME
People
(Reporter: Swatinem, Unassigned)
References
(Blocks 1 open bug)
Details
(Keywords: regression, Whiteboard: [MemShrink:P1])
Look at AWSY, see the big (~70M) spike around Feb 18.
The pushlog range looks like this: https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=ae95fa9d4450&tochange=0b1c1795142e
Zooming further in, it might be from bug 967693?
Comment 1•11 years ago
It's the cache memory pool: we have a 50MB pool where we keep the most recently used data, and the pool size is controllable at runtime via a preference.
So it's a question whether this is a bug at all; it's actually a feature.
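The pool described here can be sketched as a byte-bounded most-recently-used structure with a runtime-configurable limit. This is a hypothetical illustration (the class and member names are invented), not the actual cache2 implementation:

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of a byte-bounded MRU data pool. `mLimit` plays
// the role of the runtime preference (50MB on desktop in this bug).
class MruPool {
public:
  explicit MruPool(size_t aLimit) : mLimit(aLimit), mUsage(0) {}

  void Put(const std::string& aKey, std::vector<char> aData) {
    Evict(aKey);  // drop any stale copy of this entry first
    mUsage += aData.size();
    mOrder.push_front(aKey);
    mEntries[aKey] = {std::move(aData), mOrder.begin()};
    // Evict least-recently-used entries until we fit the limit again.
    while (mUsage > mLimit && !mOrder.empty()) {
      std::string victim = mOrder.back();
      Evict(victim);
    }
  }

  size_t Usage() const { return mUsage; }

private:
  struct Entry {
    std::vector<char> data;
    std::list<std::string>::iterator pos;
  };

  void Evict(const std::string& aKey) {
    auto it = mEntries.find(aKey);
    if (it == mEntries.end()) return;
    mUsage -= it->second.data.size();
    mOrder.erase(it->second.pos);
    mEntries.erase(it);
  }

  size_t mLimit;
  size_t mUsage;
  std::list<std::string> mOrder;  // front = most recently used
  std::unordered_map<std::string, Entry> mEntries;
};
```

Note that the limit here, like the one discussed below, only counts payload bytes; the list nodes and hash-table entries are extra.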
Reporter
Comment 2•11 years ago
In that case it is missing a memory reporter; AWSY reports it as heap-unclassified.
It would also be nice if someone could annotate the graph on AWSY to point to this (or the original) bug; I have no idea how to do that.
Updated•11 years ago
Whiteboard: [MemShrink]
Comment 4•11 years ago
Why is this marked as [MemShrink] ?
Updated•11 years ago
Blocks: 967693
Component: Untriaged → Networking: Cache
OS: Windows 7 → All
Product: Firefox → Core
Hardware: x86_64 → All
Comment 5•11 years ago
(In reply to Honza Bambas (:mayhemer) from comment #4)
> Why is this marked as [MemShrink] ?
I thought an increase in memory footprint was worth discussing in the MemShrink meeting, but perhaps I read the bug wrong.
Comment 6•11 years ago
(In reply to Arpad Borsos (Swatinem) from comment #2)
> In that case it is missing a memory reporter. AWSY reports it as
> heap-unclassified.
and/or the pref needs to be adjusted for lower-memory devices.
(In reply to Ben Kelly [:bkelly] from comment #5)
> (In reply to Honza Bambas (:mayhemer) from comment #4)
> > Why is this marked as [MemShrink] ?
>
> I thought an increase in memory footprint was worth discussing in the
> MemShrink meeting, but perhaps I read the bug wrong.
Seems appropriate to me.
Comment 7•11 years ago
I think it is worthy of some discussion. 70MB is a pretty huge increase, almost 20%.
Comment 8•11 years ago
We have bugs filed to decide how this should be set on mobile and B2G. Just so you know, for those two the memory pool is capped at no more than 5MB.
It's true that I chose 50MB arbitrarily; there was no wider discussion.
I just implemented the memory reporter, and I can see roughly 5MB of (apparently constant) overhead beyond this limit, meaning that when the pool is at its full 50MB we report about 55MB of consumption via the complete memory reporter (bug 964039). The limit only counts the buffer sizes analytically, not all the stuff around them: classes, hashtables, objects, open files, etc.
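The gap between the analytic limit and the reported consumption can be illustrated like this. The `Entry` layout and the numbers are invented for the example, not taken from the cache code; the point is only that per-entry bookkeeping sits on top of the payload bytes the limit counts:

```cpp
#include <cstddef>

// Illustrative per-entry bookkeeping that a payload-only limit ignores.
struct Entry {
  char* data;       // payload buffer, counted against the pool limit
  size_t length;    // payload size in bytes
  char key[48];     // entry key (hypothetical fixed-size field)
  double lastUsed;  // recency information for eviction
};

// Bytes counted analytically against the pool limit (payload only).
size_t AnalyticSize(size_t nEntries, size_t avgPayload) {
  return nEntries * avgPayload;
}

// What a malloc-based memory reporter would actually observe:
// payload plus the per-entry structs (hash-table nodes, open-file
// state, etc. would add even more on top).
size_t ReportedSize(size_t nEntries, size_t avgPayload) {
  return nEntries * (avgPayload + sizeof(Entry));
}
```

With many small entries, the bookkeeping fraction grows, which is consistent with a roughly constant few-MB overhead on a full 50MB pool.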
Comment 9•11 years ago
5MB on B2G is *enormous*, esp. on Tarako (the 128 MiB devices). On Android it's large. 50MB on desktop is also large.
This definitely needs discussion.
Reporter
Comment 10•11 years ago
Also, is this using *real* memory, as opposed to mmapped things that can be paged out by the OS?
Comment 11•11 years ago
(In reply to Arpad Borsos (Swatinem) from comment #10)
> Also, is this using *real* memory, as opposed to mmaped things that can be
> paged out by the os?
Ask Michal. AFAIK, this is using xalloc'ated memory. It's subject to change, but probably not for the first deployment.
Comment 12•11 years ago
This is unacceptable, and is grounds for disabling this feature until the memory consumption is under control. (We gave gps a hard time when FHR landed and it used 3 MiB of memory on desktop!)
How were the 50MB/5MB numbers chosen? Can it be reduced easily? Etc, etc. Details, please!
status-firefox30: --- → affected
tracking-firefox30: --- → ?
Whiteboard: [MemShrink] → [MemShrink:P1]
Updated•11 years ago
Keywords: regression
Comment 13•11 years ago
(In reply to Nicholas Nethercote [:njn] from comment #12)
> This is unacceptable, and is grounds for disabling this feature until the
> memory consumption is under control. (We gave gps a hard time when FHR
> landed and it used 3 MiB of memory on desktop!)
The new cache is disabled by default. Why so much yelling?
>
> How were the 50MB/5MB numbers chosen?
Arbitrarily as I said earlier.
> Can it be reduced easily?
Sure, with just a pref.
> Details, please!
Not sure what other details you want.
I designed the pool as a performance optimization. The most-used content is left in memory, so pages may load shared content faster (way faster!). I really don't insist on keeping this at 50MB, or on keeping it at all. However, at least a small pool on desktop can IMO be beneficial. Measurement is needed, and wider discussion as well.
Also, we should use VolatileBuffers or mmapped memory; that's not planned for the first enabled version, though. For now it's all just (not great...) xallocated memory.
Nick, just so you know, you are the first to complain, and I understand. It's good we made this first trial to get wider knowledge of everything cache2 does!
Comment 14•11 years ago
Dropping the affected flag since this is disabled.
status-firefox30: affected → ---
Comment 15•11 years ago
And btw, the pool is completely purged on memory-pressure notification.
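The purge-on-pressure behavior mentioned here can be sketched with a toy observer service. This is an illustrative stand-in, not Mozilla's nsIObserverService API; only the "memory-pressure" topic name comes from the comment:

```cpp
#include <functional>
#include <string>
#include <vector>

// Minimal stand-in for an observer service, for illustration only.
class ObserverService {
public:
  void AddObserver(const std::string& aTopic, std::function<void()> aCallback) {
    mObservers.push_back({aTopic, std::move(aCallback)});
  }
  void NotifyObservers(const std::string& aTopic) {
    for (auto& o : mObservers) {
      if (o.topic == aTopic) {
        o.callback();
      }
    }
  }

private:
  struct Observer {
    std::string topic;
    std::function<void()> callback;
  };
  std::vector<Observer> mObservers;
};

// Hypothetical pool that tracks its payload bytes and, as described in
// the comment above, drops everything when memory pressure fires.
class PurgeablePool {
public:
  explicit PurgeablePool(ObserverService& aSvc) {
    aSvc.AddObserver("memory-pressure", [this] { PurgeAll(); });
  }
  void Add(size_t aBytes) { mUsage += aBytes; }
  void PurgeAll() { mUsage = 0; }
  size_t Usage() const { return mUsage; }

private:
  size_t mUsage = 0;
};
```

So even a full 50MB pool is reclaimable in one notification, though only after pressure is already being felt.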
Comment 16•11 years ago
From AWSY it looks like this was disabled on the 24th. That was never mentioned in this bug though ...
status-firefox30: --- → disabled
Comment 17•11 years ago
(In reply to Kyle Huey [:khuey] (khuey@mozilla.com) from comment #16)
> From AWSY it looks like this was disabled on the 24th. That was never
> mentioned in this bug though ...
Ah, yeah, that's true.
Blocks: cache2enable
Comment 18•11 years ago
I didn't realize it was disabled. Thanks for that.
> Measurement is needed and wider discussion as well.
Definitely. In this case, the onus is on you, as the implementer of the new cache, to try different cache sizes, identify what kind of speed-ups you get, and then argue for a particular configuration. And all of this before the new cache is enabled. Thanks!
Comment 19•11 years ago
So I'll mark this as fixed, but the general issue of how big this cache is should be resolved before it lands. Is there a bug open for enabling it?
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WORKSFORME
Comment 20•11 years ago
(In reply to Nicholas Nethercote [:njn] from comment #19)
> So I'll mark this as fixed, but the general issue of how big this cache is
> should be resolved before it lands. Is there a bug open for enabling it?
cache2enable alias.
Updated•11 years ago
Comment 21•11 years ago
I've done some basic testing with various sizes of the cache memory pool on an older box with a mechanical drive. In simple scenarios, like browsing simple blog pages, it doesn't seem to give any significant improvement. Still, I believe it may save some main-thread loops in some situations.
So, I'll change the behavior to:
- reengage browser.cache.memory.capacity pref
- use the pool implementation as the memory cache and obey memory.capacity for it
- don't keep disk entries data in the pool, only some limited amount of metadata for faster zero-loop examination
- remove the cache.memory_limit preference
Comment 22•11 years ago
> So, I'll change the behavior to:
> - reengage browser.cache.memory.capacity pref
> - use the pool implementation as the memory cache and obey memory.capacity
> for it
> - don't keep disk entries data in the pool, only some limited amount of
> metadata for faster zero-loop examination
> - remove the cache.memory_limit preference
How does that configuration affect memory consumption?
Comment 23•11 years ago
(In reply to Nicholas Nethercote [:njn] from comment #22)
> > So, I'll change the behavior to:
> > - reengage browser.cache.memory.capacity pref
> > - use the pool implementation as the memory cache and obey memory.capacity
> > for it
> > - don't keep disk entries data in the pool, only some limited amount of
> > metadata for faster zero-loop examination
> > - remove the cache.memory_limit preference
>
> How does that configuration affect memory consumption?
browser.cache.memory.capacity will work the same way it does now for cache v1.
There will be a pref, browser.cache.disk.metadata_memory_limit (250kB), limiting how many bytes of metadata we keep for quick access.
So in the common scenario we will drop from 70MB to some 300kB, with a very small effect on load performance.
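The split described here can be sketched as two independently bounded budgets. The struct and function names below are hypothetical; only the two pref names come from the comment:

```cpp
#include <cstddef>

// Illustrative budgets corresponding to the two prefs above.
struct Limits {
  size_t memoryCapacity;     // browser.cache.memory.capacity, bytes
  size_t diskMetadataLimit;  // browser.cache.disk.metadata_memory_limit
};

// Hypothetical accounting: full payloads for memory-cache entries are
// bounded by one pref; disk entries contribute only metadata bytes,
// bounded separately by the other.
struct CacheAccounting {
  size_t memoryData = 0;    // payload bytes for memory-only entries
  size_t diskMetadata = 0;  // metadata bytes kept for disk entries

  bool AdmitMemoryEntry(size_t aBytes, const Limits& aLimits) {
    if (memoryData + aBytes > aLimits.memoryCapacity) return false;
    memoryData += aBytes;
    return true;
  }

  bool KeepDiskMetadata(size_t aBytes, const Limits& aLimits) {
    if (diskMetadata + aBytes > aLimits.diskMetadataLimit) return false;
    diskMetadata += aBytes;
    return true;
  }
};
```

The key design change is that disk-entry payloads no longer count against any in-memory pool at all, which is what takes the common-case footprint from tens of MB down to the metadata budget.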
Comment 24•11 years ago
That sounds much better. Thanks!