Closed Bug 204164 Opened 19 years ago Closed 13 years ago

Calculating memory cache as percentage of physical memory may not be such a good idea


(Core :: Networking: Cache, defect)

(Reporter: Christian_Mueller, Assigned: gordon)



User-Agent:       Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030314
Build Identifier: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030314

I just became aware of bug 105344. It may be too late, but I think this is heading in the wrong direction.

I believe that the amount of physical memory should not be of any interest to
application programs -- you never know whether the system is already paging, or
whether a cache increase will make Mozilla 5% faster but slow the rest of the
system to a crawl. Instead, each application should use as little memory as
possible and, in the case of caches, let the user specify how much total memory
they are willing to allocate for this.

Furthermore, Mozilla is already very memory intensive, and a few percent of
rendering performance is not worth using up yet another 4+ megabytes on an
average system, because it will slow down the rest of the system (even if it's
just lost file cache). You may want to calculate an initial default value for
brand-new installs this way (i.e. as a percentage), but the rest should be up to
the user.

Last but not least, the physical memory size is the wrong number to base any
calculation on, because many operating systems can restrict the amount of memory
available to a single application or user, so Mozilla may not even start on
those systems. Think of a big Unix or Windows Terminal Server with 4 GB of RAM
and a ulimit of 64 MB per process...

Reproducible: Didn't try

Steps to Reproduce:
You can still use the old memory cache: set the browser.cache.memory.capacity
preference to a positive number (create it in about:config if necessary). The
old default value used to be 4096, but an 8 MB cache feels more comfortable
(that's the value selected when you have 128 MB of RAM).

I agree that this whole algorithm is just a rule of thumb and isn't appropriate
in every situation. But the maximum size of the cache is *only* 58 MB, and only
if you have 4 GB of RAM. That doesn't seem to cause a problem for most users, I think.

BTW: if a malloc fails (because of per-process limits, for instance), we still
have the memory-pressure observers. They will automatically free some memory to
make sure that that particular allocation succeeds (AFAIK this includes the
memory cache). Most people would never notice that they've hit a memory barrier.
But I'm not entirely sure whether this works on Unix.

I think we can make one enhancement on Unix and Unix-like operating systems (Mac
OS X included): instead of the physical memory, we could use the per-process
memory limit, because that's the real memory limit as far as the process is
concerned. If you happen to run on a real multi-user server with hundreds of
users and a pain-in-the-ass administrator (*), then you'll find yourself with a
very small memory limit, even though the server itself might have hundreds of
gigabytes of physical memory. I think that the desktop Unices (Linux, mostly)
are a bit spoiled, because they always have the entire memory space to themselves.

(*) = I used to be one of them, so I'm not complaining :-)

Yeah, if we can get the per-process memory limit we should use that.

Also notice that when the memory-pressure observers fire, the memory cache
should get flushed, so I'm not sure how much we will ever swap due to the memory cache...
Ever confirmed: true
This could become a concern on some of our higher-end AIX systems, which support
256 GB or more of memory. That works out to roughly 180 MB of memory cache. If
you combine that with a large number of users running Mozilla on one system, it
could become an issue.

I think it would make sense to set some sort of upper limit on the memory cache
when the new algorithm is used, but allow those who want a higher value to set
it manually through the preference.
On Win 9x systems, the "solution" of bug 105344 also had a bad side effect:
GDI system resources were stored with the cache entries, so the cache size
became the limiting factor for Mozilla's GDI usage. On Win 9x systems with a
lot of memory, these resources are now eaten away until Mozilla or some other
application crashes.

 Besides that, a user who spends money on new memory is likely to do so for
some special purpose and doesn't expect the new memory to be eaten up
"automagically". One such purpose could be a slow or power-hungry hard disk
(think of laptops), so some users might want a higher percentage of memory
used for caching.

 The removed user interface for the memory cache should be reintroduced,
because "about:config" is intimidating to newbies and has no
internationalisation. The current GUI is misleading anyway; I would never have
guessed that the "Clear Cache" button behind the field for changing the size
of the disk cache frees the memory cache too.
*** Bug 211807 has been marked as a duplicate of this bug. ***
Bug 211807 reminded us that it's not just Unix and Unix-like OSes (Linux, Mac
OS X, ...) that have multi-user capabilities and, as a result, per-user and/or
per-process memory limits. VMS has them too, so --> All/All.
OS: Linux → All
Hardware: PC → All
For OpenVMS I have a suggestion for how the memory cache size could be
calculated; maybe a similar setup can be used for other OS types as well.

To make the whole thing understandable, first a few remarks about how things
work on OpenVMS.

The Mozilla kit comes with a lot of libraries. By installing these libraries in
memory as shareable images, they can be used by all Mozilla users, but more
importantly for this subject, their memory usage is not counted against the
user account settings. Only a small fraction of the total Mozilla code will be
in the working set (WS) of the user.

There are several settings in a user account that regulate memory usage. Three
of them are WSquo, WSextent and Pgflquo. WSquo is the amount of physical memory
the user process is always entitled to use. WSextent is the amount of physical
memory that can be given to a user process if the system resources permit;
however, memory in excess of WSquo can also be reclaimed by the system.
Pgflquo is the amount of virtual memory the user process can use, including the
use of the page file. What OpenVMS calls paging is called swapping on other
operating systems. OpenVMS also has swapping, but there the whole process is
swapped out of memory, with the exception of a process header. Obviously a
process cannot be active while it is swapped out.

There is also a system service called $Getjpi that can be used to get all kinds
of information about a process, including memory usage. The WSsize item gives
the current memory usage of the process, and the WSquota item gives the
guaranteed maximum memory usage of the process.

Now assume Mozilla is starting up, and just before the home page is loaded into
memory we try to calculate the maximum size of the memory cache. Then this could
be a calculation we can use:

maximum memory cache = $Getjpi(WSquota) - $Getjpi(WSsize)

This does not take into account the memory usage of the mail and news agent, so
we will have to make corrections. In the end it may look like this:

maximum memory cache = $Getjpi(WSquota) - ( $Getjpi(WSsize) * 1.5 )

Even if the memory usage of the Mozilla code plus the memory cache were to grow
beyond WSquo, it would most likely still remain in memory as long as the system
resources are not stretched too much. Otherwise paging might occur, which means
disk I/O and is unwanted for a memory cache.
I'm aware this is all very OpenVMS-specific, but I suppose similar principles
apply to Unix variants as well, so it should only be a matter of porting /
'translating' such a setup.


We've been debating this for a long time.

Let's see if people report a lot of problems (besides GDI), and think about a
1.5b/1.5f fix.

Power users can tune this value to whatever they like via about:config.
Depends on: 296538
Bug 296538 placed an upper limit of 32 MB on the cache, so most of the problems are over. But when the ulimit is much smaller than physical memory (large multi-user systems?), we would still use too much cache.
Is anyone still having a problem with the default memory cache size calculation? If so, what OS are you using? What's the limit on the amount of memory each process can use? If you're manually entering a value of the size of the memory cache, what is it, and does it help significantly? The reason I'm asking is that I think bug 296538 helped a lot, and in the cases where users are still having a problem, bug 296818 may fix more problems than lowering the default size of the cache.
I think this was essentially fixed by the two bugs I mentioned in my previous comment.
Closed: 13 years ago
Resolution: --- → FIXED