Bug 228245 (Closed): Opened 21 years ago, Closed 13 years ago

Limit X11 server-side image storage to 128 MB?

Categories: Core :: Graphics: ImageLib (enhancement)
Platform: x86 / Linux
Priority: Not set; Severity: normal
Status: RESOLVED WONTFIX
People: Reporter: timeless; Assignee: Unassigned

while looking at bug 51028, bz and I decided that it might be a good idea to
limit the amount of graphics which we ask the x server to store. 128mb seems
like a good limit by today's standard. If it's possible to ask the server about
the graphics cache size, using some function of that would be even better.

Doing this would hopefully decrease the chance of mozilla inviting x to get
itself killed by the linux out of memory killer.
similar suggestion is bug 214146 comment 4...

I want a "vote against" feature in bugzilla...
> similar suggestion is bug 214146 comment 4...

Which I can't read, of course...

If you have a reason you don't like the suggestion, I'd like to hear it.  I'm
open to being told this is a bad idea and just marking this wontfix; this bug
was just filed as a place to have the discussion.
well, I just don't like introducing arbitrary limits in the codebase. why 128
and not say 256? or 64?

what if I have a lot of memory and want to view a page with many images?
(greetings to timeless and stress.html)

though I do wish linux would just make malloc() fail on oom.
128 because most modern graphics cards have at least 64 megs of ram on board, so
128 should be safe.

I'm willing to go with more than 128; again, that was just a number to start
considering.   Also note the part about it being best if there is an API we can
use to ask X how much memory it'd be ok to try to allocate.
I'm reasonably sure that, in the current scheme at least, there is a short
time when a copy of an object exists simultaneously in both the mozilla
process and the X process. If you want to set a limit, I think you need to 
know something about the current physical memory. You don't want to force the 
system into swapping. I don't think the video card's memory is all that 
relevant. Mozilla doesn't know what card it's running on or even if the X 
server is doing accelerations.

> though I do wish linux would just make malloc() fail on oom.

Linux _does_ fail on OOM unless you fiddle with 
/proc/sys/vm/overcommit_memory. You deserve what you get if you do.
> You don't want to force the system into swapping.

if people load images large enough to require swapping, they deserve what they get.
You shouldn't assume that because a graphics card has 64 megs of ram
that all of that will be available for the visible framebuffer and
offscreen pixmaps.  The XFree86 DRI-aware drivers don't have a unified
memory manager, so the framebuffer is split into a small portion for
2D and the majority for OpenGL buffers and textures.

XFree86 doesn't provide any method that I know of for querying the
offscreen pixmap cache size, and even if it did you still need to
share this with all the other applications running on the system.

To a rough approximation, the memory cache limit equals the amount
stored on the server (actually an overestimate), minus the images
used by the current page.
> if people load images large enough to require swapping, they deserve what 
> they get.

How would the typical user know that since mozilla doesn't tell them?

BTW, you are going to apply any limit to the sum of all mozilla's images not 
just a per-image limit, right?
> How would the typical user know that since mozilla doesn't tell them?

why would users set up a swap partition/file when they don't want apps to use it?
> /proc/sys/vm/overcommit_memory

Hmm... that must be reasonably new.  I don't recall that option existing before.
So the problem is that X just fails to allocate the storage it needs and
proceeds to crash.  I suppose one could argue this is an X bug, but it would
still be good to try not to trigger it, if feasible.
> why would users set up a swap partition/file when they don't want apps 
> to use it?

What makes you think a swap device is there for some random app to gobble
up? Most people I know have a swap device so that the kernel has a place 
to put such things as stopped jobs, temp filesystems, etc. Yes, 
occasionally an app will need some space but the performance penalties 
usually keep users from doing so very often. Then there are the users who 
have no swap device at all.

My point is that forcing the system to swap makes it very slow and 
potentially fatal particularly if the swap device is small or 
non-existent. Also, mozilla is not the only app running.


> Hmm... that must be reasonably new.  I don't recall that option existing 
> before.

It's been there since the 2.2 kernel but /proc/sys/vm is full of arcane 
and dangerous things which most people avoid.

> So the problem is that X just fails to allocate the storage it needs 
> and proceeds to crash.

Do you know if X really is crashing on dereferencing a null pointer or if 
it's some mid-level library that's aborting on a malloc failure?
The end result is that the X server dies.  To me that says that X itself is
either crashing, aborting, or being killed.
> The end result is that the X server dies.

X is certainly not a model of robustness but I have to say that I haven't 
found that it crashes when a client sends bad data.

I tried some experiments but all I get so far is gdk aborting mozilla because 
of malloc failures. Do you have a simple test to crash X?
See comment 0.  The testcases in the bug cited therein will happily crash X if
you don't have enough RAM + swap, last I checked.
Some notes.

Video ram isn't a major consideration. Xfree86 just mmaps the various 
pieces of ram into its process. This reduces the process's free space but 
has no effect on the system vm. There's also no reason to assume that 
the card takes data in the same format as the X server so it's possible 
(likely I think) that X copies, rather than moves, data to the video card.

On my system, even though I use depth = 24, X is storing data in 32bpp so 
the memory requirements are 1/3 larger.

I had to use a 10200x13200 image to force my system into swapping so it's 
slow enough to observe but there is indeed a period of time when mozilla 
and X simultaneously have a copy of an image. That's 7/3 the original 
space. After the image is transferred, mozilla releases its space.

So it's no surprise at all that mozilla can run the system out of memory. 
Nor is it a surprise that the kernel will occasionally see X as a memory 
hog and kill it. Note that the mozilla killer demos use 256 1024x1024 
images which would be 1GB of space on my machine. 

It's generally not possible to know how much space is left in the X 
process. I don't think it's easy to deduce how X is storing its data. 
These things make it difficult to determine how much data mozilla should 
send to the server.

It's also generally not possible to determine the current free space in 
the system vm. On Linux you could use the sysinfo call or parse 
/proc/meminfo but you would have to do it a lot because there are other 
processes running and the total size can change arbitrarily.

All this makes reducing memory usage hard. I do wonder why mozilla wants
to pass all of a 10200x13200 image to the server when its windows are
much smaller so you can't see all of the image at once.
Because that image loaded via <img width="10" height="10"> would be scaled on
the server side.

More to the point, if such an image is loaded in multiple places on the page
with different height/widths, we store ONE copy in memory and scale server-side
at paint time.
> More to the point, if such an image is loaded in multiple places on
> the page with different height/widths, we store ONE copy in memory and
> scale server-side at paint time.

Mozilla isn't getting into trouble with one image used many times, it's
getting into trouble using many images once (or, equivalently, one very
large image).

What I'm wondering is why mozilla loads _all_ images into the server
if they're not all visible at the current time. Mozilla should be able
to determine approximately what part of the layout can appear in the
visible window at any time. I think it should only load the server with
images from within this region.

For a single large image that is significantly larger than the visible
window, I would argue that it is safer, less memory intensive, and often
faster to create a subimage and pass that to the server. For example,
the mozilla drawing area is 900x720 and it wants to display a 4000x4000
image from the upper left corner. I think it would be better to create a
subimage of, say, 1024x1024 and just transfer that to the server. If the
user scrolls, then create a new subimage.
> Mozilla should be able to determine approximately what part of the layout can
> appear in the visible window at any time.

Layout can.  The image library can't.  It can't have dependencies on layout,
since it's designed to be a standalone image library.
but who sends the images to the xserver? not libpr0n, right?
It's either libpr0n or GFX; neither one knows where in the page layout the image is.
gfx does (maybe really gtk/gdk, I don't know). libpr0n knows nothing about an X
server.
it seems that layout should be able to tell gfx that an image isn't really
important to the display atm.

This would be good on windows too where we waste lots of TS resources on images
(and fonts) for things that are offscreen (stress, i can create a text based
demo for the gdi font flavor...).

Should we move this bug to GFX? :)
> it seems that layout should be able to tell gfx that an image isn't really
> important to the display atm.

gfx has a single image.  This can map to multiple images in layout, in multiple
browser windows, multiple tabs, etc.  There is no fast way to enumerate all
these, really.
OK, I'll ask a dumb question. Mozilla somehow knows how to tell the X
server which window to display an image in and whether to scale the image, so
one part of mozilla passes this information to another part which passes it
to the server. Why is it hard to use the same channel to pass information
about the visibility of the image or even the necessity of displaying it?
Actually it does do necessity because mozilla will remove images
eventually.
> Why is it hard to use the same channel to pass information
> about the visibility of the image or even the necessity of displaying it?

Because the part that asks for the image to be painted may well not even exist
yet when we decode the image and store the decoded data somewhere.

It's also possible to have non-painting images (eg Image() objects) that people
want to access the image data for (the height and width, if nothing else). 
Right now we do store those server-side, which makes image animations done via
JS much snappier (changing src just involves a repaint without having to move
the data to/from the server every time) over remote X connections.  So there are
perfectly good reasons to store image data for images that are _not_ visible on
the server as well.
Yes, there certainly are good reasons for leaving some images on the server
but most images don't fall into that category. I don't understand the
comment about height and width. The true values are properties of the image
itself and scaled values are something mozilla knows already.

Maybe mozilla should cache images itself, particularly if the
device-dependent forms are larger, although I'm not sure if you can get
that sort of information.
> The true values are properties of the image itself

Exactly.  So to get them you need to decode the image anyway, at which point you
may as well cache the decoded data.
*** Bug 270777 has been marked as a duplicate of this bug. ***
Assignee: jdunn → nobody
QA Contact: imagelib
This bug was probably useful when it was filed, but seems obsolete in 2011 in an era of virtualized video memory. wontfix.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → WONTFIX