OS/2 Mozilla apps never seem to give back any memory to the system, not when closing tabs and not even when closing windows (at least I have never seen that happen). As has been realized on other platforms, too, this is mainly due to memory fragmentation. There are two easy things we can do about it on OS/2:

1. Use DosAllocMem/DosFreeMem for allocations of large buffers. When freed, the RAM they occupied is immediately given back to the system.
2. Call _heapmin() to force the C library to give back unused memory.

Part 1 can be used in the cairo library for the pixel buffers; part 2 is best added at the end of a cycle collector run. Part 1 should not be used too often, and especially not for small allocations, because each DosAllocMem() call consumes an additional 64K of address space.

Andy, Dave, this is basically what we already talked about with Doodle via email. Mike, Felix, I think this might be interesting for you, too. Will upload the two patches in a minute.
Created attachment 289405 [details] [diff] [review] work in progress: add DosAllocMem into cairo This patch alone gives me back 2-8 MB per closed tab, depending on window size and settings. And some more for closed windows.
Created attachment 289407 [details] [diff] [review] Call heapmin after cycle collector (checked in) I ran a bit with COLLECT_TIME_DEBUG on and it only minimally affected performance (I saw extra times of up to 10ms but not more). It's not quite as effective as the former patch, but it gives me back an additional MB here and there... Perhaps there are other places in the code where calling this could be helpful?
Comment on attachment 289407 [details] [diff] [review] Call heapmin after cycle collector (checked in) David, perhaps you can better judge whether this would be a good place to call _heapmin().
Attachment #289407 - Flags: review?(dbaron)
Created attachment 292256 [details] [diff] [review] activate OS/2 API allocations in cairo The patch to cairo-os2-surface.c corresponds to commit a7ae9c45d924effdd61390267eb216302a270d8e in the cairo git repository. I'm basically asking for permission to do the same change already now in the Mozilla repository, and activate it, to get wider testing before the next sync.
Comment on attachment 289407 [details] [diff] [review] Call heapmin after cycle collector (checked in)

>+ // Now that the cycle collect has freed some memory, we can try to

"cycle collection" or "cycle collector"

I'm not crazy about platform ifdefs here, but I suppose it's ok, so r=dbaron.
Attachment #289407 - Flags: review?(dbaron) → review+
Comment on attachment 289407 [details] [diff] [review] Call heapmin after cycle collector (checked in) One r+ is good enough for OS/2. I have now checked this into trunk with the suggested comment typo fixed (collect -> collector).
Created attachment 297930 [details] [diff] [review] Makefile change for cairo (checked in) The cairo update got us the cairo code changes; we still need to activate them in the Makefile. Dave, Andy, can one of you look over this patch and confirm that it looks OK, and perhaps that it works, too?
Walter, in case you have time to test... :-)
Yeah, after the cairo update this patch works in Minefield and SeaMonkey. I can now see reductions in used memory in the memory-sentinel widget, particularly when I go back to the first site opened (up to 20 MB released). You can also see it when you press "flush memory" in the debug menu.
Comment on attachment 297930 [details] [diff] [review] Makefile change for cairo (checked in) Taking Walter's confirmation as r+.
Attachment #297930 - Flags: review? → review+
Attachment #297930 - Attachment description: Makefile change for cairo → Makefile change for cairo (checked in)
Status: NEW → RESOLVED
Last Resolved: 11 years ago
Resolution: --- → FIXED
(In reply to comment #10) > (From update of attachment 297930 [details] [diff] [review]) > Taking Walter's confirmation as r+. After some time with this in, I must say it really improved memory usage significantly. The only thing I worry a little about: I got the impression that loading very long pages (e.g. large source files from MXR or Bonsai search lists) takes way longer than before. It seems like two "forces" are running against each other: the incoming data and the attempt to free memory again. Scrolling during this time is delayed, as is typeaheadfind. Would it be possible to free the memory only after the page has loaded?
I don't think DosAllocMem/DosFreeMem should be any slower than the C library's malloc/free. I mean, we freed the memory before, too; it was just not visible as free to the system. The only drawback of the Dos APIs is that they take up at least 64K of address space for every allocation; that's why we use them only for the big pixel buffers in cairo, where this shouldn't matter. Don't know why there would be a difference for long pages, though. There should be only one surface (the content window) active per page. Can you turn on the debug output in gfxOS2Surface to see how many surfaces are created, destroyed, and resized while a long page loads?
I also see quite slow page load times. This seems to get worse after SeaMonkey has been running for a while. I find myself quite often looking at the HD light, because it feels like the system is swapping. This also happens when switching tabs after SeaMonkey has been up for a while. I wonder if it shows up worse due to my processor, an 800MHz Duron with only 64 KB cache. Perhaps fragmented memory? Or memory that is not aligned? I also see more hesitation in the system than before this update. E.g. I play a game, lbreakout2, which seems to use polling to keep track of the ball. Before, it ran quite smoothly unless I was loading a page in SeaMonkey. Now it will hesitate quite often and the ball will go right through the paddle.
Fragmentation has for sure been reduced a lot by the patches here, so it can't be that. But you can easily switch off the fixes: just edit gfx/cairo/cairo/src/Makefile.in, rebuild in $OBJDIR/gfx/cairo/cairo/src, and then relink in $OBJDIR/gfx/thebes/src. I would be very surprised if that improved things, but it would be nice if you could confirm it. Can you make sure that you are actually swapping by watching the free RAM and swap file size? MailNews is still awful about memory leaks, but I can actually run SeaMonkey for hours and it never goes above 300 MB any more. There are some slowdowns related to graphics decoding, where e.g. decoded JPEGs are removed from memory after a while (45s or so); if you then switch back to a tab with many JPEGs and scroll in there, it will take a while to decode them again. But that won't explain slowdowns on MXR or Bonsai pages. I won't have time to try it any time soon, but to make the speed comparison before/after the patches here more scientific, one could also set up pageload measuring. I remember that it's not that difficult if one already has a local Apache server. http://lxr.mozilla.org/mozilla/source/tools/page-loader/ has the necessary stuff.