Closed Bug 130157 Opened 22 years ago Closed 15 years ago

Virtual memory growth (without actual object leaks) due to libc malloc behavior (e.g. fragmentation)

Categories

(Core :: Layout, defect, P2)


Tracking


RESOLVED FIXED
Future

People

(Reporter: bugzilla, Unassigned)


Details

(Keywords: memory-footprint, memory-leak, testcase, Whiteboard: Just load any large bugzilla search result to reproduce)

Attachments

(1 obsolete file)

Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.9) Gecko/20020310 ; xfree-4.2.0
(so this isn't covered by "X font scaling..." ( bug 120238 ) )

Reproducible: Always (haven't dared more than three attempts)
Steps to Reproduce:
1. Surf to http://www.linuxgames.com
2a. Enter "doom" in the search form (lots of results)
2b. Confirm the POST data warnings (hello, bug 116217 )
3. View source of the result page

Actual Results:  Memory usage rises; I haven't dared to explore the limits. On
closing the "view source" window, it maintains its level. Using Mozilla to visit
other pages, or even closing all but the last window, does not reduce memory usage.

Expected Results:  A little less memory consumption, or at least garbage
collection after closing the offending window.

High score to beat:
  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 1437 winnetou   9   0  324M 187M 43132 S     0.0 74.9  11:53  mozilla-bin
Well, this is very interesting.... We load a lot more source than you'd think
because view source is not passing in post data correctly...

The real problem is the memory growth.  I see that, most definitely, with a
current trunk build.  More interestingly, I tried the following:

1)  Open View Source on that page.
2)  Wait for memory use to go up about 300MB (doesn't take very long)
3)  Close window
4)  Look at memory use.  Still at the +300MB mark.
5)  Open view source again.  Watch memory usage stay constant at the +300MB mark
    until we have a whole bunch of the source rendered, at which point it starts
    growing again.

From the looks of it, it seems like we're not shrinking an arena somewhere... and
indeed, the frame arena in nsPresShell.cpp never gets shrunk.  Furthermore,
there is the question of what's going on with allocations bigger than
gMaxRecycledSize from that arena...
Assignee: asa → karnaze
Severity: normal → major
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: footprint, mlk
Component: Browser-General → XP Apps: GUI Features
I'm not sure why this bug is assigned to me. Back to XP Apps.
Assignee: karnaze → blaker
QA Contact: doronr → paw
This is _so_ not an XP apps bug...

Shaver and I did some investigating.  He promised to comment later with full
details, but here is the short scoop:

1)  We allocate hundreds of megabytes in an arena (most likely the layout frame
    arena)
2)  We allocate something else on the C heap
3)  We free the hundreds of megabytes we allocated.  Unfortunately there is
    still stuff on the heap after the memory we just freed and the C heap cannot
    be compacted.

The bug was assigned to karnaze per shaver's suggestion.... Sending it back over
to layout pending further investigation on what exactly we can do here.
Assignee: blaker → attinasi
Component: XP Apps: GUI Features → Layout
QA Contact: paw → petersen
OK.  So.. shaver's busy.  Here is a dump from malloc_stats from his testing of this:

before view-source:
Arena 0:
system bytes     =   12343728
in use bytes     =   11083456
Arena 1:
system bytes     =      40960
in use bytes     =       6344
Total (incl. mmap):
system bytes     =   12523952
in use bytes     =   11229064
max mmap regions =          2
max mmap bytes   =     528384

during view-source:
Arena 0:
system bytes     =   33860016
in use bytes     =   33852072
Arena 1:
system bytes     =      45056
in use bytes     =      25952
Total (incl. mmap):
system bytes     =   34179504
in use bytes     =   34152456
max mmap regions =          2
max mmap bytes   =     528384

after view-source:
Arena 0:
system bytes     =   34625968
in use bytes     =   12241680
Arena 1:
system bytes     =      45056
in use bytes     =       6008
Total (incl. mmap):
system bytes     =   34810288
in use bytes     =   12386952
max mmap regions =          2
max mmap bytes   =     528384

Note that last pair of "system bytes" and "in use bytes".  There are 22-some
megabytes freed there (shaver didn't run this for very long), but the allocator
can't let them go.

Another interesting data point.  The original HTML data that we are viewing the
source of is 8MB.  The view-source HTML we generate is about 27MB (makes
sense... each tag gets a span for start, a span for end, two spans per attr, and
the original page is low on content, high on markup).

Total Mozilla memory usage at the end of the view-source load is about 310MB. 
Total Mozilla memory usage at the end of loading the original page is about
110MB (Usage at startup is about 20MB, just as a note).  So the 3.3-to-1 ratio
of view-source to regular display is holding here.

If I close the window I loaded the original page in, the memory usage stays at
110MB (I'm down to one window, with about:blank loaded).  So this is not a
view-source-specific issue.  It's just easier to produce big pages with view-source.
Summary: memory leak on view source → Viewing large pages allocates memory and can't deallocate it
Nice analysis. So how are we ever going to make sure that memory we allocate is
at the end of the heap so it can be un-sbrk()ed? I don't think this is
possible, especially since who knows when which thread will allocate or which timer will fire...

One approach is maybe layout can use mmap() to allocate its memory. Once done,
that memory can be deallocated. But that means layout in this case cannot call
malloc() and needs to manage its own memory which has its own problems if
someone outside of layout is freeing the memory.

Another is to optimize the amount of allocations for view source.

Anyway, I just wanted to note that Marc shouldn't feel like this is a deficiency of
layout in any way, if the bug is that someone is holding on to allocated memory
that lies after the sum total of the large view-source allocations in the heap.

Shaver: anyway, this is VM and we know we freed it. Why are we worried about a
big free hole in the VM address space? Perception is the only valid reason I
can think of. Even then this would be more like a P4 bug, no?
No, dp, we haven't freed it.  We have not returned it to the system, only to the
allocator's free store, which is why you see 300M in "system bytes".  VM size,
as you advocate measuring it in footprint-guide.html, is the sum of the ranges
of the mappings, not the amount of memory we've actually claimed.  We've
referenced these pages in the process of laying out the page, required that
backing store be allocated from them, and not given the system any way to know
that we don't still need that data.  We can thrash the pages out, and hope that
they're not made resident again until we get them from malloc on future
allocations, but that's hardly the same thing.

Do you see the difference?  Compare adding a buffer to the recycling allocator's
freelist and calling free(3) on it.  That's what's happening here, one level
down.  Maybe we just need to say "don't use view-source on an 8M HTML document,
because you'll get a 37x inflation factor in memory", but this is certainly not
just a perception problem.

(Removing mlk keyword, which is used almost exclusively to indicate that we're
not returning memory to the system allocator.)
Keywords: mlk
Bad use of "we" by me. I meant the application knows it has freed it. And yes I
understand the allocator is holding it. So, I am in sync with what you say
except for your conclusion that this is not just a perception problem.

And yes, the footprint guide advocates VM measurement (system bytes used), and the
reason given is that this is useful for systems with no backing store. I don't
understand your point here.

Say things went great and the allocator freed that memory back to the system.
Would the app run any faster? Would the system run any better? What more than
perception is this?
One other thing.  There is really no evidence so far as to exactly where all
this memory is allocated.  The frame arena is a likely place, but it's just a
pie-in-the-sky guess, really.
Changing Priority to P2
Priority: -- → P2
Target Milestone: --- → Future
From the user's end, I just tried to load a photograph site that took 272
seconds to load.  While I was waiting for the page to load I was in Eudora,
and I noticed something flashing, too fast to see: it was my memory utility
telling me I was out of memory.

I went back into Mozilla, tried to bookmark the site, and was promptly presented
with an error box.  It was an error in some module of Mozilla and it closed
Mozilla after I clicked on ok.

At this point, I had no memory left (out of 145MB).  My memory utility couldn't
free up the memory.  I obviously didn't have enough memory to do very many things.

Build 2002040803 on win98se
sorry, the URL is  www.rockside.org/wedding
Depends on: 98835
See bug 98835.  We _may_ have some honest-to-god leaks after all as well...
Blocks: 170762
Reassigning to jst
Assignee: attinasi → jst
Blocks: 174604
OS: Linux → All
Hardware: PC → All
Target Milestone: Future → ---
This is not jst's bug...  Please stop unsetting target milestones.
Assignee: jst → other
QA Contact: petersen → ian
Verified on Win XP using 1/08/03 Trunk build.
Keywords: testcase
Target Milestone: --- → Future
No longer blocks: 170762
*** Bug 170762 has been marked as a duplicate of this bug. ***
*** Bug 184535 has been marked as a duplicate of this bug. ***
*** Bug 226268 has been marked as a duplicate of this bug. ***
Blocks: 215491
I can kind of reproduce this bug on my win2k machine, but not with such big
numbers. Just a "little" leak.

1. Surf to http://www.linuxgames.com
2. Enter "doom" in the search form (lots of results)
3. View source of the result page
4. Close the source window
5. Repeat steps 3 and 4 a few times.

After step 3, Firefox's memory usage increases by about 8 MB.
After step 4, memory usage decreases by about 4 MB.

So every time you repeat 3 and 4, Firefox's memory usage increases by about 4 MB.
This happens on every page and makes Firefox use about 200 MB of memory after a
few days. Closing all the Firefox windows or just killing the Firefox process
frees the memory again.

Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.7b) Gecko/20040317
Firefox/0.8.0+
This (and the multitude of other memory leak bugs) has become really bad as of
Firefox 0.9 on gtk+xft2, where Firefox will just use up more than 200 MB
resident memory after only a few hours of usage. After a day or so, it obviously
becomes so bad that a restart of the application is necessary.
Ari, did you even read this bug?  THIS IS NOT A MEMORY LEAK!
Just an idea. I don't know anything about the Mozilla codebase, so ignore me if 
I don't get it.
Does the memory allocator return Handles, or Pointers? If it returns handles, 
just have everyone unlock their handles every couple of minutes. In the MM, 
when a handle is unlocked, Mozilla can move the memory to another location, and 
freeze the thread that tries to relock the memory until it has finished moving 
it. Then, return the new pointer from the lock function. This would allow all 
the memory to be periodically moved to the top of the heap so that the rest of 
the heap could be taken by other applications.

- Just a thought...
The memory allocator involved is malloc(), which absolutely returns pointers.
Would that make this article apply?

http://gee.cs.oswego.edu/dl/html/malloc.html
The point is, we don't want to be in the business of rewriting the OS allocators...
Point the first: the allocator described in that paper is dlmalloc, which is
also the allocator built into glibc, and so very likely to be the one in use for
the original reporter.

Point the second: we were very nearly in _just_ the business of replacing system
allocators -- certainly not the most complex piece of the system that Mozilla
replaces! -- in just this way (dlmalloc on win32) back when I was still at
Netscape, but found that it didn't make much of a difference on Win32.  And this
was at a time when our sense of "a difference" was outrageously sensitized!
No longer blocks: 215491
*** Bug 215491 has been marked as a duplicate of this bug. ***
*** Bug 247222 has been marked as a duplicate of this bug. ***
Mightn't this bug be related to bug 131456 and/or bug 250558 ? (Just a guess, I
don't know how to tell for sure if this bug is a duplicate of that one or
vice-versa).

Best regards,
Tony.
Flags: blocking-aviary1.1?
Flags: blocking-aviary1.1?
Attached file testcase (obsolete) —
Using this attachment, a test can be run that shows the problem.

1. Open the page.
2. Note the memory usage of Mozilla.
3. Click "open" (this will open 30 windows with the specified URL).
4. Note the memory usage.
5. Click "close" (this will close the windows opened in 3, and release some
memory). Not all memory is released, but one could say this might be normal.
6. Click "open" again.
7. Note the memory usage (it's a little more than it was in 4).
8, 9, 10, 11, etc.: close, open, close, open, close, open.

At some point the memory usage comes out lower than the previous round, so this
doesn't look like a memory leak, just a matter of who allocated memory, when, and where.


Hope the testcase might help with several things:

1. Help prove or detect memory leaks for other bugs.
2. Measure how much memory is used by a web page (for about:blank it is more
than 1 MB).
3. Show that there is a real problem (eventually Mozilla's virtual memory
could grow over 300 MB after you close most of the tabs/windows).
4. Show that allocated memory is reused (so it's not a plain memory leak) if
you open the same page again.
Flags: blocking-aviary1.1?
> Created an attachment (id=171787) [edit]
> testcase

After running the testcase (open & close) with 30 windows and an empty URL, I had
30 more handles in use than before running the test. (The number of GDI handles is
constant.) This also happens with a local image and a local HTML page with two
images.
I don't know if this behaviour belongs to this bug or should be a new one.
There are at least three types of virtual memory increase problem:
(1) Bug 263930: with a single window and single tab, virtual memory increases.
(2) Bug 131456: when a tab is closed, virtual memory doesn't decrease.
(3) Bug 130157 (this bug): after viewing a (large) page and its source,
    virtual memory doesn't decrease when the view-source window is closed.

Virtual memory increases during single-tab use, is not released on tab
close, and is not released even on window close.
Did someone fail or forget to release (destroy) resources (objects)?
Or is it only a result of the current design (fragmentation or inefficient
virtual memory use)?

(Q1) What will happen if you open a new window and then close the old window in this
bug's case?
(Is there a difference between "view source" and "view HTML"?)

(Q2) Comment #31 from Daniel de Wildt indicates a remaining resource which holds
     a GDI handle even after window close.
     Aren't there some holes in "destroy object" when a window (tab) closes?

(Q3) Is there any way to know what objects remain?
     Is there any way to track what objects are created and destroyed?

> (Q3) Is there any way to know what objects are remain?
>     Is there any way to track what objects are created and destroyed?

Of course.  The mozilla.org website has several tutorials on this.

Again, note that the problem in this bug was that while Mozilla deallocated the
memory, malloc itself did not.  The other bugs you mention are just that -- other
bugs, with different causes and different symptoms.
Blocks: 249469
Flags: blocking-aviary1.1? → blocking-aviary1.1-
Using David Baron's LeakTest JS program, the test case https://bugzilla.mozilla.org/attachment.cgi?id=171787 leaked 38 out of 170 DOM Windows.  Now we can get this fixed.
Could we file a new bug on that testcase?  Cc me, please.  It has nothing to do with the original issue encountered in this bug if it's actually leaking windows.  The original bug was about virtual memory growth in the absence of actual object leaks due to allocator behavior, and I see no reason to morph it.
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9a1) Gecko/20060215 Firefox/1.6a1 ID:2006021509
re: comment 34 -- I'm not seeing any leaks with that testcase (opening 30 windows then closing them all). Are you sure you were (1) using a new profile and (2) closing all instances of Firefox before analysing the nspr.log?
(In reply to comment #35)
That's bug 327337.  I agree that the test case is a different bug.

This bug, using the original Web page, works for me.  The test case does not.  With the Linuxgames page, VM never went above 16468 kB (Task Manager), and LeakTest showed no leaks.  If no one else can reconfirm it, maybe it should be marked WORKSFORME.

Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a1) Gecko/20060123 Firefox/1.6a1


(In reply to comment #36)
Yes, and yes.  It's essentially a new profile, except for disabling a few things
like Java and remembering passwords; it's used for testing only.  And yes, I closed
all Fx windows before viewing the log file.
In reply to comment #37:

The whole point of this bug is that LeakTest will show no leaks for it.  The memory is actually being freed internally, just not returned to the OS.  As for reproducing it with this specific URL, it seems the site has been redesigned and now only shows 10 results per page.  The point of the test case was that it generated a huge page with many results, causing major memory usage for the application that never decreases.  Maybe somebody should start looking for a testcase that still works for this bug?
Any large bugzilla search result (or even better view-source) for it works as a testcase for this bug.

Now could we please move all irrelevant comments (such as anything about the testcase attached here) to other bugs?
Whiteboard: Just load any large bugzilla search result to reproduce
Attachment #171787 - Attachment is obsolete: true
Yes, of course the LeakTest does not apply to this bug.  Doh!  Sorry about the bug spam.
A similar topic: bug 324081
*** Bug 354929 has been marked as a duplicate of this bug. ***
Bug 354929 has links to some forum threads where people ran into this on Linux and tried to fix it in the system allocator.

Now the question is whether we can tell what the fragmentation sources are.  Artem, do you know what those might be?  The forum threads don't say.  We might be able to try doing something about them if we knew.

brendan, you were saying something the other day about using OS malloc/free unless there are proven issues with them?  Here are the proven issues....  Our memory consumption story is a _lot_ better on Linux with an mmap/munmap based allocator than with a brk based one.
(In reply to comment #43)
> brendan, you were saying something the other day about using OS malloc/free
> unless there are proven issues with them?  Here are the proven issues....  Our
> memory consumption story is a _lot_ better on Linux with an mmap/munmap based
> allocator than with a brk based one.

Why can't Linux compete?  Other popular OSes do the obvious right thing of using page-wise allocation for sufficiently large allocations.

If we have to roll our own, I suggest we do so only for known-large allocators.

/be
(In reply to comment #43)
> Now the question is whether we can tell what the fragmentation sources are. 
> Artem, do you know what those might be?  The forum threads don't say.  We might
> be able to try doing something about them if we knew.

Unfortunately I'm not a guru of glibc and I don't know why this thing happens. I just know that OpenBSD allocator works (not without glitches and somewhat frequent firefox freezes) and doesn't cause Firefox to leak memory.
In comment 4, Boris notes that a huge amount of memory has been allocated
in an "arena".  (I don't know if this refers to an "arena pool", or to an
arena in an arena pool, but perhaps that distinction is unimportant.)  

There are two functions for destroying arena pools, one that actually frees
the arenas in the pool, and one that puts all the arenas from the pool into
an "arena free list".  Sadly, there is no upper bound on the amount of data 
held in the arena free list.  There's no threshold at which the code says
"That's enough in the free list, so I'll really free the rest of these arenas."

So, one wonders if the real issue here is that the ~35MB in the arena (pool)
is simply never being freed and instead is all going into the arena free list.

I think we should NOT consider using a different memory allocator for arenas
until we know that the problem is NOT a swollen arena free list.  Further,
it seems to me that if we're going to replace the memory allocator, we should
do so for more of Mozilla's core than merely for arenas.
Depends on: 366937
The arena freelist is a botch from the old days, implemented based on the paper cited in the top comment (the one about arenas in lcc).  We ripped it out of the forked js/src/jsarena.c file a while ago. Anything like it is suspect on its face, given malloc implementations that manage their own freelists.

This is independent of whether or not the arena freelist is the culprit here.

/be
Comment 4 is all about data that shows that we're releasing the data to libc and that libc is not releasing it to the OS.

The relevant arena pools are cleaned up when the page is unloaded using PL_FinishArenaPool.  If I read that right, it honestly deallocates the memory.
Thanks for that explanation, Boris.
Summary: Viewing large pages allocates memory and can't deallocate it → Virtual memory growth (without actual object leaks) due to libc malloc behavior
Okay, I've been writing C applications for several years now on Unix/Linux systems and have never heard of anything like an arena. Anyway, I think I know where the problem comes from, and I think I know the only solution there is to it. A process has a memory layout like this (simplified):

Code - Heap - Free Space - Stack(s)

The heap is one large block; it grows and shrinks as needed, but only at the end. Memory can't be moved around in C like in Java (it would break all pointers). So if you allocate like comment 4 did:

Some Data - Huge Data Block - Small Data Block

and you free Huge Data Block, the memory between Some Data and Small Data Block is "free", but it can't be given back to the system, because the heap may not have holes in it. Only when Small Data Block is released is the memory down to Some Data returned to the system. So the free memory is there and will be re-used (new allocations will use it, no problem), but it's not available to other processes.

There is only one way around such an issue. When you allocate memory with malloc, there are two ways it can be done:

1. Get some space from the heap and increase the heap if necessary.

2. Get some new free memory pages from the system and map them into the process space; in other words, build your own small extra heap area.

malloc actually uses both. Usually malloc has some logic to decide when to use which method. This logic is usually:

1. If the memory block requested is small, use the heap.

2. If the memory block requested is very big, use extra memory pages (what "very big" means is very system-dependent; every system has different ideas here, every implementation of malloc does, and some systems might even make this configurable).

3. If you can't expand the heap any further for whatever reason, use extra memory pages, even if the requested block is small.

At least that is the behavior AFAIK. Each method has advantages and disadvantages.

The advantages of using the heap are: it's faster, since memory freed there is not always returned to the system, and getting a free block of virtual memory already belonging to the process is quicker; further, the memory footprint is lower (when I need 10 bytes, I can get exactly 10 bytes). The disadvantage is that the memory can fragment, and these holes will keep the process memory big even if little space is used in the big heap.

The advantage of using dedicated memory pages is that when you free the block, the memory is immediately returned to the system. The disadvantages are that mapping memory pages into a process is slow, and that each memory page has a fixed size (usually 4 KB) and you can only get whole pages. That means even if you just need 10 bytes of memory, you waste 4 KB.

Now, how do you solve issues with the heap? If you are unhappy with how the system malloc works, bring your own. This is the simplest solution; there are about 20 free malloc implementations on the web. What malloc does is no magic: it manages a list of allocated memory and of the free blocks in between, and grows/shrinks the heap when possible or necessary.

A bit more tricky: Use more than one heap!

I don't know if that is what you refer to by arena, but malloc on most systems supports zones. Now what is a zone? Well, the heap is just one large memory block that grows/shrinks at the end; basically, the heap is nothing more than the "default zone". You can have malloc create a second (third, fourth, ...) zone, which is just like the heap, only located elsewhere in memory.

The big advantage of using such a zone: if you need to create many, many objects, and you know that sooner or later all of these objects will be released again (all of them), while other objects are created now and then in between or after this group, you avoid fragmentation problems by placing the big lot of objects into their own zone. That way they will not fragment your normal heap (you still place the other objects onto the heap, though). And when you want to release the objects of a zone, you can release the whole zone at once (fast: thousands of objects are dumped in one go), and once the zone is gone, its memory is immediately returned to the system.

E.g. every HTML page rendered, every tab, every JavaScript instance and the like could be its own zone. The heap would then only be for the user interface, network, etc. I admit that managing that many zones throughout a large project like this one is more than a bit of a pain, and every tiny mistake will basically crash your app. But I have used this before and it works great when you know how to use it wisely.
pav and vlad are looking into what can be done to reduce memory fragmentation.

http://www.pavlov.net/blog/archives/2007/11/memory_fragment.html
Summary: Virtual memory growth (without actual object leaks) due to libc malloc behavior → Virtual memory growth (without actual object leaks) due to libc malloc behavior (e.g. fragmentation)
This has been fixed for a while by going with jemalloc, right?
I doubt it. Virtual memory size increases continually here during the session (I saw 1 GB this weekend).
With what setup (application, version, plugins)? It was fairly well proved before the Firefox 3 release that jemalloc, which is now used (at least in Firefox 3.x), fixed the fragmentation problem. (Of course, if you use plugins that are built with other C libraries, that will change things.)
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.0.8pre) Gecko/2009022505 GranParadiso/3.0.8pre (.NET CLR 3.5.30729)

*** Plugins
Adobe Acrobat
Cortona VRML Client
Google Update
Java(TM) Platform SE 6 U12
Microsoft® DRM
Microsoft® Windows Media Player Firefox Plugin
Mozilla Default Plug-in
QuickTime Plug-in 7.4.1
QuickTime Plug-in 7.6
RealPlayer Version Plugin
RealPlayer(tm) G2 LiveConnect-Enabled Plug-In (32-bit) 
Shockwave Flash
Shockwave for Director
VLC Multimedia Plug-in
Windows Media Player Plug-in Dynamic Link Library
Windows Presentation Foundation


And of course some add-ons.
I think, at the very least, new issues should be tracked in a separate bug, with a new testcase.  This one is surely addressed (if not wholly fixed -- fragmentation will always be an issue) by jemalloc usage.
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
(In reply to comment #56)
> I think, at the very least, new issues should be tracked in a separate bug,
> with a new testcase.  This one is surely addressed (if not wholly fixed --
> fragmentation will always be an issue) by jemalloc usage.

Were any of these new bugs ever filed? If so, could the filer list them? Thanks.
(In reply to comment #57)
> (In reply to comment #56)
> > I think, at the very least, new issues should be tracked in a separate bug,
> > with a new testcase.  This one is surely addressed (if not wholly fixed --
> > fragmentation will always be an issue) by jemalloc usage.
> 
> Were any of these new bugs ever filed? If so, could the filer list them?
> Thanks.

???