Bug 281854 (Open) · Opened 20 years ago · Updated 2 years ago

PL arena free list grows boundlessly, simulating a leak

Categories

(NSPR :: NSPR, defect, P3)

Tracking

(Not tracked)

People

(Reporter: nelson, Unassigned)

Details

(Keywords: memory-leak, perf)

Attachments

(1 file)

When PL's arena free list is in use, no arena is ever truly freed, and the free list can grow without bound. In programs that mix arenas of many different sizes, large and small, this can cause the arena free list to grow significantly. It is easy to construct a simple program that loops, allocating arenas and freeing them, and demonstrates this unbounded growth. The program is correct: it frees everything it allocates, and does not leak. Yet as it runs, the program grows rapidly, by megabytes per second, with all of the growth in the arena free list. When run with NSS_DISABLE_ARENA_FREE_LIST=1 in the environment, the program does not grow in that way. I will attach a test program that demonstrates this quite effectively.
This zip file contains the source to an enhanced version of the "nsprleak" program that I originally attached to bug 281763. This version contains two test algorithms, one for bug 281763 and one for this bug. To reproduce this bug, invoke the program with a command line argument, e.g. nsprleak X

I think the solution to this bug involves two parts:
1. Put a ceiling on the size of any single arena that will be put into the free list, e.g. don't put multi-hundred-kilobyte arenas into the list.
2. Keep track of the total memory size of the arenas in the free list, and stop adding arenas to it when that total reaches some limit.
Nelson, is this bug related to bug 281866?
(In reply to comment #2)
> is this bug related to bug 281866?

Only in that they both relate to PLArenaPools. This bug says that the free list grows and grows without bound. Bug 281866 says that the *reallocated* (not free) arenas are often reallocated wastefully. An improvement to NSPR for bug 281866 may reduce the impact of this bug somewhat, but I think they're really separate bugs. They may have separate solutions, or one solution might solve both.
QA Contact: wtchang → nspr
Keywords: mlk
Documenting relationship to bug 166701.
If this bug were fixed, there would be a new opportunity for performance improvement. Users of PLArenaPools could call PL_FreeArenaPool instead of PL_FinishArenaPool, which would allow malloc'd arenas to be cached and avoid future calls to malloc. Without a fix for this bug, users of PLArenaPools are tempted to call PL_FinishArenaPool instead, which destroys the PLArenaPool and free()s the associated memory. If they were to call PL_FreeArenaPool instead, they could trigger this bug and "leak" terribly. However, PL_FreeArenaPool is in many cases what they should be calling, because they ought to want their pool's arenas to be reused by other pools, which is exactly what PL_FreeArenaPool is for.
Keywords: perf
Priority: -- → P3
NSS uses PL_FreeArenaPool extensively, almost exclusively, to "destroy" arena pools. Consequently, the total amount of memory allocated to arenas never shrinks. It grows, and at some point it levels off, but it never shrinks back from the maximum. Again, there is no leak here: no allocated arena is ever lost. When the PLArenaFreeList is finally flushed, all the freed arenas are truly freed to the heap. But until that flush happens, the free list is able to grow and grow. I suggest that there be a ceiling on the size of the arena free list, and that PL_FreeArenaPool simply behave like PL_FinishArenaPool when the free list is already at that maximum size. Forcing arenas to use a space that is a multiple of some large number may help avoid fragmentation and improve arena reuse, but in the short term it could actually increase the amount of memory sitting in the arena free list.
Nelson, I agree that no allocated arena is ever lost. I like your suggestion about bounding the arena_freelist with some maximum size (of arenas, or of bytes buffered?). Let me know if I can help.
Severity: normal → S3

The bug assignee is inactive on Bugzilla, so the assignee is being reset.

Assignee: wtc → nobody
