Bug 683597 (Closed) | Opened 12 years ago | Closed 12 years ago
Change jemalloc's accounting for huge allocations to round up to the next page boundary, not to the next MB
Categories: Core :: Memory Allocator, defect
Status: RESOLVED FIXED
Target Milestone: mozilla10
People: Reporter: justin.lebar+bug; Assigned: justin.lebar+bug
Whiteboard: [MemShrink:P2][inbound]
Attachments: 1 file: Patch v1 (7.03 KB, patch), khuey: review+
Description•12 years ago

jemalloc rounds "huge" allocations (1MB and up) up to megabyte boundaries. But the physical memory used is the allocation size rounded up to a page boundary -- if you allocate a page but never touch it, the OS never attaches physical memory to it. This is only a problem for huge allocations, because large allocations are already rounded up to page boundaries.

This matters, e.g., for bug 682195, where we want to account for how much memory is being used by media elements. If we allocate 1MB + 1B, we use 1MB + 4KB. I'd like to report that, not 2MB, in about:memory. But right now we have to report 2MB, because that's how much jemalloc counts against heap-allocated, which is used to compute heap-unclassified.
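To see the gap in practice, here is a minimal sketch (not from the bug; it assumes a glibc- or jemalloc-style build where malloc_usable_size is available and returns the chunk-rounded size described above):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size on glibc/jemalloc builds */

int main(void)
{
    size_t request = 1024 * 1024 + 1;   /* 1MB + 1B: a "huge" allocation */
    char *p = malloc(request);
    if (!p)
        return 1;

    /* jemalloc reserves 2MB of address space for this request, but only
       the pages actually touched get physical memory, so RSS grows by
       roughly 1MB + 4KB, not 2MB. */
    printf("requested %zu, usable %zu\n", request, malloc_usable_size(p));

    free(p);
    return 0;
}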
Assignee
Comment 1•12 years ago
This is [MemShrink] inasmuch as it influences accounting in about:memory.
Whiteboard: [MemShrink]
Comment 2•12 years ago

(In reply to Justin Lebar [:jlebar] from comment #0)
> jemalloc rounds "huge" allocations (1MB and up) up to megabyte boundaries.
> But the physical memory used is the allocation size rounded up to a page
> boundary -- if you allocate but never touch the page, the OS never attaches
> physical memory to that page.

So, you're saying that if we ask for 1 MB + 1 byte we end up using 1 MB + 1 page of actual memory (but reserving 2MB of address space)? Is that true on all OSs, or just Linux (where I'm presuming you tested)?
Assignee
Comment 3•12 years ago
> So, you're saying that if we ask for 1 MB + 1 byte we end up using 1 MB + 1 page of
> actual memory (but reserving 2MB of address space)?
Right.
I haven't tested specifically in this case with jemalloc, although I've tested this in general on Linux. I'm pretty sure it works the same way on Mac.
I don't know how it works on Windows, but in jemalloc, we set decommit on Windows, which explicitly decommits the slop pages.
Comment 4•12 years ago
But it still eats VM space even if decommitted. The original design goals for jemalloc probably didn't consider the exhaustion of VM space on 32-bit machines, but it's very much a reality nowadays, and it was the primary cause of crashes, or of my being forced to restart my browser, on WinXP with several hundred (mostly not loaded!) tabs.
Assignee
Comment 5•12 years ago
Right now 'explicit' is almost directly comparable to 'resident' (minus code size, which is constant), but it's not at all comparable to 'vsize'. I understand that vmem exhaustion is a problem, but I don't think that making 'explicit' count a hybrid between resident and non-resident memory is a useful step towards improving that situation.
Comment 6•12 years ago
Agreed; that's really a different bug than this one. (We could keep a separate count of this sort of slop, though it's semi-calculable from those values.) When tbpl.allizom.org/?usebuildbot=1 was leaking and I'd been running for a week or so in my 'huge' profile, I had 3.1GB explicit, 3.1GB resident, 2.9GB heap-allocated, 3.7GB heap-committed - and 5GB vsize (Linux x64). That's a big difference.
Comment 7•12 years ago
In the 1MB+1 case, does jemalloc's "heap-allocated" measurement count 2MB or 1MB+4KB? I've been assuming it's the former.

Furthermore, "heap-unclassified" is computed in about:memory to be "heap-allocated" minus everything reported by heap memory reporters. If my assumption is correct, then the heap reporters should report 2MB, as malloc_usable_size does; otherwise there'll be a portion (1MB-4KB) of dark matter that we'll never be able to avoid, which is bad.
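Spelled out, the about:memory arithmetic described above looks like this (a sketch with hypothetical numbers for a single 1MB+1B allocation; the variable names are illustrative, not actual Mozilla code):

size_t heap_allocated    = 2 * 1024 * 1024;     /* if jemalloc counts the chunk-rounded size */
size_t reporter_total    = 1024 * 1024 + 4096;  /* if a reporter reported the page-rounded size */
size_t heap_unclassified = heap_allocated - reporter_total;
/* = 1MB - 4KB of dark matter that no reporter can ever account for,
   unless reporters report 2MB like malloc_usable_size does. */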
Comment 8•12 years ago
(In reply to Nicholas Nethercote [:njn] from comment #7)
> In the 1MB+1 case, does jemalloc's "heap-allocated" measurement count 2MB or
> 1MB+4KB? I've been assuming it's the former.

Oh god, it appears to depend on whether MALLOC_DECOMMIT is defined. From huge_malloc():

#  ifdef MALLOC_DECOMMIT
	huge_allocated += psize;
#  else
	huge_allocated += csize;
#  endif

|psize| is the request rounded up to the nearest page (i.e. 1MB+4KB); |csize| is the request rounded up to the nearest "chunk ceiling" (i.e. 2MB). Remind me if we have MALLOC_DECOMMIT defined, and on which platforms?
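To make the two roundings concrete, a minimal sketch assuming 4KB pages and jemalloc's 1MB chunk size (the macro names here are hypothetical, not jemalloc's):

#include <stdio.h>

#define PAGE_SIZE  (4UL * 1024)       /* assumed: 4KB pages */
#define CHUNK_SIZE (1024UL * 1024)    /* jemalloc's 1MB chunks */

/* Round s up to the next multiple of the power-of-two boundary b. */
#define CEILING(s, b) (((s) + (b) - 1) & ~((b) - 1))

int main(void)
{
    size_t request = 1024 * 1024 + 1;  /* 1MB + 1B */
    printf("psize = %zu\n", CEILING(request, PAGE_SIZE));   /* 1052672 = 1MB+4KB */
    printf("csize = %zu\n", CEILING(request, CHUNK_SIZE));  /* 2097152 = 2MB */
    return 0;
}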
Comment 9•12 years ago
MALLOC_DECOMMIT is only defined on Windows (see http://mxr.mozilla.org/mozilla-central/source/memory/jemalloc/jemalloc.c#269)
Assignee
Comment 10•12 years ago
(In reply to Nicholas Nethercote [:njn] from comment #7)
> In the 1MB+1 case, does jemalloc's "heap-allocated" measurement count 2MB or
> 1MB+4KB? I've been assuming it's the former.

The proposal is to make it always be the latter, regardless of whether DECOMMIT is enabled.
Comment 11•12 years ago
In general, I want memory reporters to use malloc_usable_size. If we don't do that, we'll always miss the 10%+ of heap memory that is slop.

For non-huge (< 1MB) allocations, the meaning of "slop" is clear, and we can use malloc_usable_size without any confusion. But for huge (1MB+) allocations, "slop" can have one of two meanings, which I'll call "psize" and "csize" (as per comment 8). psize corresponds to RSS; csize corresponds to vsize.

malloc_usable_size uses the "csize" meaning, and you're proposing that "heap-allocated" be changed to always use the "psize" meaning. But if we do that, then we can't use malloc_usable_size in memory reporters if we want our measurements to be consistent. I suppose we could have an alternative version of malloc_usable_size that is just used in memory reporters and gives us the "psize" meaning. Ugh.

And I don't know how all this would work on non-jemalloc platforms... but (apart from Mac, which will use jemalloc soon anyway) we don't compute "heap-allocated" or "heap-unclassified" on non-jemalloc platforms, so maybe it doesn't matter.
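As a worked example of the two slop meanings (hypothetical numbers for the 1MB+1B request discussed above):

size_t request = 1024 * 1024 + 1;
size_t psize   = 1024 * 1024 + 4096;    /* page-rounded; tracks RSS   */
size_t csize   = 2 * 1024 * 1024;       /* chunk-rounded; tracks vsize */

size_t psize_slop = psize - request;    /* 4095B: memory actually wasted      */
size_t csize_slop = csize - request;    /* ~1MB: mostly untouched address space */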
Assignee
Comment 12•12 years ago
I don't know how hard it would be to change malloc_usable_size so it reports the psize instead of the csize, but we might be able to do that... Do we even have malloc_usable_size on non-jemalloc platforms?
Comment 13•12 years ago
> Do we even have malloc_usable_size on non-jemalloc platforms?
Yes. From memory/mozalloc/mozalloc.cpp:
size_t
moz_malloc_usable_size(void *ptr)
{
    if (!ptr)
        return 0;

#if defined(XP_MACOSX)
    return malloc_size(ptr);           /* OS X allocator */
#elif defined(MOZ_MEMORY)
    return malloc_usable_size(ptr);    /* jemalloc */
#elif defined(XP_WIN)
    return _msize(ptr);                /* Windows CRT heap */
#else
    return 0;                          /* unknown allocator */
#endif
}
Updated•12 years ago
Assignee: nobody → justin.lebar+bug
Whiteboard: [MemShrink] → [MemShrink:P2]
Assignee
Comment 14•12 years ago
Created attachment 565602 [details] [diff] [review]
Patch v1

Attachment #565602 - Flags: review?(khuey)
Assignee
Comment 15•12 years ago
I tested (in gdb on Linux) that if you malloc(1024 * 1024 + 1024) bytes, you get something with malloc_usable_size = 1MB + 4KB. If you then realloc to 1024 * 1024 + 5000, the malloc_usable_size is now 1MB + 8KB, as expected.
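That test could look roughly like this (a sketch; the expected values are taken from the comment above and assume the patched jemalloc):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size (glibc/jemalloc) */

int main(void)
{
    void *p = malloc(1024 * 1024 + 1024);
    printf("usable = %zu\n", malloc_usable_size(p));  /* expect 1MB + 4KB */

    p = realloc(p, 1024 * 1024 + 5000);
    printf("usable = %zu\n", malloc_usable_size(p));  /* expect 1MB + 8KB */

    free(p);
    return 0;
}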
Assignee
Comment 16•12 years ago
Comment on attachment 565602 [details] [diff] [review]
Patch v1

Canceling the review; I may have a clever way of solving this and bug 670967 with one simple patch.
Attachment #565602 - Flags: review?(khuey)
Assignee
Comment 17•12 years ago
Comment on attachment 565602 [details] [diff] [review]
Patch v1

Hm, no, it's not so simple.

(My idea was to define MALLOC_DECOMMIT everywhere, but only *actually* decommit when MALLOC_DECOMMIT_FOR_REAL is defined, on Windows. Then you'd get all the bookkeeping for free. But jemalloc assumes that commit() zeroes memory, which of course isn't the case if decommit and commit are no-ops. It was a nice idea.)
Attachment #565602 - Flags: review?(khuey)
Comment 18•12 years ago

Comment on attachment 565602 [details] [diff] [review]
Patch v1

Review of attachment 565602 [details] [diff] [review]:
-----------------------------------------------------------------

This looks fine, even though it's pretty scary. jlebar is going to verify that the assumptions about lazy physical page allocation hold on 10.6 before landing.
Attachment #565602 - Flags: review?(khuey) → review+
Assignee
Comment 19•12 years ago
> jlebar is going to verify that the assumptions about lazy physical page allocation hold on 10.6
> before landing.
Just did this. I mmap'ed 8GB with
mmap(NULL, 8L * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0)
(which is what jemalloc uses). Then I observed that the activity monitor's "real mem" column reflected not how much memory I'd mapped but how much of the mapping I'd written to.
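For reference, a Linux analogue of that experiment (a sketch, assuming the same lazy physical-page behavior; it reads RSS from /proc/self/statm instead of Activity Monitor, and maps a smaller region so it also works in 32-bit builds):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static long resident_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        if (fscanf(f, "%ld %ld", &size, &resident) != 2)
            resident = -1;
        fclose(f);
    }
    return resident;
}

int main(void)
{
    size_t len = 64UL * 1024 * 1024;   /* 64MB, to keep the demo small */

    long before = resident_pages();
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    long mapped = resident_pages();    /* barely changed: nothing touched yet */

    memset(p, 1, len);                 /* now touch every page */
    long touched = resident_pages();   /* grows by len / page size */

    printf("resident pages: before=%ld mapped=%ld touched=%ld\n",
           before, mapped, touched);
    munmap(p, len);
    return 0;
}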
Assignee
Comment 20•12 years ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/21df455d5083
Whiteboard: [MemShrink:P2] → [MemShrink:P2][inbound]
Comment 21•12 years ago
https://hg.mozilla.org/mozilla-central/rev/21df455d5083
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla10