Revisit the size of BLOCK_INCREMENT

RESOLVED FIXED in mozilla27



Opened: 11 years ago
Closed: 5 years ago


(Reporter: bzbarsky, Assigned: njn)


Keywords: memory-footprint, perf
Bug Flags:
blocking1.9 -

Firefox Tracking Flags: (Not tracked)



(2 attachments)

The presshell StackArena has:

#define BLOCK_INCREMENT 4044 /* a bit under 4096, for malloc overhead */
struct StackBlock {
   char mBlock[BLOCK_INCREMENT];
   StackBlock* mNext;
};

Ideally, each StackBlock would be exactly 4096 bytes.  We probably want to condition the size of BLOCK_INCREMENT on:

1)  The size of void*
2)  Whether the size of a 4096-byte (or slightly smaller) allocation is
    stored as part of the allocation by the allocator.  If it's not, we want
    it to be 4096 - sizeof(void*).  If it is, we want to allow for the
    allocator's memory overhead as well.

Of course we'd have to figure out what the allocator overhead really is, and it'll likely differ by OS, but it seems worthwhile to do this.
Flags: blocking1.9?

Comment 1

11 years ago
Worth answering this one way or the other before we ship.
Flags: blocking1.9? → blocking1.9+
Priority: -- → P2

Comment 2

11 years ago
Most allocators do not store metadata with objects that are as large as a page, so it would typically be possible to completely remove internal fragmentation by making each StackBlock precisely 4096 bytes.  Still, the current code only suffers ~1% internal fragmentation, so this doesn't look like a significant problem to me.

Comment 3

11 years ago
The problem is not the 1% internal fragmentation per se.  The problem is that if the allocator actually hands back a chunk of memory that's, say, 4048 bytes in size, it's easy for something else to get allocated in the remaining 48 bytes on the memory page.  Then when the StackBlock is deallocated we have a page with only a very small allocation on it, which is suboptimal...

Since StackBlocks are very short-lived, generally speaking, it would be good to avoid this situation.

Comment 4

11 years ago
Allocations of size, say, 4048 bytes, will always result in the allocator reserving an entire page (4096 bytes) for the allocation.  No allocation of the remaining 48 bytes on that page will ever occur.  This is true on Mac OS X, Linux, and very probably Windows.

Basically, memory allocators (malloc implementations) know about the problem that Boris is trying to anticipate, so they avoid it by wasting the 48 bytes.  Jason calls that wastage "internal fragmentation", and points out that it remains <1% of the total allocation.

You can reduce the internal fragmentation to 0 with code (informal) like this:

struct StackBlock {
   enum { BlockSize = PAGE_SIZE - sizeof(StackBlock*) };
   char mBlock[BlockSize];
   StackBlock* mNext;
};

To avoid someone changing the StackBlock structure and making it bigger than it should be, one could add some code like:

assert( sizeof(StackBlock) == PAGE_SIZE );
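Filled out slightly, and with the runtime assert replaced by a compile-time static_assert (C++11) so that growing the struct fails at build time, the suggestion might look like this (PAGE_SIZE is an assumed constant here, not an existing Mozilla symbol):

```cpp
#include <cstddef>

const std::size_t PAGE_SIZE = 4096;  // assumed; real code should derive this

struct StackBlock {
  // Leave exactly enough room for the trailing mNext pointer.
  enum { BlockSize = PAGE_SIZE - sizeof(StackBlock*) };
  char mBlock[BlockSize];
  StackBlock* mNext;
};

// If someone adds a field and pushes the struct past one page, the build
// breaks here instead of silently wasting memory at runtime.
static_assert(sizeof(StackBlock) == PAGE_SIZE,
              "StackBlock must be exactly one page");
```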


11 years ago
Flags: tracking1.9+

Comment 5

11 years ago
Boris, I'll take this bug, and add some code like I suggested above, if that's okay.

Comment 6

11 years ago
Sounds good to me!
Assignee: nobody → moz

Comment 7

5 years ago
Created attachment 810185 [details] [diff] [review]
(part 1) - Clean up the style of StackArena.{h,cpp}.

This patch fixes various style problems:  trailing whitespace, missing and
mispositioned braces, badly-written comments, etc.  There are no functional
changes.
Attachment #810185 - Flags: review?(bzbarsky)


5 years ago
Assignee: moz → n.nethercote

Comment 8

5 years ago
Created attachment 810188 [details] [diff] [review]
(part 2) - Remove insignificant wasted space in StackBlock.

This patch removes 44 (on 64-bit) or 48 (on 32-bit) bytes of wasted space in
each StackBlock.

I stumbled across this bug somehow and saw it would be easy to fix.
Unfortunately, it makes no difference in practice -- I added some printfs and
never saw more than one StackBlock allocated at any time.  If I'd known that in
advance I wouldn't have bothered :(
Attachment #810188 - Flags: review?(bzbarsky)

Comment 9

5 years ago
Comment on attachment 810185 [details] [diff] [review]
(part 1) - Clean up the style of StackArena.{h,cpp}.

> +  static const int MARK_INCREMENT = 50;

Leave this as a file static, please.  There are some concerns about our compilers outputting crappy code for non-default static init in a function.

>+// Class for stack scoped arena memory allocations.

Not sure why the changes to comment style, but ok...

Attachment #810185 - Flags: review?(bzbarsky) → review+

Comment 10

5 years ago
Comment on attachment 810188 [details] [diff] [review]
(part 2) - Remove insignificant wasted space in StackBlock.

>-    MOZ_ASSERT(aSize <= 4044);

We should keep asserting the size is smaller than the MAX_USABLE_SIZE, no?  Or will that still happen somewhere?

Attachment #810188 - Flags: review?(bzbarsky) → review+

Comment 11

5 years ago
> We should keep asserting the size is smaller than the MAX_USABLE_SIZE, no? 
> Or will that still happen somewhere?

The NS_ASSERTION in StackArena::Allocate() does it.
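For reference, a minimal sketch (hypothetical names and signature, not the actual StackArena code) of how the allocate path can keep that bounds check against a derived constant rather than the old hard-coded 4044:

```cpp
#include <cassert>
#include <cstddef>

const std::size_t PAGE_SIZE = 4096;  // assumed page size

struct StackBlock {
  enum { MAX_USABLE_SIZE = PAGE_SIZE - sizeof(StackBlock*) };
  char mBlock[MAX_USABLE_SIZE];
  StackBlock* mNext;
};

// Hypothetical allocate path: the old `aSize <= 4044` assertion becomes a
// check against the derived constant, so it stays correct if the layout
// ever changes.  Returns nullptr when the block is full (the real arena
// would chain a fresh StackBlock instead).
void* Allocate(StackBlock& aBlock, std::size_t& aPos, std::size_t aSize) {
  assert(aSize <= StackBlock::MAX_USABLE_SIZE &&
         "request larger than a whole block");
  if (aPos + aSize > StackBlock::MAX_USABLE_SIZE) {
    return nullptr;
  }
  void* result = aBlock.mBlock + aPos;
  aPos += aSize;
  return result;
}
```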
Last Resolved: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla27