When profiling bug 1357621, I noticed a couple of cases where we can easily reduce the cost of dynamic memory allocation that we currently do for FloatManager.
Created attachment 8865239 [details] [diff] [review] Part 1: Allocate nsFloatManager on the stack instead of on the heap
Attachment #8865239 - Flags: review?(dbaron)
Created attachment 8865240 [details] [diff] [review] Part 2: Use reserved storage for 10 FloatInfo objects on the stack in nsFloatManager::mFloats
Attachment #8865240 - Flags: review?(dbaron)
My initial quick reaction is that 10 seems too high -- my guess is that almost all float managers have 0 floats, and then the distribution of those that do have floats peaks at 1. I'd assume that increasing the size of stack-allocation objects reduces memory locality for other things on the stack, so I think there is a real tradeoff. And FloatInfo isn't particularly small (2 words and 6 32-bit numbers.)
> Part 1: Allocate nsFloatManager on the stack instead of on the heap

It seems we already cache these, so allocating them on the stack probably doesn't help much:
http://searchfox.org/mozilla-central/rev/8b70b0a5038ef9472fe7c53e04a70c978bb06aed/layout/generic/nsFloatManager.cpp#66-100
Caching the FloatInfo arrays in some way might help though.
... which is what Part 2 does in a way, so taking only that part seems reasonable to me.
FYI, part 2 changes the size of nsFloatManager from 56 bytes to 464 bytes on a 64-bit machine. The nearest jemalloc sizes are 448 and 512, so we might as well make it 11 elements which makes it 504 bytes. 8 fits exactly in a jemalloc 384 byte chunk, 9 needs 424 bytes which fits in a 448 byte chunk but spills 24 bytes.
Comment on attachment 8865239 [details] [diff] [review]
Part 1: Allocate nsFloatManager on the stack instead of on the heap

Given that we already cache these, I think I'm inclined to think we shouldn't do this. It's a pretty sizeable increase in the size of stack frames in layout, which has two problems:
1. reduction in memory locality
2. reduction in the depth of frame tree at which we start hitting stack overflow crashes in Reflow (which is really a think, due to the large amount of stack storage we use in Reflow)

As an alternative, I'd suggest we should probably substantially increase NS_FLOAT_MANAGER_CACHE_SIZE.
Attachment #8865239 - Flags: review?(dbaron) → review-
s/think/thing/. (I'd suggest maybe 32 or 64 for the cache size?)
Comment on attachment 8865240 [details] [diff] [review]
Part 2: Use reserved storage for 10 FloatInfo objects on the stack in nsFloatManager::mFloats

On the assumption of *not* doing part 1, I think this probably becomes a good idea, as Mats suggests.
Attachment #8865240 - Flags: review?(dbaron) → review+
OK, taking only part 2 sounds good! I will file a follow-up to increase NS_FLOAT_MANAGER_CACHE_SIZE.
Pushed by email@example.com: https://hg.mozilla.org/integration/mozilla-inbound/rev/912f1804ec56 Use reserved storage for 10 FloatInfo objects on the stack in nsFloatManager::mFloats; r=dbaron
Is there a reason this must be precisely 10? 8 or 11 seems more optimal to me (see comment 6)
Follow-up: bug 1363252.
(In reply to Mats Palmgren (:mats) from comment #12)
> Is there a reason this must be precisely 10?
> 8 or 11 seems more optimal to me (see comment 6)

D'oh, sorry. I have an autolander setup locally that keeps landing patches as soon as I put them on a local branch. :-( It already did it once again earlier today before I remembered to address the review comment (https://bugzilla.mozilla.org/show_bug.cgi?id=1362880#c5). I'm still not quite used to my new computer setup yet. So sorry about this, will land a follow-up to bump up to 11. :-/
Pushed by firstname.lastname@example.org: https://hg.mozilla.org/integration/mozilla-inbound/rev/1eb6f7736fa4 follow-up: Bump up the reserved size to 11
Status: NEW → RESOLVED
Last Resolved: 2 years ago
status-firefox55: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla55