Closed
Bug 421747
Opened 16 years ago
Closed 15 years ago
Find out who is allocating and freeing memory with every mouse move
Categories: Core :: DOM: Events, defect, P2
Status: RESOLVED FIXED
People
(Reporter: moz, Assigned: smaug)
References: depends on 1 open bug
Keywords: perf
Attachments
(4 files)
patch, 2.50 KB
patch, 3.57 KB
patch, 7.13 KB (sicking: review+, superreview+)
patch, 5.71 KB (sicking: review+, superreview+)
Every single mouse movement or keystroke allocates many memory regions with malloc(), and then subsequently frees them with free(), both called by the NSPR arena memory allocation code. NSPR is not at fault here; there are just too many memory allocations being done with arenas. While frequent allocation and deallocation from arenas might be okay, allocations so big that the arena code needs to malloc a lot are not. A single keystroke should cause at most one malloc(), not many. And that memory should not be freed immediately: we should expect another keystroke or mouse move.
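The fix the description asks for can be sketched as a scratch allocator that holds on to its buffer between events. This is an illustrative sketch only, not the Gecko code; the class name and interface are invented for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch (not the Gecko code): a scratch allocator that keeps
// its buffer alive between events, so repeated dispatches reuse the same
// malloc'd block instead of allocating and freeing on every mouse move.
class EventScratch {
public:
    explicit EventScratch(std::size_t capacity) : buf_(capacity), used_(0) {}

    // Bump-allocate from the retained buffer; return nullptr when full.
    void* Allocate(std::size_t bytes) {
        if (used_ + bytes > buf_.size()) return nullptr;
        void* p = buf_.data() + used_;
        used_ += bytes;
        return p;
    }

    // Called after each event: forget the allocations but keep the buffer,
    // so the next keystroke causes zero calls to malloc().
    void Reset() { used_ = 0; }

    std::size_t BytesInUse() const { return used_; }

private:
    std::vector<char> buf_;  // allocated once, reused across events
    std::size_t used_;
};
```

Resetting instead of freeing is what turns "many mallocs per keystroke" into one malloc for the lifetime of the allocator.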
Reporter
Comment 1 • 16 years ago
See "EventTargetChainItem Pool" for keystroke allocations, and also "displayListArena" for mouse-movement allocations. Those pools are being allocated and destroyed for each user action. The first kind of pool is about 3.5K in size, the second kind is about 1K.
Updated • 16 years ago
Product: Firefox → Core
QA Contact: general → general
Reporter
Comment 2 • 16 years ago
Increasing the priority to reflect my uncertainty about the magnitude of the problem.
Priority: P4 → P2
Reporter
Comment 3 • 16 years ago
I removed the unnecessary calls to malloc due to the frequent creation and destruction of the "EventTargetChainItem Pool" and obtained ~60% reduction in the number of calls to malloc in a run of Tdhtml. I will do a similar thing with "displayListArena" and see what total reduction in malloc calls I can achieve. A patch is imminent. The current priority, P2, is appropriate.
Reporter
Comment 4 • 16 years ago
The "displayListArena" does not need to have a similar thing done to it, though I suggested otherwise in comment #3. This is because, with my change to "EventTargetChainItem Pool", the latter pool's arenas are freed to the global free arena list, and then picked up by the "displayListArena" pools. So, there is not often a call to malloc with the "displayListArena" pools after the change. This works because the two pools in question have the same usage patterns, and operate in tandem.
Reporter
Comment 5 • 16 years ago
Reporter
Updated • 16 years ago
Assignee
Comment 6 • 16 years ago
You still need to release the whole pool when shutting down. And note: if there is nested event dispatch, or the DOM is very deep, the ETCI pool may reserve quite a lot of memory.
Reporter
Comment 7 • 16 years ago
Smaug: Yes, the pool should be released when shutting down. Where is a good place to do this? Regarding the ETCI pool reserving a lot of memory under some circumstances: I don't understand why you mention this. Will you elaborate?
Assignee
Comment 8 • 16 years ago
(In reply to comment #7)
> Regarding the ETCI pool reserving a lot of memory under some circumstances: I
> don't understand why you mention this. Will you elaborate?

Some page may use deeply nested event dispatching or a deep DOM. That may increase the size of the pool a lot and cause lots of memory to be reserved. After the page has been unloaded, the memory could be used for something else, since usually the DOM isn't that deep and deeply nested event dispatch is rare.

I'm not sure what the following comment means: "Destroy the buckets, (partially) free the memory associated with the PLArenaPool, but do not destroy the PLArenaPool." FreeAll() calls PL_FreeArenaPool, which calls FreeArenaList with reallyFree=PR_FALSE. So when does the memory actually get released?
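The reallyFree distinction smaug is asking about can be modeled in a few lines. This is a hedged sketch of the behavior described in the thread, not the real NSPR code: names and signatures here are illustrative stand-ins.

```cpp
#include <cstddef>
#include <list>

// Toy model of the NSPR behavior discussed above: FreeArenaList with
// reallyFree == false parks a pool's arenas on a shared freelist for
// later reuse, while reallyFree == true returns the memory entirely.
// All names are illustrative, not the actual NSPR API.
struct Arena { std::size_t size; };

static std::list<Arena> gFreeList;  // stand-in for the global arena freelist

struct Pool { std::list<Arena> arenas; };

// Either keep the arenas around (memory stays reserved, no free() call)
// or drop them, modeling "actually released".
void FreeArenaList(Pool& pool, bool reallyFree) {
    if (reallyFree) {
        pool.arenas.clear();                             // memory released
    } else {
        gFreeList.splice(gFreeList.end(), pool.arenas);  // kept for reuse
    }
}
```

With reallyFree false, the pool looks empty afterwards but the bytes are still reserved on the freelist; that is the "when does it actually get released?" question in a nutshell.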
Assignee
Comment 9 • 16 years ago
I'd do something like this.
Assignee
Comment 10 • 16 years ago
(In reply to comment #9)
> Created an attachment (id=353203) [details]

The patch makes sure that after some nested event dispatch on a deep DOM tree, the memory is released, and memory is properly released when shutting down. It speeds up event dispatch a bit. Robin, what do you think about this approach?
Assignee
Updated • 16 years ago
Component: General → DOM: Events
QA Contact: general → events
Assignee
Comment 11 • 16 years ago
Note, the patch doesn't even try to handle all cases very strictly; it just makes sure that if it is possible that the pool uses lots of memory, it is released asap. In normal cases the pool doesn't get released after an event dispatch.
Assignee
Comment 12 • 15 years ago
Attachment #381300 - Flags: superreview?(jonas)
Attachment #381300 - Flags: review?(jonas)
Assignee
Comment 13 • 15 years ago
Comment on attachment 381300 [details] [diff] [review]
reuse the pool, v2

Forgot to say that this does actually speed up event dispatch a bit.
Reporter
Comment 14 • 15 years ago
(In reply to comment #10)
> Robin, what you think about this approach?

I failed to respond to this when you asked because I was too busy, but I will look into the most recent attachment now...
Reporter
Comment 15 • 15 years ago
(In reply to comment #13)
> (From update of attachment 381300 [details] [diff] [review])
> Forgot to say that this does actually speed up event dispatch a bit.

How do you measure this, out of curiosity?
Assignee
Comment 16 • 15 years ago
I have all sorts of tests which I profile using Shark.
Comment 17 • 15 years ago

Comment on attachment 381300 [details] [diff] [review]
reuse the pool, v2

Technically speaking, isn't this still a bit overly eager to destroy the pool? If we start firing one event, and then an event handler for that event fires 10 events at a node with 20 ancestors, we'll increase sEtciPoolReservers 200 times, whereas there are really only 20-something "etcis" in the pool at any given time. But maybe nested events aren't common enough to optimize for. r/sr=me
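The over-counting the reviewer describes is easy to demonstrate: a counter that only increments per allocated etci grows far past the number of items ever live at once. This sketch is illustrative only; the counter names are invented, not the patch's actual variables.

```cpp
#include <algorithm>
#include <cassert>

// Toy model of the review concern: a monotonic per-etci counter versus
// the pool's true peak occupancy. Names are illustrative, not the patch's.
struct EtciCounters {
    int monotonic = 0;   // v2-style: bumped per etci, never decremented
    int live = 0;        // items currently in the pool
    int highWater = 0;   // peak simultaneous usage

    void Push() { ++monotonic; ++live; highWater = std::max(highWater, live); }
    void Pop()  { --live; }
};

// A handler fires `events` events in a row, each walking a chain of
// `chainDepth` targets; each event's etcis are returned before the next.
EtciCounters SimulateDispatch(int events, int chainDepth) {
    EtciCounters c;
    for (int e = 0; e < events; ++e) {
        for (int i = 0; i < chainDepth; ++i) c.Push();
        for (int i = 0; i < chainDepth; ++i) c.Pop();
    }
    return c;
}
```

With 10 events over a 20-deep chain, the monotonic counter reaches 200 while the high-water mark stays at 20, which is exactly the gap the review points out.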
Attachment #381300 - Flags: superreview?(jonas)
Attachment #381300 - Flags: superreview+
Attachment #381300 - Flags: review?(jonas)
Attachment #381300 - Flags: review+
Reporter
Comment 18 • 15 years ago
(In reply to comment #12)
> Created an attachment (id=381300) [details]
> reuse the pool, v2

In comment #7, I asked:
> Smaug: Yes, the pool should be released when shutting down. Where is a good
> place to do this?

With this attachment, you have implicitly answered: "in nsLayoutStatics::Shutdown(), which is in layout/build/nsLayoutStatics.cpp". Is that correct?

The logic that you have added (the sEtciPoolReservers stuff) seems to be solving a different problem. If there are more than NS_CHAIN_POOL_SIZE sEtciPoolReservers, the sEtciPool is deleted (!) upon destruction of a ChainItemPool. If there are not more than NS_CHAIN_POOL_SIZE sEtciPoolReservers, then the sEtciPool is not deleted, and the memory in the pool is left allocated. But I don't think that this is what we want. We want the sEtciPool to be deleted only on shutdown (probably in nsLayoutStatics::Shutdown with a simple "delete sEtciPool"). We don't want sEtciPool to have memory allocated out of it when sEtciPoolUsers is 0, which could happen with this patch. Also, I don't like how the patch does a strange sort of reference counting with sEtciPoolReservers and GetPool().

So, I humbly suggest that my patch would be better than this one (if it had the "delete sEtciPool" added to nsLayoutStatics::Shutdown()) because it would not destroy the pool until shutdown, avoiding pool-creation costs, but it would free the memory associated with the pool when sEtciPoolUsers is 0. Let me know if my ignorance of layout/ code is leading me astray in my analysis.
Assignee
Comment 19 • 15 years ago
(In reply to comment #18)
> But, I don't think that this is what we want. We want the sEtciPool to be
> deleted only on shutdown

No, we don't want that. If the DOM is *very* deep, or there are several nested event dispatches, dispatching events would cause the pool to reserve lots of memory.
Assignee
Comment 20 • 15 years ago
See comment 8.
Reporter
Comment 21 • 15 years ago
(In reply to comment #19)
> No, we don't want that. If DOM is *very* deep, or there are several nested
> event dispatched, dispatching events would cause the pool to reserve lots of
> memory.

We have some confusion, I think. There are two distinct operations on pools that result in their associated memory regions being deallocated. One is deletion/destruction. The other is deallocation of the memory without the associated destruction of the pool. My patch does the latter only. Think of the pool as a metadata item, distinct from the memory that it manages. So, yes, we want the pool to be *deleted* only on shutdown, but we want the memory associated with the pool deallocated whenever sEtciPoolUsers is 0. Is that right?
Assignee
Comment 22 • 15 years ago
We want the memory to be deallocated on shutdown, or if the etcipool has possibly allocated lots of memory and sEtciPoolUsers == 0. But if only the default amount of memory has been allocated, we can and should reuse that.
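The policy in this comment, keep the default-sized reservation for reuse, but actually release the memory once no users remain and the pool has grown past its default size, can be sketched like this. This is a hedged illustration of the stated policy; the names and the threshold value are invented, not the real patch.

```cpp
#include <cassert>

// Stand-in for NS_CHAIN_POOL_SIZE; the real default size differs.
const int kDefaultPoolSize = 128;

// Toy model of the release policy from comment 22: memory is reused
// across events unless the pool grew large (deep DOM, nested dispatch)
// and the last user has left.
struct EtciPool {
    int users = 0;
    int allocated = 0;   // units currently reserved by the pool

    void Enter() { ++users; }

    // On exit, decide whether the reserved memory should be dropped.
    // Returns true when the pool's memory was released.
    bool Exit() {
        --users;
        if (users == 0 && allocated > kDefaultPoolSize) {
            allocated = 0;   // release an oversized reservation
            return true;
        }
        return false;        // keep default-sized memory for the next event
    }
};
```

In the common case (mouse moves over an ordinary page) the pool never exceeds the default size, so the memory is kept warm and no malloc/free churn occurs.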
Reporter
Comment 23 • 15 years ago
(In reply to comment #22)
> We want the memory to be deallocated on shutdown or if etcipool has possibly
> allocated lots of memory and sEtciPoolUsers == 0.
> But if only the default amount memory has been allocated, we can and should
> reuse that.

My patch (plus a "delete sEtciPool" in nsLayoutStatics::Shutdown()) achieves this. That is how the PL_ArenaPool code (which is used to build the nsFixedSizeAllocator) works. However, the semantics of the underlying PL_ArenaPool may and should change, and this change might affect the correctness of my patch and yours. That is why I long ago arranged for bug 166701 to block this bug.
Assignee
Comment 24 • 15 years ago
In your patch, where does the deallocation happen if lots of memory has been allocated and then sEtciPoolUsers drops to 0? FreeAll calls PL_FreeArenaPool, but I can't see any code which actually calls free. PL_FreeArenaPool calls FreeArenaList with reallyFree=PR_FALSE.
Reporter
Comment 25 • 15 years ago
(In reply to comment #24)
> In your patch, where does the deallocation happen if lots of memory
> has been allocated and then sEtciPoolUsers drops to 0?

Good question. It happens in the NSPR function FreeArenaList, but the memory is not actually freed back to the OS (yet). It goes onto an NSPR-library-owned list of free memory, which can then be reused by other users of NSPR arenas. It's kind of like buffering memory. It is faster than calling free() and then malloc() later, which is exactly what this bug exists to fix.
Assignee
Comment 26 • 15 years ago
(In reply to comment #25)
> Good question. It happens in the NSPR function FreeArenaList, but the memory
> is not actually freed back to the OS (yet).

That is not good enough. The pool may have allocated lots of memory. We do want to deallocate pretty soon, especially because in normal cases the event target chain doesn't need too much memory.

> It's kind of like buffering memory. It is faster than calling free() and then
> malloc() later, which is exactly what this bug exists to fix.

Well, my patch buffers memory too, in the most common cases, but deallocates soon if lots of allocation has happened.
Assignee
Comment 27 • 15 years ago
Sure, the sEtciPoolReservers handling could be improved if needed, but the patch should work OK in the most common cases, like when moving the mouse over a webpage.
Assignee
Comment 28 • 15 years ago
Better, then: count the exact etci count.
Assignee: moz → Olli.Pettay
Attachment #381934 - Flags: superreview?(jonas)
Attachment #381934 - Flags: review?(jonas)
Assignee
Comment 29 • 15 years ago
Jonas, want to review the v3 patch?
Attachment #381934 - Flags: superreview?(jonas)
Attachment #381934 - Flags: superreview+
Attachment #381934 - Flags: review?(jonas)
Attachment #381934 - Flags: review+
Assignee
Comment 30 • 15 years ago
http://hg.mozilla.org/mozilla-central/rev/ca8799e74642
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED