Closed Bug 153215 Opened 22 years ago Closed 12 years ago

Kill internal use of nsIMemory?

Categories

(Core :: XPCOM, defect)

defect
Not set
major

Tracking


RESOLVED WORKSFORME
Future

People

(Reporter: dougt, Unassigned)

References

Details

(Keywords: helpwanted)

Attachments

(1 file, 1 obsolete file)

Waterson cornered and drilled me yesterday about the need for nsIMemory. The arguments I gave were pathetic. I spoke with rpotts today (the one who I knew could argue for nsIMemory better than I could), and he had better arguments, yet they were still unrealistic. A couple of points that came up:

1. To ensure that the same allocator is used.
- Providing an interface to the allocator we use (malloc) should be enough. We don't have to use the interface directly.

2. We could do arena-based allocation under the hood of nsIMemory, so that at shutdown of Gecko we could dispose of the arena, thereby freeing any leaked memory.
- We already malloc/new directly.
- Staying vigilant about memory leaks is the only way to go.

3. Easy to change the allocator.
- Nope. Just replace malloc at build time.

4. The one use case that we found was the following: if someone were to embed a prebuilt Gecko into their application and their application used SmartHeap, there would be allocation and deallocation mismatches. However, the solution to this is to have the embedder use the nsIMemory interface.

So, what I am proposing is the following:

a. In all Mozilla code, we use malloc/free. This means everywhere, regardless of crossing interface boundaries.

b. In the standalone XPCOM glue code, we go through nsIMemory to allocate, so that an embedding can enforce the old "use nsIMemory" rule. This will be driven by a compiler flag. (The standalone glue is built by the embedder/component developer.)

Thoughts?
I think that there were a few more reasons for wanting to use nsIMemory allocation for cross interface allocation. However, of those we enumerated none were very important.
*** Bug 153213 has been marked as a duplicate of this bug. ***
*** Bug 153214 has been marked as a duplicate of this bug. ***
I can see how you can control the interfaces seen by embedders and use nsIMemory in them. But what about people who use Mozilla as an application framework and add their own components to Mozilla, interfacing directly with all sorts of "internal" Mozilla components? By passing directly-malloc'ed data across some of the "internal" Mozilla interfaces, you'd effectively force those "external" components to use the same runtime libraries as the rest of Mozilla links against. (On Windows you couldn't even mix DEBUG and RELEASE components, because they link to different runtime libs.) So by going over to direct malloc/free you're sacrificing a substantial amount of modularity. It would, e.g., be difficult to get to a situation where several applications on a user's machine share the same Mozilla base (whether that is desirable or not). OTOH, if we dump internal usage of nsIMemory, we might as well start using C++ features that are currently prohibited for XPCOM components. For the content element hierarchy, e.g., it would be desirable to use virtual inheritance so that we don't have to do any explicit interface forwarding (all those ugly NS_FORWARD_NSIDOMNODE_NO_CLONENODE's, etc.)
Alex, sorry I wasn't clear. Components *and* embedders still could use nsIMemory to access the same heap as Gecko. Basically, nsIMemory would map to the same malloc as it does today. There are no plans to remove it. I think that a rule could be to use the libc allocator when you allocate or free memory. If you are not using the libc allocator for whatever reason, use nsIMemory. Does this make sense?
what about the fun bits of mixing debug and non debug components w/ msvc, didn't they have different allocators?
Let me restate it again; sorry for the confusion. The proposal is:

* Mozilla uses libc - be that debug or not.
* Components and embedders that don't use the libc allocator (or are worried about the ndebug/debug problem) use nsIMemory, which maps to whatever allocator Mozilla is using.
I still think this doesn't remove the problem.

Firstly, if you start calling malloc directly, you'd be breaking your own rule, since for a debug Gecko build you'd be calling the libcd version of malloc. Hence you couldn't add release components to a debug Gecko build if those components take you up on the promise of always going through libc's malloc/free and call the libc versions of those functions directly.

Secondly, what do you define as "Gecko"? Is the editor part of it? If you stop using nsIMemory, then you couldn't replace just the editor component(s) with new versions, in case they link to a different runtime lib (e.g. debug instead of release - for whatever reason - or a different compiler's libc equivalent). To get around this you'd either have to always compile everything using the same libraries, or use nsIMemory in the editor component(s). But where do you draw the line? I can envisage a point in the future where people might want to drop in new content element factories or rendering libraries. Or you might want to replace only a part of Mozilla's core components with updated ones. This would significantly erode the definition of what constitutes the Gecko "core" and where you'd have to use nsIMemory or not. I think from an "application-on-top-of-Mozilla" developer's point of view, the only sensible definition of a Mozilla "core" would include the xpcom library alone.

Thirdly, what are the reasons for wanting to get rid of nsIMemory usage in the first place? Is it *really* a performance concern?
Sorry Doug, these comments (#8) were in response to your comment #6. (Just had a mid-air collision.)
... and of course I mean your comment #5, not #6.
> what are the reasons for wanting to get rid of nsIMemory usage in the first
> place? Is it *really* a performance concern?

I am not hell-bent on removing anything, really. I just want some concrete reasons why we have it. I don't really buy the debug/ndebug problem, as there are debug versions of Mozilla. Defining Gecko is a good thing and a hard one. I think what I was aiming at was to tell component developers and embedders to use nsIMemory and, internally, just use malloc, with the assumption (possibly a poor assumption) that the "internal" components *know* what the implementation is.
shaver was bugging me about string memory ownership and so forth a while back, so I'm adding him to the CC.

I guess my question is this: where exactly is it that we're allocating memory that needs to be used and possibly freed across interface boundaries? The only thing I can think of is "out" strings - the callee-allocated "string" and "wstring" in IDL. Aside from that, object creation/destruction is handled by the appropriate factory/Release() within each XPCOM object. I'm not aware of large blobs of memory that are passed around across interface boundaries. (But someone could certainly prove me wrong.)

So my take is this: use nsMemory for string allocation/deallocation. All other instances can use whatever allocator they want, because the allocation strategy is local to a particular module. In fact, if we can get people to STOP using nsMemory for non-string uses, we could start optimizing nsMemory for global string-sized data (i.e. recycling commonly-sized elements with nsFixedSizedAllocator). But frankly, if we're moving to an nsAString-oriented world, I don't even see the need to make that optimization unless there's some frozen string/wstring-based interface that's just killing us.
> where exactly is it that we're allocating memory that needs to be used and
> possibly freed across interface boundaries? The only thing I can think of is
> "out" strings - the callee-allocated "string" and "wstring" in IDL.

Suppose only strings are freed across interface boundaries. Don't they all just bottom out in the libc allocator? (Assume that we don't have the debug/ndebug problem timeless mentioned.)
Right, they do right now, but they're shielded behind nsMemory. So I guess what I was trying to say in my rambling was:

* the only really useful reason I see to use nsMemory is to have a specialized allocator for strings
* however, that might not actually be as useful as one might think, assuming nsAString solves most of our string-related performance/bloat problems.

So yeah, even strings could just default to the base allocator... Something still rubs me the wrong way about it - just a gut feeling related to embedders and such. Maybe something about how our string usage might fragment the embedder's heap and degrade the performance of an embedder's app? (Since now we're guaranteeing that we're sharing a heap with the embedder.)
Even if we implemented the global nsIMemory to use a different heap, there are plenty of places where we call down to malloc directly. Hence we are already sharing the heap with the embedder. Maybe this is just a bug.

How about this idea: make everyone use nsMemory (the static class) - everyone (no more use of malloc/new). This funnels all allocations into one place (similar to what JS does). If you link against XPCOM, the implementation of this class doesn't call through to the nsIMemory interface. If you link against the glue library (write-up on what this library is coming soon), nsMemory goes through the nsIMemory interface.

What does this buy us:

1. A single point for allocation.
2. No vtable jump for memory management.
3. Embedders and component developers use the same class to allocate, and it does the "right" thing (assuming they are using the glue lib).
4. A clearer allocation story. No more "if you're here do this, if you're there do that".

Thoughts?
Assuming by "make everyone use nsMemory" you mean just "out" strings, then this sounds great - I always wondered if nsMemory had a vtable jump or not - would be great to see it gone! I think the fact that many places do eventually call libc is just a bug... We could find many of those locations (at least the common ones) pretty easily by creating a separate heap (e.g. on Windows) for nsMemory and then running the build under Purify - they'd show up as FMMs, I think.
> The only > thing I can think of is "out" strings - the callee-allocated "string" and > "wstring" in IDL. and arrays!
> assuming by "make every one use nsMemory" you mean just "out strings" then
> this sounds great

Actually, I am thinking everything that is allocated would go through nsMemory. Using malloc directly *anywhere* would be a bug.
Attached patch nsMemory optimization (obsolete) — Splinter Review
This patch basically allows anyone linking to XPCOM to avoid the vtable jump. If the glue library is built in standalone mode (with the define XPCOM_GLUE), the old behavior will still exist (going through the interface pointer). I think that will give us a minor performance improvement while preserving the same memory management policy for components and embedders. Reviews?
ignore the changes to base/nsMemoryImpl.cpp in the above patch. They are not part of what I intended to do here.
Comment on attachment 88711 [details] [diff] [review] nsMemory optimization sr=alecf
Attachment #88711 - Flags: superreview+
So, are we killing the memory flusher stuff, too? I actually thought this was used on Mac.
The memory flusher is used on Mac, yes. I'm not sure that it does much that is useful, but Mac on OS 9 still needs it.
warren, simon, please see comment #20. The removal of the memory flusher thread crap will be addressed in another bug at another time.
ugh. I see what you mean. If an allocation fails, you want to purge... Why does the mac need this? Why can't this be done in the mac application allocator - eg. under the hood of malloc?
> Why does the mac need this?

On Mac OS 9, the application has limited heap space. Unlike other platforms, an allocation can fail before the machine is on its knees, and applications should be able to recover from this.

> Why can't this be done in the mac application
> allocator - eg. under the hood of malloc?

Er, you think that we should be doing things like flushing the XBL cache inside of a malloc call?
Certainly not without abstraction. This is what nsMemoryImpl does right now. If an allocation fails, we call:

FlushMemory(NS_LITERAL_STRING("alloc-failure").get(), PR_FALSE);

This posts an async event which tells memory pressure observers to release their caches. With the patch above (ignoring the change to base/nsMemoryImpl.cpp), we will skip flushing memory if malloc fails.

Now, suppose a malloc fails. We are possibly at a point where we cannot recover, and the act of posting these async events causes more memory to be allocated. Hence, I do not believe that flushing memory when an allocation fails buys us anything. Thoughts?
look out waterson, you've got a new name!
sorry about that chris. I have got warren on the brain...
> Now, suppose a malloc fails. we are possibly at a point where we can not > recover. Now, act of posting these async event causes more memory to be > allocated. Hence, i do not believe that flushing memory when an allocation > fails buy us anything. How about this: figure out approximately how much memory is required to do the event posting. Allocate and hold on to that amount of memory at startup. Then inside FlushMemory(), free that block before posting the event. Then reallocate and hold again afterwards. Or something like that.
the only platform which requires this hand holding is os 9, right?
OS 9, and any other OSes that have severe memory limitations and/or no VM (like some embedded systems?)
and nsMemoryImpl is the place where we handicap all platforms? Why not just #ifdef this notify pressure observer for weaker platforms?
> and nsMemoryImpl is the place where we handicap all platforms? Why not just > #ifdef this notify pressure observer for weaker platforms? Huh? This is useful functionality for every platform, even if the others don't currently depend on it.
dmose, prove it. Read comment #27.
Yeah, I like it. I guess my point is, right now, all of this pressure-observer stuff is handwaving.
I've actually crashed Bezilla due to a memory allocation failure (well, it was a new failure, but whatever). FWIW, NetBeans (the Java IDE, http://netbeans.org) talked about this same problem, and someone proposed the same thing that dmose proposed in comment 30. I don't know if they ever implemented it, but I could check.
If we *really* believe that checking every allocation for null buys us some protection against low-memory conditions, then *all* allocation must go through nsMemory. This means that any shortcut to PR_Malloc, new, or malloc makes this system brittle.
Actually, I would tend to disagree. I think that one possibility for an OOM situation is that someone is allocating some huge block of memory for something - let's say a 10k block for reading in a file. That allocation could fail, but only 1-2k of objects might be necessary in order to kick off the flushing, which might free up a few megs of memory so that future allocations would succeed. So maybe you'd see a string bundle fail to load, or a CSS file fail to be read/parsed, but the browser would recover a bit and be able to run a bit longer. Now, all of this is a "might", of course, but given that the only other option is simply to return null and possibly crash very soon, I'd rather see us try to recover as best as possible.
So, maybe that answers my initial question. We funnel everything through nsIMemory because it gives us the ability to detect low-memory situations and do something about them. If this is the case, then everything *must* go through nsMemory or nsIMemory.
Well, it's more like using nsMemory has this added feature, so I think we should highly encourage it... If you don't use it, then your allocation won't trigger a memory compaction - still a better situation than crashing :)
Attached patch patch v.2Splinter Review
Instead of going to PR_* directly, nsMemory (when not built for the glue code) calls directly into nsMemoryImpl.
Attachment #88711 - Attachment is obsolete: true
Comment on attachment 89345 [details] [diff] [review] patch v.2 How does this save on virtual function calls? nsMemoryImpl still has virtual methods, and the compiler doesn't know that nothing derives from nsMemoryImpl and overrides them (C++ doesn't have |final|).
Perhaps a better approach would be to have inline functions (member functions of something? a new class (ugh)?) that are called by both nsMemory (in the !defined(XPCOM_GLUE) case) and by nsMemoryImpl (which would be used by nsMemory in the defined(XPCOM_GLUE) case)?
moving out.
Keywords: helpwanted
Target Milestone: --- → Future
*** Bug 75620 has been marked as a duplicate of this bug. ***
OS: Windows 2000 → All
QA Contact: scc → xpcom
mass reassigning to nobody.
Assignee: dougt → nobody
obsolete in these days of jemalloc?
See Also: → 44352
NS_Alloc is our shared allocator.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WORKSFORME