Closed Bug 781589 Opened 12 years ago Closed 11 years ago

Convert some/all of the major XPConnect hash tables to something infallible

Categories

(Core :: XPConnect, defect)


Tracking


RESOLVED WONTFIX

People

(Reporter: mccr8, Assigned: mccr8)

References

Details

Attachments

(8 files, 1 obsolete file)

Currently, XPConnect uses PLDHashTable all over the place (well, the JS version, JSDHashTable). This is fallible, but I don't really trust that we're checking every failure everywhere we should, and if something fails due to OOM or whatever, I don't really trust that it's handled well. These allocations shouldn't really fail anyway.
Depends on: 782792
XPConnect is the only user of JSDHashTable in the tree!
http://mxr.mozilla.org/mozilla-central/ident?i=JSDHashTable&filter=
I'm inclined to convert all uses of JSDHashTable in XPConnect over to something infallible, so we can get rid of it. Do you have any particular opinions on hash tables, bholley?
(In reply to Andrew McCreight [:mccr8] from comment #2)
> Do you have any particular opinions on hash tables, bholley?

I don't, but I think luke and terrence do.
I thought infallible allocation could only be used when we could demonstrate that the size of the allocations could be very large due to content; it seems like the various maps in XPConnect could be very large.  Does the rest of DOM also allocate (say, nodes) infallibly?
(In reply to Luke Wagner [:luke] from comment #4)
> I thought infallible allocation could only be used when we could demonstrate
> that the size of the allocations could be very large due to content;

I assume you mean "can't be very large" here?

> it seems like the various maps in XPConnect could be very large.  Does the rest
> of DOM also allocate (say, nodes) infallibly?

My main philosophy here is skepticism that all of these maps are handling failure in a secure manner.

Is it really a danger that we can end up with a hash table operation failing when we're not already in a situation where we're about to die anyway from OOM?

But you are right, maybe conversion to infallibility should be done on a more case-by-case basis.  This bug could just be about eliminating the use of JSDHash in XPConnect, then followup bugs could flip the switch to make individual maps infallible as desired.
(In reply to Andrew McCreight [:mccr8] from comment #5)
> Is it really a danger that we can end up with a hash table operation failing
> when we're not already in a situation where we're already about to die
> anyways from OOM?

Yeah, that's what I was getting at by asking if DOM also dies on OOM.

> This bug could just be about eliminating the use
> of JSDHash in XPConnect, then followup bugs could flip the switch to make
> individual maps infallible as desired.

If you switch to a new hash table type, you'll effectively be auditing all uses and thus be able to catch any missing OOM checks.
(In reply to Luke Wagner [:luke] from comment #6)
> Yeah, that's what I was getting at by asking if DOM also dies on OOM.
I'm not sure, I'd have to look.
> 
> > This bug could just be about eliminating the use
> > of JSDHash in XPConnect, then followup bugs could flip the switch to make
> > individual maps infallible as desired.
> 
> If you switch to a new hash table type, you'll effectively be auditing all
> uses and thus be able to catch any missing OOM checks.

Well, not really, because XPCMaps do this weird wrapping thing that would isolate changes. ;) But yeah, I can just look around at the code. Even if there are OOM checks, it doesn't mean that we end up in a sane state. I've fixed a bunch of sec bugs recently caused by cruddy error-handling code that has never been tested. But I suppose it is easy enough to force a failure and see how badly things blow up...
If we OOM on account of wrapper maps we're pretty screwed. I think we should crash safely in those cases. FWIW, I'm pretty sure most of the DOM/XPCOM code uses operator new, which is infallible. So we're probably already going the infallible route for node allocation.

khuey, am I correct in this?
For small allocations in large numbers, Gecko is almost entirely infallible these days.  Nodes are definitely being fallibly allocated.

I think we should proceed with this.
definitely being *infallibly* allocated.
I use nsTHashtable and not nsDataHashtable to preserve the behavior that if there's an existing mapping, we use that value instead of overwriting it.
Blocks: 783820
Assignee: nobody → continuation
Attachment #652805 - Attachment is obsolete: true
Attachment #653817 - Attachment is patch: true
Attachment #653815 - Attachment description: convert Native2WrappedNativeMap → part 1: convert Native2WrappedNativeMap
I'd like to templatize a bit more here, but unfortunately the classes aren't quite close enough.  For instance, consider the Add method.  Three classes take a value as an argument, then extract a key from the value.  Two or three take a key and a value and work directly.  One of them is a set, so there is no value.  One class has this weird thing where keys can be converted to values under some circumstances, so there are two separate Add methods.

I noticed that the Remove method for IID2ThisTranslatorMap is never called, which seems a little sketchy.
This is the last of them.
njn converted these over to PLDHashTable, and I think these patches are really tedious without any particular benefit, so I'm just going to mark this WONTFIX. If in the future we have a reason to convert these over to an infallible hashtable, these patches will be here.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WONTFIX