Bug 969875 (Closed): Opened 10 years ago; closed 10 years ago

Use fallible allocations in the URL Classifier some more

Categories

Product/Component: Toolkit :: Safe Browsing
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla30
People

(Reporter: ehsan.akhgari, Assigned: ehsan.akhgari)

References

Details

Attachments

(1 file, 1 obsolete file)

      No description provided.
Assignee: nobody → ehsan
Blocks: 969864
Attachment #8372845 - Flags: review?(gpascutto)
Comment on attachment 8372845 [details] [diff] [review]
Use fallible allocations in the URL Classifier some more

Review of attachment 8372845 [details] [diff] [review]:
-----------------------------------------------------------------

I need to look a bit more at the callers and see how well we recover in all cases where we run out of memory.

::: toolkit/components/url-classifier/HashStore.cpp
@@ +622,1 @@
>    inBuff.SetLength(inLen);

You made it fallible, you need to deal with the consequences now.
(In reply to Gian-Carlo Pascutto (:gcp) from comment #2)
> ::: toolkit/components/url-classifier/HashStore.cpp
> @@ +622,1 @@
> >    inBuff.SetLength(inLen);
> 
> You made it fallible, you need to deal with the consequences now.

I can do that if you tell me what the failure path should look like.  I don't know much about this code.
For the context behind this patch, please see bug 969864.  Making nsTArray::SetLength return void uncovered some bugs in this code where we were checking the value of SetLength() and pretending to do OOM handling even though the allocations never failed.
Well, you'll want to start checking the SetLength return value and returning NS_ERROR_OUT_OF_MEMORY in the case above, for starters :)

I'll need to check the callers (HashStore.cpp, LookupCache.cpp, Classifier.cpp) and figure out a reasonable way to behave on OOM; I can't give you a straight answer without doing so.

HashStore already has one: http://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/HashStore.cpp#110

For LookupCache, something else can probably be done, as it may tolerate the allocations failing:
http://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/LookupCache.cpp#92
OK, I'll wait for more suggestions on what to do and will do it all in the next version of the patch.  Thanks!
After some discussion on IRC, I think only InflateReadTArray and the ones in ByteSliceRead should be fallible. mChunks should never be that big, neither should mCompletions, and handling errors on OOM in those gets complicated fast.
Attachment #8372845 - Attachment is obsolete: true
Attachment #8372845 - Flags: review?(gpascutto)
Attachment #8373701 - Flags: review?(gpascutto)
Comment on attachment 8373701 [details] [diff] [review]
Use fallible allocations in the URL Classifier some more; r=gcp

Review of attachment 8373701 [details] [diff] [review]:
-----------------------------------------------------------------

r+ with the modifications indicated

::: toolkit/components/url-classifier/HashStore.cpp
@@ +622,1 @@
>    inBuff.SetLength(inLen);

You need to check this and return NS_ERROR_OUT_OF_MEMORY if it fails.

It looks like all callers will do the right thing then.

@@ +625,5 @@
>    NS_ENSURE_SUCCESS(rv, rv);
>  
>    uLongf insize = inLen;
>    uLongf outsize = aExpectedSize * sizeof(T);
>    aOut->SetLength(aExpectedSize);

Same here.
Attachment #8373701 - Flags: review?(gpascutto) → review+
https://hg.mozilla.org/mozilla-central/rev/f5be8660f07b
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla30