Closed Bug 1102687 Opened 10 years ago Closed 10 years ago

Review memory allocation patterns in safebrowsing code

Categories

(Toolkit :: Safe Browsing, defect)

Platform: x86_64
OS: Windows 7
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla36

People

(Reporter: away, Assigned: gcp)

Details

(Whiteboard: [MemShrink])

Crash Data

Attachments

(5 files)

Despite some recent improvements, the safebrowsing code is still the top source of Large OOM crashes. 

I see a lot of code that copies large arrays between callee and caller. In some cases these could be improved such that the caller takes over the callee's data. The arrays that remain should use the SegmentedArray class with segments less than 1MB.
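For the nsTArray-to-nsTArray cases, one cheap way to hand data over instead of copying it is nsTArray::SwapElements, which exchanges the underlying buffers in O(1). A minimal sketch (made-up function, not actual safebrowsing code):

void GetData(nsTArray<uint32_t>& aOut)
{
  nsTArray<uint32_t> local;
  // ... fill |local| with prefixes ...
  // Hand the whole buffer to the caller; no element data is copied.
  aOut.SwapElements(local);
}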

Here are some things I noticed:

Classifier::ApplyTableUpdates
* This function gets an array from LookupCache::GetPrefixes, and (via AugmentAdds) copies the array data to |store|
* Does |store| really need to be heap-allocated?

Classifier::ReadNoiseEntries
* This function gets an array from LookupCache::GetPrefixes, and copies the array data to its out-parameter |aNoiseEntries|

LookupCache::GetPrefixes
* This function gets a C-style array from nsUrlClassifierPrefixSet::GetPrefixes and copies it into its nsTArray out-parameter. If nsUrlClassifierPrefixSet::GetPrefixes were changed to use an nsTArray, then LookupCache::GetPrefixes could simply pass the array through. (Even better, a SegmentedArray for both.)

nsUrlClassifierPrefixSet::LoadFromFd
* If indexSize == 0, we don't clear mIndexDeltas or mIndexPrefixes. Is that intentional? (I can't tell whether LoadFromFd can be called several times)
* Would it help to do a single mIndexDeltas.SetLength() instead of AppendElement() in the for loop? (The recent exponential growth change probably mitigates it)
* Each mIndexDeltas[i] could definitely use a SegmentedArray, as their SetLength is a popular OOM site (see the sketch below)
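If SegmentedArray above means something along the lines of mozilla::SegmentedVector (an assumption on my part), the point is to replace one big contiguous buffer with a chain of small fixed-size segments, so no single allocation is ever large enough to hit the Large OOM path. A sketch with made-up names:

#include "mozilla/SegmentedVector.h"

// 128 KiB segments: growth never requires one huge allocation.
typedef mozilla::SegmentedVector<uint32_t, 128 * 1024> PrefixSegments;

static nsresult
CopyPrefixes(const nsTArray<uint32_t>& aSrc, PrefixSegments& aDest)
{
  for (uint32_t prefix : aSrc) {
    if (!aDest.Append(prefix)) {
      return NS_ERROR_OUT_OF_MEMORY;  // at most one small segment failed
    }
  }
  return NS_OK;
}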
Depends on: 1102525
Here is some data from downloading the whole safebrowsing database when starting the browser with a new profile. The __GI_qsort_r ones are out of our control -- when you call qsort() glibc apparently does a merge sort, requiring extra space -- and will vary across platforms. But some of the others look like good candidates for improvement, and probably align with dmajor's observations in comment 0.
>* Does |store| really need to be heap-allocated?

No, but what alternative is there? It's a transient object whose size isn't known in advance, and it only exists during an update.

>* This function gets an array from LookupCache::GetPrefixes, and copies the
>array data to its out-parameter |aNoiseEntries|

No, that's not correct. The out-parameter is an empty array that's allocated in the caller.

>LookupCache::GetPrefixes
>* This function gets a C-style array from nsUrlClassifierPrefixSet::GetPrefixes and copies it into its
>nsTArray out-parameter. If nsUrlClassifierPrefixSet::GetPrefixes were changed to use an nsTArray, then
>LookupCache::GetPrefixes could simply pass the array through. (Even better, a SegmentedArray for both.)

The fact that it has to call GetPrefixes is unfortunate, but complicated to solve. The fact that GetPrefixes keeps two copies of the array might be an easier win. 

http://dxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/LookupCache.cpp#698
Can we do some sort of transfer-of-ownership of the raw data into the nsTArray? I didn't see any way to do that, though.

Changing GetPrefixes to use an nsTArray is AFAIK not possible because it's exposed to JS. I checked, and the only users are our tests, so an alternative is to make the "native" function use nsTArray and do the copying in the JS-exposed one.
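Roughly this kind of split, as a sketch (GetPrefixesNative is a made-up name, and error handling is elided):

// Internal, non-scriptable getter: fills the caller's array directly.
nsresult
nsUrlClassifierPrefixSet::GetPrefixesNative(nsTArray<uint32_t>& aOut)
{
  // ... fill |aOut| ...
  return NS_OK;
}

// The XPCOM/JS-visible method keeps the old C-style signature; the one
// remaining copy happens only for script callers (i.e. our tests).
NS_IMETHODIMP
nsUrlClassifierPrefixSet::GetPrefixes(uint32_t* aCount, uint32_t** aPrefixes)
{
  nsTArray<uint32_t> prefixes;
  nsresult rv = GetPrefixesNative(prefixes);
  NS_ENSURE_SUCCESS(rv, rv);

  uint32_t* retval = static_cast<uint32_t*>(
    moz_xmalloc(prefixes.Length() * sizeof(uint32_t)));
  memcpy(retval, prefixes.Elements(), prefixes.Length() * sizeof(uint32_t));
  *aCount = static_cast<uint32_t>(prefixes.Length());
  *aPrefixes = retval;
  return NS_OK;
}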

>* If indexSize == 0, we don't clear mIndexDeltas or mIndexPrefixes. Is that intentional? (I can't tell whether LoadFromFd can be called several times)

It's only once (for as long as the objects exist).

>* Would it help to do a single mIndexDeltas.SetLength() instead of AppendElement() in the for loop?

Yes, that's an easy win now that the size is known in advance.
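Something like this sketch, using the fallible SetLength overload (assuming indexSize is the element count read from the file header):

// One up-front allocation instead of repeated growth in the loop.
if (!mIndexDeltas.SetLength(indexSize, mozilla::fallible)) {
  return NS_ERROR_OUT_OF_MEMORY;
}
for (uint32_t i = 0; i < indexSize; i++) {
  // ... read directly into mIndexDeltas[i] instead of AppendElement() ...
}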

>* Each mIndexDeltas[i] could definitely use a SegmentedArray as their SetLength is a popular OOM site

They're 120 x 2-byte arrays! Are you sure your profiling is right? See https://bugzilla.mozilla.org/show_bug.cgi?id=1046038
> Can we do some sort of transfer-of-ownership of the raw data into the
> nsTArray? I didn't see any way to do that, though.

I don't think you can do that with nsTArray, but if you were to switch to mozilla::Vector you could use the replaceRawBuffer() method.
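That is, something like this sketch (made-up helper; |aRaw| must have been allocated by the same allocator the Vector will free it with, i.e. malloc for the default AllocPolicy):

#include "mozilla/Vector.h"

static void
AdoptPrefixes(uint32_t* aRaw, size_t aCount, mozilla::Vector<uint32_t>& aOut)
{
  // Takes ownership of |aRaw| in O(1); no copy, the Vector frees it later.
  aOut.replaceRawBuffer(aRaw, aCount);
}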
(In reply to Gian-Carlo Pascutto [:gcp] from comment #2)
> >* Does |store| really need to be heap-allocated?
> 
> No, but what alternative is there? It's a transient object with
> unknown-size-in-advance that only exists during an update.

Can you not say: HashStore store(aTable, mStoreDirectory);
It's probably minor but it stuck out while I was reading.
 
> >* This function gets an array from LookupCache::GetPrefixes, and copies the
> >array data to its out-parameter |aNoiseEntries|
> 
> No, that's not correct. The out-parameter is an empty array that's allocated
> in the caller.

The spirit is the same: for some period of time, there are two copies of the array.

> Changing GetPrefixes to use an nsTArray is AFAIK not possible because it's
> exposed to JS.

Sorry, my DXR searches didn't see that caller because of the upper/lowercase IDL conversion :(

> >* Each mIndexDeltas[i] could definitely use a SegmentedArray as their SetLength is a popular OOM site
> 
> They're 120 x 2 byte arrays! Are you sure your profiling is right? See
> https://bugzilla.mozilla.org/show_bug.cgi?id=1046038

I'm only going by crash reports. The crashes on that line must be from the 33 release, and the bug you linked says mozilla34. If it's fixed already, that's great!
Assignee: nobody → gpascutto
Status: NEW → ASSIGNED
Attachment #8526773 - Flags: review?(dmajor) → review+
Attachment #8526772 - Flags: review?(dmajor) → review+
I want to spend some extra time walking through part 2. I'll take a closer look on NZ Monday.
(In reply to David Major [:dmajor] (UTC+13) from comment #4)

> > >* This function gets an array from LookupCache::GetPrefixes, and copies the
> > >array data to its out-parameter |aNoiseEntries|
> > 
> > No, that's not correct. The out-parameter is an empty array that's allocated
> > in the caller.
> 
> The spirit is the same: for some period of time, there are two copies of the
> array.

The out-parameter is an empty array into which only a very limited number (the count param) of specific entries from GetPrefixes' return value are copied. Either there's a bug in my code that I'm blind to, we're talking about different things, or you misunderstood that.
Ah, I see. It wasn't obvious from the code that aCount is significantly smaller than the size of the other array. Thanks for the clarification.
Comment on attachment 8526771 [details] [diff] [review]
Avoid copying and allocating 3 times in GetPrefixes. r=

Review of attachment 8526771 [details] [diff] [review]:
-----------------------------------------------------------------

::: toolkit/components/url-classifier/nsUrlClassifierPrefixSet.cpp
@@ +187,2 @@
>  
>    *aCount = itemCount;

I'm surprised the compiler doesn't complain about assigning a uint64_t to a uint32_t through *aCount.
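For what it's worth, making the narrowing explicit and checked would look something like this (a sketch, not part of the attached patch):

MOZ_ASSERT(itemCount <= UINT32_MAX);
*aCount = static_cast<uint32_t>(itemCount);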
Attachment #8526771 - Flags: review?(dmajor) → review+
Attachment #8526770 - Flags: review?(dmajor) → review+
There's a bunch of extra things we can address once Bug 1102525 lands.
Let's follow up in:
https://bugzilla.mozilla.org/show_bug.cgi?id=1103857
No longer depends on: 1102525
Crash Signature: , nsTArray_CopyWithMemutils>::EnsureCapacity(unsigned int, unsigned int) | nsTArray_Impl<unsigned int, nsTArray... ] → , nsTArray_CopyWithMemutils>::EnsureCapacity(unsigned int, unsigned int) | nsTArray_Impl<unsigned int, nsTArray... ] [@ OOM | large | mozalloc_abort | mozalloc_handle_oom | moz_xrealloc | nsTArray_base<T>::EnsureCapacity | nsTArray_Impl<T>::AppendElement…