Closed Bug 1014362 Opened 6 years ago Closed 6 years ago

Large infallible allocations in UrlClassifier

(Toolkit :: Safe Browsing, defect)
Windows 7 | Not set | Firefox 32

(Reporter: dmajor, Assigned: gcp)
(Keywords: crash, Whiteboard: [MemShrink])
(1 file, 1 obsolete file)

> would a fallible allocation result in un-safety?

Yes, it would disable SafeBrowsing updates.
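For context, this is roughly what the fallible variant being discussed would look like. This is only a sketch: `TryCopyPrefixes` is an illustrative name, and plain malloc/free stand in for Gecko's allocators. The point is that on OOM the caller gets an error (and would have to abandon the SafeBrowsing update) instead of the process aborting.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Fallible allocation sketch: return false on OOM rather than crashing.
// In the real code, a false return here would mean the update is dropped.
bool TryCopyPrefixes(const std::vector<uint32_t>& aPrefixes,
                     uint32_t** aOut, uint32_t* aCount) {
  *aOut = nullptr;
  *aCount = 0;
  uint32_t* buf =
      static_cast<uint32_t*>(malloc(aPrefixes.size() * sizeof(uint32_t)));
  if (!buf) {
    return false;  // caller must treat this as "update failed"
  }
  for (size_t i = 0; i < aPrefixes.size(); ++i) {
    buf[i] = aPrefixes[i];
  }
  *aOut = buf;
  *aCount = static_cast<uint32_t>(aPrefixes.size());
  return true;
}
```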

We've been through this code a few times to split up the allocations, and the peak memory usage is now fairly small. We're running out of ways to reduce it further, though, unless we want to go back to updates hogging the disk for several minutes by doing uncached SQLite IO :-)

In the first piece of code you mention, we can make a tiny tweak:

We know in advance how big the aArray will be, so we could do a SetCapacity before the loop. This would avoid some extra allocations/reallocations and slightly reduce peak usage, since AppendElement won't need to double the buffer.
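A minimal sketch of that tweak, using std::vector::reserve as a stand-in for nsTArray::SetCapacity (the loop bodies and names here are illustrative, not the actual UrlClassifier code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Reserving the final size up front avoids the repeated grow-and-copy that
// push_back/AppendElement would otherwise do on each doubling, which both
// saves reallocations and lowers the transient peak (no old+new buffer
// alive at the same time during a grow).
std::vector<uint32_t> BuildPrefixArray(const std::vector<uint32_t>& aStarts,
                                       const std::vector<uint16_t>& aDeltas) {
  std::vector<uint32_t> prefixes;
  prefixes.reserve(aStarts.size() + aDeltas.size());  // size known in advance
  for (uint32_t s : aStarts) {
    prefixes.push_back(s);
  }
  for (uint16_t d : aDeltas) {
    prefixes.push_back(d);
  }
  return prefixes;
}
```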

In fact, it looks like we can get rid of it entirely by using retval directly:
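Roughly, that change looks like the following sketch: fill the out-buffer (retval) in place instead of accumulating into a temporary array and copying it over, which removes the temporary's allocation entirely. Names are illustrative, and plain malloc stands in for nsMemory::Alloc.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Write directly into the buffer that will be handed back to the caller;
// no intermediate array is ever allocated.
uint32_t* GetPrefixesDirect(const std::vector<uint32_t>& aStarts,
                            const std::vector<uint16_t>& aDeltas,
                            uint32_t* aCount) {
  uint32_t itemCount =
      static_cast<uint32_t>(aStarts.size() + aDeltas.size());
  uint32_t* retval =
      static_cast<uint32_t*>(malloc(itemCount * sizeof(uint32_t)));
  if (!retval) {
    *aCount = 0;
    return nullptr;
  }
  uint32_t i = 0;
  for (uint32_t s : aStarts) retval[i++] = s;  // fill in place...
  for (uint16_t d : aDeltas) retval[i++] = d;  // ...no temporary array
  *aCount = itemCount;
  return retval;
}
```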
Assignee: nobody → gpascutto
Attachment #8426932 - Flags: review?(mmc)
Comment on attachment 8426932 [details] [diff] [review]
Patch 1. reduce-update-ram-usage

Review of attachment 8426932 [details] [diff] [review]:

Thanks for fixing this.

::: toolkit/components/url-classifier/nsUrlClassifierPrefixSet.cpp
@@ +142,5 @@
>    NS_ENSURE_ARG_POINTER(aPrefixes);
>    *aPrefixes = nullptr;
> +  uint32_t itemCount = mIndexStarts.Length() + mDeltas.Length();
> +  uint32_t* retval = static_cast<uint32_t*>(nsMemory::Alloc(itemCount * sizeof(uint32_t)));

Can this have a better name than retval, like prefixArray or something?
Attachment #8426932 - Flags: review?(mmc) → review+
Uh, I guess I'd better either put up the fixed patch or check in myself :)
Keywords: checkin-needed
Closed: 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 32
Product: Firefox → Toolkit
Duplicate of this bug: 995782
Comments such as these, and the continuous appearance of SafeBrowsing bugs involving OOMs on 1-2 MB allocations even with free virtual/physical memory, suggest that making a >1 MB infallible allocation in Firefox is problematic enough that we shouldn't do it. This is, to me, a sad and strange state of affairs.

This patch reworks the ~1.2 MB deltas array into an array of arrays, matching the actual data structure, which means it's about 6000 arrays of 100 two-byte elements each. That's obviously not the most memory-optimal layout, but it's what I could reasonably do without making this code significantly more complex, introducing a huge risk of new bugs, and likely having to write backing segmented-array classes, for which I frankly don't have time.
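A sketch of that layout change, with std::vector standing in for nsTArray and a chunk size of 100 matching the description above (function and type names are illustrative): the one ~1.2 MB contiguous allocation becomes many small ones, so the allocator only ever needs ~200-byte contiguous chunks.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// One big deltas array -> ~6000 small arrays of <=100 two-byte deltas each.
using DeltaChunks = std::vector<std::vector<uint16_t>>;

DeltaChunks ChunkDeltas(const std::vector<uint16_t>& aDeltas,
                        size_t aChunkSize = 100) {
  DeltaChunks chunks;
  for (size_t i = 0; i < aDeltas.size(); i += aChunkSize) {
    size_t end = std::min(i + aChunkSize, aDeltas.size());
    // Each chunk is its own small allocation of at most ~200 bytes.
    chunks.emplace_back(aDeltas.begin() + i, aDeltas.begin() + end);
  }
  return chunks;
}
```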

So my question now is: does this actually improve the situation over a single contiguous array of about 1.2 MB? Actual memory usage jumps from about 1.2 MB to 1.7 MB, but updates will now succeed as long as ~200-byte chunks of contiguous memory are available. I worry that this may just make heap fragmentation even worse.
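As a back-of-the-envelope check of the numbers quoted above (assuming roughly 6000 chunks of 100 two-byte deltas, and taking the ~1.2 MB and ~1.7 MB figures at face value), the implied per-chunk overhead from array headers and allocator rounding works out to about 83 bytes:

```cpp
#include <cassert>

// Payload: 6000 chunks * 200 bytes = 1,200,000 bytes. Observed total is
// ~1,700,000 bytes, so the difference spread over the chunks gives the
// implied per-chunk overhead (array header + allocator slack).
int ImpliedPerChunkOverheadBytes() {
  const int kChunks = 6000;
  const int kPayloadPerChunk = 100 * 2;                  // 200 bytes of deltas
  const int kPayloadTotal = kChunks * kPayloadPerChunk;  // 1,200,000 bytes
  const int kObservedTotal = 1700000;                    // ~1.7 MB per patch
  return (kObservedTotal - kPayloadTotal) / kChunks;
}
```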

mmc r? for checking the algorithmic changes, dmajor/bsmedberg/anyone appropriate f? for the issue of >1M allocations being such a problem and some advice if this patch is a good idea or not.
Attachment #8465611 - Flags: review?(mmc)
Attachment #8465611 - Flags: feedback?(dmajor)
Attachment #8465611 - Flags: feedback?(benjamin)
Comment on attachment 8465611 [details] [diff] [review]
Patch 2. Replace big array by array of arrays

Wrong bug :( :( :(
Attachment #8465611 - Attachment is obsolete: true
Attachment #8465611 - Flags: review?(mmc)
Attachment #8465611 - Flags: feedback?(dmajor)
Attachment #8465611 - Flags: feedback?(benjamin)