Closed Bug 1014362 Opened 6 years ago Closed 6 years ago
Large infallible allocations in Url Classifier
I came across some large allocations in the Url Classifier. These can fail at the 2MB and 3MB plateaus on VM-constrained machines.

"aArray" in nsUrlClassifierPrefixSet::GetPrefixes:
http://hg.mozilla.org/releases/mozilla-release/annotate/0cd2e9a8ba6f/toolkit/components/url-classifier/nsUrlClassifierPrefixSet.cpp#l160

"array" in mozilla::safebrowsing::LookupCache::ConstructPrefixSet:
http://hg.mozilla.org/releases/mozilla-release/annotate/0414e679f2ab/toolkit/components/url-classifier/LookupCache.cpp#l618

https://crash-stats.mozilla.com/report/list?product=Firefox&signature=mozalloc_abort%28char+const*+const%29+|+mozalloc_handle_oom%28unsigned+int%29+|+moz_xrealloc+|+nsTArray_base%3CnsTArrayInfallibleAllocator%2C+nsTArray_CopyWithMemutils%3E%3A%3AEnsureCapacity%28unsigned+int%2C+unsigned+int%29+|+nsTArray_Impl%3Cunsigned+int%2C+nsTArrayInfallibleAll...#tab-reports

Given that this code has "safety" in the name... would a fallible allocation result in un-safety?
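To illustrate the distinction being asked about, here is a minimal sketch (not Mozilla code; plain `malloc` stands in for the mozalloc allocators) of the difference between an infallible allocation, which aborts the process on OOM as in the crash signature above, and a fallible one, which reports failure to the caller:

```cpp
#include <cstdint>
#include <cstdlib>

// Infallible style: OOM aborts the whole process, which is what the
// mozalloc_abort / mozalloc_handle_oom crash reports above correspond to.
uint32_t* AllocPrefixesInfallible(size_t aCount) {
  uint32_t* buf = static_cast<uint32_t*>(malloc(aCount * sizeof(uint32_t)));
  if (!buf) {
    abort();
  }
  return buf;
}

// Fallible style: the caller sees the failure and can degrade gracefully,
// e.g. by skipping one SafeBrowsing update instead of crashing.
bool AllocPrefixesFallible(size_t aCount, uint32_t** aOut) {
  *aOut = static_cast<uint32_t*>(malloc(aCount * sizeof(uint32_t)));
  return *aOut != nullptr;
}
```

The trade-off discussed in the rest of the bug is exactly this: with fallible allocation the crash goes away, but the failure path has to mean something acceptable (here, a disabled or delayed update).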
> would a fallible allocation result in un-safety?

Yes, it would disable SafeBrowsing updates. We've been through this code a few times to split up the allocations, and the peak memory usage is now fairly small. We're running out of possibilities to reduce it further, unless we want to go back to updates hogging the disk for several minutes by doing uncached SQLite IO :-)

In the first piece of code you mention, we can make a tiny tweak:
http://hg.mozilla.org/releases/mozilla-release/annotate/0cd2e9a8ba6f/toolkit/components/url-classifier/nsUrlClassifierPrefixSet.cpp#l164

We know in advance how big aArray will be, so we could do a SetCapacity before the loop. This would avoid some extra allocations/reallocations and slightly reduce the peak usage, as AppendElement won't need to double the capacity. In fact, it looks like we can get rid of it entirely by using retval directly:
http://hg.mozilla.org/releases/mozilla-release/annotate/0cd2e9a8ba6f/toolkit/components/url-classifier/nsUrlClassifierPrefixSet.cpp#l169
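The SetCapacity suggestion can be sketched with `std::vector` standing in for `nsTArray` (a hypothetical example, not the actual patch): reserving the known final size up front avoids the geometric-growth reallocations that append-style growth would otherwise perform, which is what briefly doubles peak memory during the copy.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: the final element count is known in advance, so reserve it once.
// reserve() here is analogous to nsTArray::SetCapacity, push_back to
// AppendElement.
std::vector<uint32_t> CopyPrefixes(const uint32_t* aSrc, size_t aCount) {
  std::vector<uint32_t> out;
  out.reserve(aCount);  // single allocation; no doubling while appending
  for (size_t i = 0; i < aCount; ++i) {
    out.push_back(aSrc[i]);  // capacity is already sufficient
  }
  return out;
}
```

Without the `reserve`, a growth policy that doubles capacity can momentarily hold both the old and new buffers alive during a reallocation, so the peak transiently exceeds the final size; reserving up front removes that spike.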
Assignee: nobody → gpascutto
Comment on attachment 8426932 [details] [diff] [review]
Patch 1. reduce-update-ram-usage

Review of attachment 8426932 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for fixing this.

::: toolkit/components/url-classifier/nsUrlClassifierPrefixSet.cpp
@@ +142,5 @@
>   NS_ENSURE_ARG_POINTER(aPrefixes);
>   *aPrefixes = nullptr;
>
> + uint32_t itemCount = mIndexStarts.Length() + mDeltas.Length();
> + uint32_t* retval = static_cast<uint32_t*>(nsMemory::Alloc(itemCount * sizeof(uint32_t)));

Can this have a better name than retval, like prefixArray or something?
Attachment #8426932 - Flags: review?(mmc) → review+
Uh, I guess I'd better either put the fixed patch or checkin myself :)
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 32
Comments such as https://bugzilla.mozilla.org/show_bug.cgi?id=995782#c5 and the continuous appearance of SafeBrowsing bugs involving OOMs on 1-2M allocations, even with free virtual/physical memory, suggest that making a >1M infallible allocation in Firefox is sufficiently problematic that we shouldn't do it. This is, to me, a sad and strange state of affairs.

This patch reworks the ~1.2M deltas array into an array of arrays, matching the actual data structure, which means it's about 6000 arrays of 100 2-byte elements each. That's obviously not the most memory-optimal new layout, but it's what I could reasonably do without making this code significantly more complex, introducing a huge risk of new bugs, and likely writing backing segmented array classes - for which I quite frankly don't have time.

So my question now is: does this actually improve the situation over a single contiguous array of about 1.2M? The actual memory usage jumps from about 1.2M to 1.7M, but it will now work as long as ~200-byte chunks of contiguous memory are available. I worry that this may just make the heap fragmentation even worse.

mmc r? for checking the algorithmic changes; dmajor/bsmedberg/anyone appropriate f? on the issue of >1M allocations being such a problem, and some advice on whether this patch is a good idea or not.
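The layout described above can be sketched as follows (a simplified illustration using `std::vector`, not the actual patch; the class and method names are invented for this example). Instead of one ~1.2M contiguous deltas array, the deltas live in many small arrays of up to 100 two-byte elements, so the largest single allocation is on the order of 200 bytes:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical segmented-deltas container: ~6000 segments of up to 100
// uint16_t deltas each, so no allocation ever needs a large contiguous block.
class SegmentedDeltas {
 public:
  explicit SegmentedDeltas(size_t aSegmentSize = 100)
      : mSegmentSize(aSegmentSize) {}

  void Append(uint16_t aDelta) {
    if (mSegments.empty() || mSegments.back().size() == mSegmentSize) {
      mSegments.emplace_back();
      mSegments.back().reserve(mSegmentSize);  // ~200 bytes contiguous, max
    }
    mSegments.back().push_back(aDelta);
  }

  uint16_t Get(size_t aIndex) const {
    return mSegments[aIndex / mSegmentSize][aIndex % mSegmentSize];
  }

  size_t Length() const {
    if (mSegments.empty()) {
      return 0;
    }
    return (mSegments.size() - 1) * mSegmentSize + mSegments.back().size();
  }

 private:
  size_t mSegmentSize;
  std::vector<std::vector<uint16_t>> mSegments;
};
```

The per-segment bookkeeping (one vector header per segment) is what accounts for the kind of overhead described above (~1.2M growing to ~1.7M): the total footprint rises, but allocation can succeed on a fragmented heap where no single large contiguous block exists.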
Comment on attachment 8465611 [details] [diff] [review] Patch 2. Replace big array by array of arrays Wrong bug :( :( :(