Created attachment 520772 [details] [diff] [review] Patch

Callers don't generally pass in so much data that the extra (temporary) memory hit is burdensome, and the new API being added in bug 643532 is a bit simpler to use. Moreover, this simplification enables various JSON performance enhancements that rely on not always having to save parser state in case of buffer boundaries. I'm way out of date on the proper standard data structure for an array of chars, but what I could find on MDN suggests nsTArray is the right container. This depends on the API added in the patch in bug 643532.
Comment on attachment 520772 [details] [diff] [review] Patch

- In nsJSONListener::ConsumeConverted():

+  PRUnichar* endelems = mBufferedChars.AppendElements(unicharLength);
+  if (!endelems)
+    return NS_ERROR_OUT_OF_MEMORY;

I think you picked the right container for this data, but it is an infallible container: AppendElements will never fail, it'll abort() on OOM. So there's no need to check here, or if you feel that's wrong, then we should change this to be a FallibleTArray<>. Seems to me that having this be an infallible container is ok, though. r=jst.
I landed this in TM a bit ago, and it recently got merged to m-c: http://hg.mozilla.org/mozilla-central/rev/7b977e10c721
If unicharLength is content-determined, this should be a fallible allocation. Very large JSON documents exist, and they can trigger OOM (at least on Windows) without the browser otherwise being fatally out of memory. Please cite the followup bug here. Thanks.

/be
This is only tenuously determined by content. decodeFromStream for this interface is only called from the search service, which appears to cache a JSON description of used search engines, and from the addon manager, which appears to cache a JSON description per extension. Neither use seems interestingly content-controllable to me, nor is either likely to be very large unless someone's deliberately trying to mess with things here (in which case, they can knock themselves out).