Closed Bug 1134885 Opened 5 years ago Closed 5 years ago

ChunkSets should be fallible


(Toolkit :: Safe Browsing, defect)




Tracking Status
firefox39 --- fixed


(Reporter: gcp, Assigned: gcp)



(1 file, 1 obsolete file)

Underlying issue of
is that ChunkSets are not currently fallible. In theory this isn't needed because they're small and the server is trusted.

In practice, *cough*.
ni dougt for prioritization.
Flags: needinfo?(dougt)
ChunkSets should be fallible, and they shouldn't even try to allocate an insane amount of memory.
Attached patch ChunkSets should be fallible (obsolete) — Splinter Review
Make ChunkSet use FallibleTArray and add some basic sanity checks.

I think ChunkSets were hit by this because they use a range,
so only a few bytes are needed to make the update huge in terms of RAM.
For the other data, like prefixes, the server would probably need to push
a load of data to the client to trigger the same problem, so it's less of a worry.
Attachment #8566857 - Flags: review?(mmc)
Assignee: nobody → gpascutto
Comment on attachment 8566857 [details] [diff] [review]
ChunkSets should be fallible

Review of attachment 8566857 [details] [diff] [review]:

Thanks, gcp.
Attachment #8566857 - Flags: review?(mmc) → review+
We should do something sane here for ranges. The chunkset that was being pushed was an adddel range.
The patch includes a sanity check on the range.
What I mean is, we should be able to handle an adddel range that just tries to wipe out all previous chunks (1-whatever) without allocating a ton of memory.
This isn't going to do much good if the callers don't check the result:
we'll end up with an inconsistent DB, which is just as bad.

Propagate the errors upwards.

I took a look and many of the other arrays are fallible already,
but nothing is checking the return values. Sigh. I'm going to file
bugs against nsTArray because god knows how many other bugs like
this we have.
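The propagation pattern the comment is asking for can be sketched as follows. This is illustrative only: nsresult and NS_ENSURE_SUCCESS are the Gecko idioms, and FallibleAppend here is a hypothetical stand-in for FallibleTArray's fallible AppendElement overload.

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-ins for Gecko's nsresult error codes.
using nsresult = int;
const nsresult NS_OK = 0;
const nsresult NS_ERROR_OUT_OF_MEMORY = -1;

// Hypothetical fallible append (the real code uses
// FallibleTArray::AppendElement, which can return failure).
static bool FallibleAppend(std::vector<uint32_t>& aArr, uint32_t aValue) {
  try {
    aArr.push_back(aValue);
    return true;
  } catch (...) {
    return false;
  }
}

// Every caller checks the result and propagates the error upwards;
// silently ignoring a failed append is what leaves the DB inconsistent.
nsresult AddChunks(std::vector<uint32_t>& aChunks,
                   const std::vector<uint32_t>& aNew) {
  for (uint32_t chunk : aNew) {
    if (!FallibleAppend(aChunks, chunk)) {
      return NS_ERROR_OUT_OF_MEMORY;  // propagate, don't swallow
    }
  }
  return NS_OK;
}
```

Making the array fallible is only half the fix; the bug noted above (unchecked return values on already-fallible nsTArrays) is the other half.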
Attachment #8567012 - Flags: review?(mmc)
Attachment #8566857 - Attachment is obsolete: true
(In reply to [:mmc] Monica Chew (please use needinfo) from comment #8)
> What I mean is, we should be able to handle an adddel range that just tries
> to wipe out all previous chunks (1-whatever) without allocating a ton of
> memory.

Changing ChunkSet so it stores ranges efficiently is an XXX in its header, but it's also a nice programming exercise to get it right. I think it's suitable for a mentored bug.
Filed bug 1135022 for that optimization.
Comment on attachment 8567012 [details] [diff] [review]
ChunkSets should be fallible

Review of attachment 8567012 [details] [diff] [review]:

::: toolkit/components/url-classifier/HashStore.h
@@ -38,5 @@
>    }
>    // Throughout, uint32_t aChunk refers only to the chunk number. Chunk data is
>    // stored in the Prefix structures.
> -  void NewAddChunk(uint32_t aChunk) { mAddChunks.Set(aChunk); }

yikes :(

::: toolkit/components/url-classifier/ProtocolParser.cpp
@@ +26,5 @@
>  namespace safebrowsing {
>  // Updates will fail if fed chunks larger than this
>  const uint32_t MAX_CHUNK_SIZE = (1024 * 1024);
> +// Updates will fail if the total number of touched chunks is larger than this

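A sanity check in the spirit of the patch quoted above might look like this. MAX_CHUNK_RANGE and the function name are assumptions for illustration; the patch's actual constant caps the total number of touched chunks per update.

```cpp
#include <cstdint>

// Assumed limit for illustration; the real patch defines its own cap in
// ProtocolParser.cpp next to MAX_CHUNK_SIZE.
const uint32_t MAX_CHUNK_RANGE = (16 * 1024 * 1024);

// Reject malformed or absurdly large chunk ranges before allocating
// anything, so a few bytes on the wire can't demand gigabytes of RAM.
bool IsSaneChunkRange(uint32_t aFirst, uint32_t aLast) {
  if (aLast < aFirst) {
    return false;  // malformed: range runs backwards
  }
  return (aLast - aFirst) < MAX_CHUNK_RANGE;  // cap touched chunks
}
```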
Attachment #8567012 - Flags: review?(mmc) → review+
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla39
Flags: needinfo?(dougt)