Closed Bug 1134885 (Opened 10 years ago, Closed 10 years ago)

ChunkSets should be fallible

Categories: Toolkit :: Safe Browsing, defect

Status: RESOLVED FIXED
Target Milestone: mozilla39

Tracking:

| | Tracking | Status |
|---|---|---|
| firefox39 | --- | fixed |
People: Reporter: gcp, Assigned: gcp
Attachments (1 file, 1 obsolete file): patch, 17.94 KB, mmc: review+
Description

The underlying issue of https://bugzilla.mozilla.org/show_bug.cgi?id=1134858 is that ChunkSets are not currently fallible. In theory this isn't needed because they're small and the server is trusted. In practice, *cough*.
ni dougt for prioritization.
Flags: needinfo?(dougt)
Comment 2 • 10 years ago
ChunkSets should be fallible, and they shouldn't even try to allocate an insane amount of memory.
Comment 3 • 10 years ago (Assignee)
Comment 4 • 10 years ago (Assignee)
Make ChunkSet use FallibleTArray and add some basic sanity checks. I think ChunkSets were hit by this because they use a range, so only a few bytes are needed to make the update huge in terms of RAM. The other data, like prefixes, would probably require the server to push a load of data to the client to run into the same problem, so it's less of a worry.
Attachment #8566857 - Flags: review?(mmc)
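For readers unfamiliar with the pattern, here is a minimal sketch of what a fallible ChunkSet along the lines of comment 4 could look like. The `SetRange` helper, the `MAX_CHUNK_RANGE` constant, and the exact method names are illustrative assumptions, not the contents of the actual patch:

```cpp
// Minimal sketch, not the actual patch: method names, the MAX_CHUNK_RANGE
// constant, and the range check are assumptions for illustration.
#include <stdint.h>
#include "nsTArray.h"
#include "nsError.h"
#include "nsDebug.h"
#include "mozilla/fallible.h"

class ChunkSet
{
public:
  nsresult Set(uint32_t aChunk)
  {
    // The fallible AppendElement overload returns null on allocation failure
    // instead of aborting the process.
    if (!mChunks.AppendElement(aChunk, mozilla::fallible)) {
      return NS_ERROR_OUT_OF_MEMORY;
    }
    return NS_OK;
  }

  nsresult SetRange(uint32_t aBegin, uint32_t aEnd)
  {
    // Basic sanity check: a few bytes on the wire ("1-4294967295") must not
    // be allowed to expand into billions of in-memory entries.
    const uint32_t MAX_CHUNK_RANGE = 1000000; // illustrative limit
    if (aEnd < aBegin || aEnd - aBegin >= MAX_CHUNK_RANGE) {
      return NS_ERROR_FAILURE;
    }
    for (uint32_t chunk = aBegin; chunk <= aEnd; ++chunk) {
      nsresult rv = Set(chunk);
      NS_ENSURE_SUCCESS(rv, rv);
    }
    return NS_OK;
  }

private:
  FallibleTArray<uint32_t> mChunks;
};
```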
Updated • 10 years ago (Assignee)
Assignee: nobody → gpascutto
Status: NEW → ASSIGNED
Comment 5 • 10 years ago
Comment on attachment 8566857 [details] [diff] [review]
ChunkSets should be fallible
Review of attachment 8566857 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks, gcp.
Attachment #8566857 - Flags: review?(mmc) → review+
Comment 6 • 10 years ago
We should do something sane here for ranges. The chunkset that was being pushed was an adddel range.
Comment 7 • 10 years ago (Assignee)
The patch includes a sanity check on the range.
Comment 8 • 10 years ago
What I mean is, we should be able to handle an adddel range that just tries to wipe out all previous chunks (1-whatever) without allocating a ton of memory.
Comment 9 • 10 years ago (Assignee)
This ain't going to do much good if the callers don't check the result: we'll end up with an inconsistent DB, which is very bad as well. Propagate the errors upwards.

I took a look, and many of the other arrays are already fallible, but nothing is checking the return values. Sigh. I'm going to file bugs against nsTArray, because god knows how many other bugs like this we have.
Attachment #8567012 - Flags: review?(mmc)
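As context for the "callers don't check the result" point, the general pattern being asked for looks roughly like the sketch below. The function name and parameters are hypothetical; only the fallible-append idiom itself is real:

```cpp
// Sketch of the caller-side pattern: check the fallible append and propagate
// the failure as an nsresult. The function name and parameters are made up.
#include <stdint.h>
#include "nsTArray.h"
#include "nsError.h"
#include "mozilla/fallible.h"

nsresult
AppendUpdateData(FallibleTArray<uint32_t>& aDest,
                 const uint32_t* aData, size_t aLength)
{
  // Without this check the array silently stays short, the update code keeps
  // going, and we end up with exactly the inconsistent DB described above.
  if (!aDest.AppendElements(aData, aLength, mozilla::fallible)) {
    return NS_ERROR_OUT_OF_MEMORY;
  }
  return NS_OK;
}
```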
Updated • 10 years ago (Assignee)
Attachment #8566857 - Attachment is obsolete: true
Comment 10 • 10 years ago (Assignee)
(In reply to [:mmc] Monica Chew (please use needinfo) from comment #8)
> What I mean is, we should be able to handle an adddel range that just tries
> to wipe out all previous chunks (1-whatever) without allocating a ton of
> memory.
Changing ChunkSet so it stores ranges efficiently is an XXX in its header, but it's also a nice programming exercise to get it right. I think it's suitable for a mentored bug.
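As a rough illustration of that optimization (and not the implementation that eventually landed for it), a range-based ChunkSet could store runs instead of individual chunk numbers, so an adddel of "1-whatever" costs one entry rather than billions:

```cpp
// Sketch only: stores [begin, end] runs instead of expanding every chunk
// number. A real version would keep the runs sorted and merge overlaps.
#include <cstdint>
#include <vector>

class RangedChunkSet
{
public:
  void SetRange(uint32_t aBegin, uint32_t aEnd)
  {
    mRuns.push_back(Run{aBegin, aEnd});
  }

  bool Has(uint32_t aChunk) const
  {
    for (const Run& run : mRuns) {
      if (aChunk >= run.begin && aChunk <= run.end) {
        return true;
      }
    }
    return false;
  }

private:
  struct Run { uint32_t begin; uint32_t end; };
  std::vector<Run> mRuns;
};
```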
Comment 11 • 10 years ago (Assignee)
Filed bug 1135022 for that optimization.
Comment 12 • 10 years ago
Comment on attachment 8567012 [details] [diff] [review]
ChunkSets should be fallible
Review of attachment 8567012 [details] [diff] [review]:
-----------------------------------------------------------------
::: toolkit/components/url-classifier/HashStore.h
@@ -38,5 @@
> }
>
> // Throughout, uint32_t aChunk refers only to the chunk number. Chunk data is
> // stored in the Prefix structures.
> - void NewAddChunk(uint32_t aChunk) { mAddChunks.Set(aChunk); }
yikes :(
::: toolkit/components/url-classifier/ProtocolParser.cpp
@@ +26,5 @@
> namespace safebrowsing {
>
> // Updates will fail if fed chunks larger than this
> const uint32_t MAX_CHUNK_SIZE = (1024 * 1024);
> +// Updates will fail if the total number of tocuhed chunks is larger than this
touched
Attachment #8567012 - Flags: review?(mmc) → review+
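The "yikes" above refers to the old void signature silently dropping any failure from `Set`. A fallible version would presumably look something like the following; this is an assumption about the direction of the fix, not the verbatim patch:

```cpp
// Assumed shape of the fix: surface ChunkSet::Set's nsresult to the caller so
// a failed allocation aborts the update instead of being ignored.
nsresult NewAddChunk(uint32_t aChunk) { return mAddChunks.Set(aChunk); }
```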
Comment 13 • 10 years ago (Assignee)
Comment 14 • 10 years ago
Sorry, had to back this out in https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=7d9e802627f7 for causing bustage like:
https://treeherder.mozilla.org/logviewer.html#?job_id=6955039&repo=mozilla-inbound
Comment 15 • 10 years ago (Assignee)
Comment 16 • 10 years ago (Assignee)
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
status-firefox39: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla39
Updated • 10 years ago
Flags: needinfo?(dougt)