Closed Bug 985722 Opened 10 years ago Closed 10 years ago

don't make gethash requests for any full hash (-digest256) whitelists

Categories

(Core :: DOM: Security, defect)

Platform: x86_64 Linux
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla31

People

(Reporter: mmc, Assigned: mmc)

References

(Blocks 1 open bug)

Details

Because of

http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/LookupCache.h#38
 bool Confirmed() const { return (mComplete && mFresh) || mProtocolConfirmed; }

and 

http://mxr.mozilla.org/mozilla-central/source/toolkit/components/url-classifier/nsUrlClassifierDBService.cpp#829

     // We will complete partial matches and matches that are stale.
     if (!result.Confirmed()) {

we are fetching gethash completions even if the table in question contains complete hashes already.
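
For illustration, a minimal sketch of the decision path described above; only Confirmed() mirrors the actual LookupCache code, and the surrounding types and function names are made up for the example:

#include <vector>

// Simplified, hypothetical sketch of the lookup flow; only Confirmed()
// matches the real LookupCache.h code quoted above.
struct LookupResult {
  bool mComplete;           // the table stored the full 32-byte hash
  bool mFresh;              // the table was updated recently enough
  bool mProtocolConfirmed;  // already confirmed by an earlier gethash response
  bool Confirmed() const { return (mComplete && mFresh) || mProtocolConfirmed; }
};

// Matches that are partial OR stale get queued for a gethash completion.
// For a -digest256 table mComplete is always true, but once mFresh goes
// stale the match is no longer Confirmed(), so we still make the request.
void ProcessLookup(const LookupResult& result,
                   std::vector<LookupResult>& toComplete) {
  if (!result.Confirmed()) {
    toComplete.push_back(result);
  }
}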
Blocks: 985623
No longer depends on: 985623
The logic in that code is correct. I'm not convinced this is an actual bug.

https://developers.google.com/safe-browsing/developers_guide_v2

Thus, the client MUST not make any further full length hash requests for that hash, unless a client is following the timing requirements set forth in Request Frequency - HTTP Request for Data (below) and would be unable to issue a warning due to timing constraints in Lookups - Age of Data, Usage (e.g. the client has just been launched within the past 5 minutes and so has not yet done a list update, or the list is more than 45 minutes out of date because the client has been backed off and is following the update frequency requested by the server.)

The last section in parentheses is what the above code ensures.
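
A rough sketch of that timing exception, with the two constants taken from the spec text above and everything else illustrative rather than actual client code:

#include <cstdint>

constexpr int64_t kStartupGraceSec = 5 * 60;   // "launched within the past 5 minutes"
constexpr int64_t kMaxListAgeSec   = 45 * 60;  // "more than 45 minutes out of date"

// A client that already has a confirmed full-length hash may only re-request
// it when its local data could be too stale to warrant a warning on its own.
bool MayRequestCompletion(int64_t secondsSinceStartup,
                          int64_t secondsSinceLastUpdate) {
  return secondsSinceStartup < kStartupGraceSec ||
         secondsSinceLastUpdate > kMaxListAgeSec;
}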
Hmm, in that case I still want to know what was on the whitelist that was causing problems. It can't be the top-level cert string, because then Chrome would cause the same issue.
I also don't understand what it means to get a completion for a list that already contains complete hashes.
Confirm that the entry still exists on the remote server. (Remember that the DB can be weeks old just after startup.)
The doc in comment 1 predates the as-yet-undocumented application reputation API that Google developed later. I think we have to fix this. From bryner in an email:

After chatting with some other folks here, I wanted to clarify that the client should never send gethash requests for matches in the goog-downloadwhite-digest256 list.  This is true even if the client would otherwise consider itself out of date.

The rationale is that because this is a whitelist, we don't need the extra validation step to deal with false positives; removals are handled through the normal sub chunk mechanism.
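
A hedged sketch of the kind of check this implies; the helper below is illustrative only, and the actual change landed through bug 985623 rather than in this form:

#include <string>
#include <vector>

// Illustrative helper: treat any *-digest256 table as a full-hash list.
bool IsFullHashTable(const std::string& tableName) {
  const std::string suffix = "-digest256";
  return tableName.size() >= suffix.size() &&
         tableName.compare(tableName.size() - suffix.size(),
                           suffix.size(), suffix) == 0;
}

// Skip the gethash round trip for full-hash whitelists: they already carry
// complete hashes, and removals arrive through normal sub chunks.
void MaybeQueueCompletion(const std::string& tableName, bool confirmed,
                          std::vector<std::string>& toComplete) {
  if (!confirmed && !IsFullHashTable(tableName)) {
    toComplete.push_back(tableName);
  }
}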
Summary: don't make gethash requests for full-length hashes → don't make gethash requests for any whitelists
Summary: don't make gethash requests for any whitelists → don't make gethash requests for any full hash (-digest256) whitelists
Fixed by bug 985623
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Assignee: nobody → mmc
Target Milestone: --- → mozilla31