Closed Bug 475436 Opened 11 years ago Closed 11 years ago

TinyURL and other URL shorteners being used to bypass safe browsing (?)

(Toolkit :: Safe Browsing, defect, P2)

(Reporter: beltzner, Assigned: dcamp)

(Keywords: regression, verified1.9.1, Whiteboard: [sg:low])

(1 file)

from ReadWriteWeb (article in the URL field, above):

> In tests, the reason that the TinyURLs were able to be used in this way is 
> because the pages they masked were not at the domain level, but were rather 
> sub-pages of a domain marked as "safe." This actually points to a weakness in 
> the Safe Browsing feature and not really a security risk in the TinyURL service 
> in and of itself. Because Safe Browsing only ranks sites at the domain level, 
> infected sub-pages will always be ranked as "non-malicious" as long as the 
> domain is categorized as "safe."

This doesn't seem right to me, thus the (?) in the bug title. I would expect that we would resolve each of the TinyURL items to an actual URL, then as we request that actual URL we'd run it through the SafeBrowsing warden for a lookup.

Dave: can you look at this and tell us if it's INVALID or real? If it's real, how is it bypassing the SafeBrowsing feature?
Flags: blocking-firefox3.1?
If the implementation is working correctly, a redirect (on a subresource or top-level page) should trigger another check against the SB database. Using redirectors should not successfully avoid a SafeBrowsing hit.
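To illustrate why a working implementation defeats shorteners, here is a minimal hypothetical sketch (not Mozilla's actual necko/SafeBrowsing code; all names and hosts are made up): the loader re-runs the blocklist lookup on every redirect hop, so a shortener's 301/302 to a flagged page still produces a hit.

```python
# Hypothetical sketch: re-check the SafeBrowsing blocklist on each redirect hop.
# BLOCKLIST and REDIRECTS are illustrative stand-ins, not real data sources.

BLOCKLIST = {"evil.example.com"}                            # assumed flocked host
REDIRECTS = {"tiny.example/abc": "evil.example.com/page"}   # simulated shortener 301

def host_of(url):
    """Return the host portion of a scheme-less URL."""
    return url.split("/", 1)[0]

def fetch(url):
    """Follow redirects, running a blocklist lookup on every hop."""
    while True:
        if host_of(url) in BLOCKLIST:
            return ("blocked", url)    # warning page would be shown here
        if url in REDIRECTS:           # simulated redirect response
            url = REDIRECTS[url]
            continue                   # crucially, re-check the *new* URL
        return ("ok", url)
```

With this shape, `fetch("tiny.example/abc")` ends in a block even though the shortener host itself is clean; skipping the re-check on redirect is exactly the bypass the article alleges.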

WFM - Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2a1pre) Gecko/20090126 Minefield/3.2a1pre
WFM - Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv: Gecko/2008120121 Firefox/3.0.5

I'm gonna resolve this as INVALID since nobody can reproduce it. Dave, re-open if you think our analysis is flawed. Blocking+, though, since if it were a real bug, it would likely have to block.

(Ian: thanks for your input here. Are you guys working up a response to RWW? Wanna do a joint one?)
Closed: 11 years ago
Flags: blocking-firefox3.1? → blocking-firefox3.1+
Resolution: --- → INVALID
data:text/html,<iframe src=""> works as it should too.
For the sake of completeness, this attack would have worked against Firefox 2.0's phishing filter, but not against 3.0+.
From an email from the study author:

> Saw your comment saying that you couldn't reproduce this issue; here is a test
> case (please handle with caution, this is a real malicious URL):
> view-source:
> view-source:  (WARNING: this is malware)

I can confirm that these both appear to end up at the same URL, and while the former is blocked, the latter is not. On Firefox 3.1 nightlies the latter will beachball the browser for a bit, but not crash it - be ye aware!
Resolution: INVALID → ---
Both of these trigger a malware warning for me using Fx3.0.7pre on Mac. Does this test depend on how recently these URLs were reported as attack sites? When I try it on a fresh profile on XP, these are not blocked, but when I try them on an older profile on Linux, for example, they are blocked.
Entirely possible, actually. I'm noticing now that I'm *not* getting a warning on the former or the latter.
I confirm that the first one gets blocked immediately, but the latter appears to connect, on a fairly recent 3.1 build.  Dave - we do check redirects, right?  Is tinyurl maybe doing something surprising with the way they redirect?
We do check redirects, looking in to it now.
Stupid question: TinyURL uses 301 redirects, it's not possible that we only catch 302s, is it?
(In reply to comment #12)
> Stupid question: TinyURL uses 301 redirects, it's not possible that we only
> catch 302s, is it?
No; we'd only fail to catch it if necko didn't dispatch the redirect callback.
OK, this isn't actually redirect related. There's a dumb bug in the patch for bug 453723 that causes some checks to fail the second time. If you'd loaded the tinyurl first, going straight to the URL would have failed.

This should be trunk/3.1 only, bug 453723 wasn't landed on the 3.0 branch.

Patch coming once I write some unit tests to catch this.
Attached patch fix
To expand a bit on the last comment:

To avoid re-hashing keys we know aren't in the database, we maintain a cache of recently-checked clean fragments.  We mark a fragment clean after CheckKey() doesn't find a match.

This was breaking because we actually call CheckKey() twice per URL: once for the full hostname's host key and once for the domain's host key. So if the blacklisted entry lives under one of those keys, the fragment was caught when checking that key's entries, but it was also marked as clean while checking against the other key's entries. So the next time through, the cached entry made it look clean and the check was skipped.

The attached patch changes CheckKey() to lump all of the relevant host keys' entries together into one check, so a fragment is only marked clean when no key matches.
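The cache poisoning described above can be sketched in a few lines. This is a hypothetical illustration, not the actual nsUrlClassifier code: `ENTRIES`, the host names, and the function names are all made up, but the control flow mirrors the described bug (cache "clean" per host key) and fix (one combined check, then cache).

```python
# Illustrative blacklist keyed by host key (made-up data).
ENTRIES = {"domain.example": {"domain.example/evil"}}

def check_buggy(fragment, host_keys, clean_cache):
    """Buggy shape: marks the fragment clean after ANY single key misses,
    even though another key's check matched."""
    if fragment in clean_cache:
        return False                    # cache says clean: lookup skipped
    hit = False
    for key in host_keys:
        if fragment in ENTRIES.get(key, set()):
            hit = True
        else:
            clean_cache.add(fragment)   # BUG: clean-caching per key poisons the cache
    return hit

def check_fixed(fragment, host_keys, clean_cache):
    """Fixed shape: lump all host keys' entries into one check, then cache."""
    if fragment in clean_cache:
        return False
    combined = set().union(*(ENTRIES.get(k, set()) for k in host_keys))
    if fragment in combined:
        return True
    clean_cache.add(fragment)           # only cached clean when no key matched
    return False
```

With `host_keys = ["sub.domain.example", "domain.example"]`, the buggy version catches the fragment on the first call but returns clean on the second (the poisoned cache), which matches the symptom in the report: the URL is blocked once and then slips through.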
Attachment #359132 - Flags: review?(tony)
Comment on attachment 359132 [details] [diff] [review]

The mCleanHostKeysLock doesn't prevent a double insert into mCleanHostKeys, but maybe that's ok since it's a hash (and it was that way in the old code).

Looks good.
Attachment #359132 - Flags: review?(tony) → review+
Priority: -- → P2
Assignee: nobody → dcamp
Assignee: dcamp → nobody
(unintentional reassign; sorry)
Assignee: nobody → dcamp
Reviewed patch, just waiting to land?
Whiteboard: [needs landing]
Closed: 11 years ago
Keywords: fixed1.9.1
Resolution: --- → FIXED
Whiteboard: [needs landing]
Whiteboard: [sg:low]
Not needed on the 1.9.0 branch per comment 14
Blocks: 453723
Flags: wanted1.9.0.x-
Keywords: regression
verified FIXED across the following links

on builds:

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b2) Gecko/20081201 Firefox/3.1b2 ID:20081201061100


Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b5pre) Gecko/20090517 Shiretoko/3.5b5pre ID:20090517031745
Product: Firefox → Toolkit