Closed Bug 421494 Opened 17 years ago Closed 17 years ago

reimplement third party cookie blocking

Categories

(Core :: Networking: Cookies, defect, P1)

Tracking

RESOLVED FIXED
mozilla1.9beta5

People

(Reporter: dwitte, Assigned: dwitte)

Details

(Keywords: privacy)

Attachments

(2 files, 3 obsolete files)

our current method of third-party blocking kinda works, but not always - there are many little loopholes (e.g. iframes; see bug 158463) that allow sites to sneak cookies in. this is a privacy concern for people who only want cookies set by the sites they actually visit, and not the countless advertising and tracking entities out there. it's also a feature parity concern, since other browsers now do this by default, and our implementation just isn't up to snuff.

the basic problem is that the cookie API takes two URIs, one for the current request and one for the supposed originating URI, and compares them to determine whether they're in the same domain. however, this relies on the consumer code to provide the correct originating URI, which it sometimes doesn't or can't. a better approach is to have the cookie service figure this out itself, from the channel - we can leverage the loadgroup to find the docshell owning the load, and from there, the root content docshell and finally its URI. philosophically, this says "for a given load, walk up the docshell tree and find out what the user actually typed in the location bar", which is really what this feature should do.
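roughly, the crawl looks something like this - a minimal sketch with illustrative names, not the actual patch (the real code lives in the app-specific nsICookiePermission implementation and handles more edge cases):

#include "nsCOMPtr.h"
#include "nsIChannel.h"
#include "nsILoadGroup.h"
#include "nsIDocShellTreeItem.h"
#include "nsIWebNavigation.h"
#include "nsIInterfaceRequestorUtils.h"

// sketch: given a channel, find the docshell that owns the load, walk up
// to the root content docshell, and return its current URI - i.e. what the
// user actually typed in or clicked on.
static nsresult
GetOriginatingURISketch(nsIChannel *aChannel, nsIURI **aURI)
{
  *aURI = nsnull;

  // the channel's notification callbacks usually hand us the docshell;
  // fall back to the loadgroup's callbacks if not.
  nsCOMPtr<nsIInterfaceRequestor> callbacks;
  aChannel->GetNotificationCallbacks(getter_AddRefs(callbacks));
  nsCOMPtr<nsIDocShellTreeItem> item(do_GetInterface(callbacks));
  if (!item) {
    nsCOMPtr<nsILoadGroup> loadGroup;
    aChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup) {
      loadGroup->GetNotificationCallbacks(getter_AddRefs(callbacks));
      item = do_GetInterface(callbacks);
    }
  }
  if (!item)
    return NS_ERROR_FAILURE; // no docshell: chrome/background load

  // walk up the docshell tree, staying within the same type (content
  // stays content), to reach the toplevel content docshell.
  nsCOMPtr<nsIDocShellTreeItem> root;
  item->GetSameTypeRootTreeItem(getter_AddRefs(root));

  // docshells implement nsIWebNavigation; ask the root for its URI.
  nsCOMPtr<nsIWebNavigation> webnav(do_QueryInterface(root));
  return webnav ? webnav->GetCurrentURI(aURI) : NS_ERROR_FAILURE;
}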
as MO, i think we want this for firefox 3. -> beta 5, P1.
Priority: -- → P1
Target Milestone: --- → mozilla1.9beta5
Nom'ng for blocking as well. Dan, can you estimate the risk of this change in any way, to help drivers make a good decision on blocking? The benefits seem clear. PS - If it works, and lands in time for FF3, beer bistro is on me.
Flags: blocking1.9?
i've been testing this over the last few days, and figured out the risks and things that will break. of course, this stuff only applies to people who twiddle the pref - the impact for the "accept all cookies" case is zero.

1) we'll block plugins that use the plugin-specific cookie API in nsPluginHostImpl.cpp. that API doesn't provide any load context, so there's no way to make it work. i actually doubt this is used much nowadays - but i haven't done extensive testing with different plugins to be sure. flash has its own cookie mechanism and is unlikely to use ours.

2) loads kicked off directly from the browser chrome/backend won't have cookies. this covers favicons, safebrowsing updates, feed updates, ocsp security certificate checks, etc. allowing cookies with favicons in the first place has already caused trouble (bug 263057), and i don't think we really intend to have cookies with any of the others, so no tears shed here.

3) extensions that kick off loads from chrome (similar to above). it's unknown what this will affect, really... i'm not sure which extensions would do this, or whether they rely on cookies. extensions that load stuff in the sidebar (via nsISidebar) should work, though.

4) sites that use third-party cookies (surprise!). of course, some legit sites will be caught in the crossfire, which is what bug 417800 was all about. we can worry about improving whitelisting UI, or having a downgrade-rather-than-block action, to help with these later.

all the big cases should work: http loads, document.cookie, and META HTTP-EQUIV. the advantage of making sure we really do have a top-level URI that the user typed in or clicked on is that this is going to be infallible - in my testing i've seen precisely zero third-party cookies getting through. (in fact, the favicon hole is currently what makes us fail the grc.com third-party test.)
Attached patch patch v1 (obsolete) — Splinter Review
and here it is ;) i plopped all this docshell-crawling stuff into the app-specific implementation of nsICookiePermission, which nicely avoids any docshell goo crawling into necko. lots of comments in the patch, so the rest should be pretty self-explanatory. thanks a bunch to bz for sanity-checking my ideas and giving suggestions. (yeah, i'm going to take a stab at unit tests next - need to figure out how and what to test first though, and don't want to let that hold up this patch, since time is short.)
(In reply to comment #3)
> 2) loads kicked off directly from the browser chrome/backend won't have
> cookies. this covers favicons, safebrowsing updates, feed updates, ocsp
> security certificate checks, etc. allowing cookies with favicons in the first
> place has already caused trouble (bug 263057), and i don't think we really
> intend to have cookies with any of the others, so no tears shed here.

CCing dcamp, since I know he's gone around the loop once already on cookies in safebrowsing requests.
Brief comments based on a skim:

1) The method throws, rather than "returns failure", when it fails to get a URI.

2) This needs tests. Almost every line of nsCookiePermission::GetOriginatingURI should be testable; there should definitely be a test for the document-channel case, and for any fixes made in response to review comments.

3) Using the docshell URI seems wrong: it'll fail for javascript: documents, data: documents, about:blank documents populated via DOM methods, documents created using document.write(), and so forth. The right thing to use is the URI of the principal of the docshell's current document (see the sketch below).

4) How (if at all) does this interact with the DOM0 API for getting/setting cookies? Add tests for that too, please.
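For point 3, the fix is roughly the following; a sketch only, with interface names recalled from the 1.9-era tree and therefore to be treated as assumptions, not the actual patch:

#include "nsIWebNavigation.h"
#include "nsIDOMDocument.h"
#include "nsIDocument.h"
#include "nsIPrincipal.h"

// sketch: given the root docshell tree item, use the URI of the principal
// of its current document rather than the docshell's own URI. the
// principal knows where a document really came from, even for javascript:,
// data:, about:blank and document.write()'d documents.
static nsresult
GetPrincipalURISketch(nsIDocShellTreeItem *aRootItem, nsIURI **aURI)
{
  *aURI = nsnull;

  nsCOMPtr<nsIWebNavigation> webnav(do_QueryInterface(aRootItem));
  if (!webnav)
    return NS_ERROR_FAILURE;

  nsCOMPtr<nsIDOMDocument> domDoc;
  webnav->GetDocument(getter_AddRefs(domDoc));
  nsCOMPtr<nsIDocument> doc(do_QueryInterface(domDoc));
  if (!doc)
    return NS_ERROR_FAILURE;

  return doc->NodePrincipal()->GetURI(aURI);
}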
(In reply to comment #4)
> i plopped all this docshell-crawling stuff into the app-specific
> implementation of nsICookiePermission, which nicely avoids any docshell
> goo crawling into necko. [...]

Does that mean this will alter only Firefox and not SeaMonkey?
(In reply to comment #6)
> Brief comments based on a skim:

thanks - tests coming, and will fix the principal stuff! document.cookie is covered by this patch, and works in my testing so far (we have a channel there; see http://mxr.mozilla.org/mozilla/source/content/html/document/src/nsHTMLDocument.cpp#2017... i'll try to see if there are cases where we don't.)

(In reply to comment #7)
> Does that mean this will alter only Firefox and not SeaMonkey?

nope, it'll cover firefox, seamonkey, and thunderbird.
What about borrowing Safari's strategy of not letting third parties *set* cookies, but allowing them to *read* cookies that have already been laid down. That seemed pretty smart to me, mash-up-enablement-wise.
That enables user tracking, which is the thing we're trying to prevent, no?
Keywords: privacy
yeah, there's been some debate about making this feature asymmetric (see bug 417800 comment 8 and bug 417800 comment 11). allowing reading will amount to allowing tracking; there are plenty of ways to get a google cookie, but that doesn't mean you want it to track you when it's a third party. same for doubleclick - one click on an ad, for instance, and boom, you're forevermore tracked by them. (okay, i exaggerate. until the cookie expires, in 2156...)

having said that, i really would love to get some heads together and figure out some middle ground on this stuff, to make it nicer for the average user who's less concerned than fervent grc.com readers. anything we do is only as good as the implementation, though...
(In reply to comment #9)
> What about borrowing Safari's strategy of not letting third parties *set*
> cookies, but allowing them to *read* cookies that have already been laid down.
> That seemed pretty smart to me, mash-up-enablement-wise.

That might be OK as a default setting, but not as a "block 3rd party" setting, since it's ineffective. Do we want to support 3 states? (complicates UI, needs radio buttons) Or keep two states and make the "non-blocking" default match what Safari and IE do. Having three states in the UI lets us get in a dig at the IE/Safari implementation :-)

o Allow third-party cookies
o Don't accept third-party cookies, but send if already set (like IE/Safari). Useful for web widgets but allows tracking.
o Block all third-party cookies.

Of course, way too wordy. I guess dwitte already suggested this over in bug 417800 comment 13.
Dan, et al... FWIW, the stuff I'll be doing at GRC will be making the distinction between "fresh" and "stale" cookies, i.e. cookies the browser holds and is still returning, but for which it is no longer accepting updates. You can see the beginnings of that here: http://www.grc.com/cookies/forensics.htm

Again, FWIW, requiring the user to end the session, or to manually delete existing and unwanted persistent cookies, in order to curtail the monitoring they would like to prevent seems not what the user would want. If a third-party cookie is needed, briefly switching to cookie-prompting with one-click addition of the third-party domain to a whitelist, then returning to blocking TPCs, would be terrific.

Also, IE has for some time had the concept of "leashing" cookies, which means noticing and tagging whether cookies were accepted in a first- or third-party context, and being smart about, for example, not returning to a third party a cookie that was received in a first-party context. This addresses the problem you mentioned earlier, Dan, of clicking on a DoubleClick ad to acquire a cookie that could then be used for third-party tracking.

What would be spectacular would be if Firefox's cookie management UI *showed* the first- or third-party context in which cookies were received and, for example, allowed TPCs to be removed with a click. (Anyway, just wishful thinking. :)
Blocks: 421823
I think an additional UI element is needed in the Tools|Options|Privacy|Show Cookies dialog. A button named Delete Cookie and Block (see attached jpg image) would offer the user pretty atomic control over 3rd party cookies without the necessity of a blanket setting. I think the underlying behavior is obvious.
Cookie blocking UI Suggestion
Abe, that suggestion would be more on-topic in bug 345989 than here.
Jesse, I'll add it to that discussion as well.
Attached patch le patch, v2 (obsolete) — Splinter Review
this should address comment 6... tests to follow.
Attachment #307954 - Attachment is obsolete: true
Attachment #308588 - Flags: superreview?
Attachment #308588 - Flags: review?(mconnor)
Attachment #308588 - Flags: superreview? → superreview?(bzbarsky)
Attached patch tests (obsolete) — Splinter Review
tests!

- removes now-irrelevant tests from TestCookie.cpp (since we don't block based on the firstURI param anymore)

- adds mochitests to cover the docshell-crawling code. these cover the methods document.cookie, META HTTP-EQUIV, and the http Set-Cookie: header, and test the cases where we have a docshell associated with the channel. most of the tests load a new tab with a given URI, which contains an iframe/image/css resource load from a second URI. the combinations are: same domains, different domains, same base domain (i.e. test1.example.org vs test2.example.org, and several permutations thereof), ip addresses, and localhost. (some of these tests are aimed at tickling nsCookieService::IsForeign and its effective TLD service logic.) i also made a test for the loadflags case, by sniffing the httpchannel's request headers to make sure cookies go out with the very first request.

- adds xpcshell tests to cover the cases with no channel, or a channel with no docshell.

i'm not sure if you want to review these, but here they are. :)
Comment on attachment 308588 [details] [diff] [review]
le patch, v2

>Index: netwerk/cookie/src/nsCookieService.cpp
> nsCookieService::IsForeign(nsIURI *aHostURI,
>                            nsIURI *aFirstURI)

Is aHostURI guaranteed to be an innermost URI? That is, how does this handle, say, jar: URIs? Which URI will end up here - the one from the HTTP channel (innermost) or the one from the webpage (not innermost)? Does it depend on the codepath? aFirstURI is definitely not guaranteed to be innermost as the code is currently written. Should it be? Maybe this should all be a worry for a followup, if things are too tangled and we're not really changing behavior for such cases. The failure mode with the current code is to deny too much.

> if (NS_FAILED(aHostURI->GetAsciiHost(currentHost)) ||
>     NS_FAILED(aFirstURI->GetAsciiHost(firstHost))) {
>+  // assume foreign
>   return PR_TRUE;

In particular, if the URIs are not innermost, those will typically throw.

> nsCookieService::CheckPrefs(nsIURI
>   (NS_SUCCEEDED(aHostURI->SchemeIs("ftp", &ftp)) && ftp) {

I guess this was already here... seems pretty odd to single out ftp like this, though. Might want a follow-up to check what we should really be doing here.

sr=bzbarsky
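(For reference, since the innermost-URI question was later spun off into a followup: nested URIs like jar: can be unwrapped with the NS_GetInnermostURI helper in nsNetUtil.h. A hedged sketch of the normalization, illustrative only:)

#include "nsNetUtil.h" // NS_GetInnermostURI

// jar:http://example.com/archive.jar!/page.html is a nested URI; host
// comparisons only make sense on the innermost (http://) URI. normalize
// before calling GetAsciiHost(), keeping the "assume foreign" failure mode:
nsCOMPtr<nsIURI> inner = NS_GetInnermostURI(aHostURI);
nsCAutoString currentHost;
if (!inner || NS_FAILED(inner->GetAsciiHost(currentHost)))
  return PR_TRUE; // can't tell - assume foreign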
Attachment #308588 - Flags: superreview?(bzbarsky) → superreview+
Tests look OK, though I might prefer that you pass "width=100,height=100" to window.open(), so that if this is ever run in an environment where windows open as real windows, it won't be opening these big windows...
(In reply to comment #12)
> (In reply to comment #9)
> > What about borrowing Safari's strategy of not letting third parties *set*
> > cookies, but allowing them to *read* cookies that have already been laid down.
> > That seemed pretty smart to me, mash-up-enablement-wise.
>
> That might be OK as a default setting but not as a "block 3rd party" setting
> since it's ineffective. Do we want to support 3 states? (complicates UI, needs
> radio buttons) Or keep two states and make the "non-blocking" default match
> what Safari and IE do.

That really depends on how it affects websites, I guess. I think we need to have 4 states available, and the question becomes what we want to expose in the UI. The 4 states should be:

0: send/accept all cookies
1: send all cookies, accept originating-site cookies only
2: send/accept originating-site cookies only
3: send/accept no cookies

My preferred UI would be:

[x] Accept cookies from sites (on=1/off=3)
[ ] Block cookies from third-party sites (on=2)

But that relies on state 1 being a viable default in ways that don't break mashups and sites. I think that's the case, but I'd like to see it proven. If we need to expose all four states, the UI should look like:

[x] Accept cookies from sites (off=3)
    (o) Block new third-party cookies (on=1)
    ( ) Block all third-party cookies (on=2)
    ( ) Allow all cookies (on=0)
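To make the four states concrete, here is a purely illustrative sketch; the enum names and the accept/send split are hypothetical, not the shipping pref encoding or the BEHAVIOR_* constants in nsCookieService:

// hypothetical encoding of the four proposed states
enum CookieBehaviorSketch {
  ACCEPT_ALL = 0,      // send/accept all cookies
  NO_NEW_FOREIGN = 1,  // send all cookies, accept originating-site only
  REJECT_FOREIGN = 2,  // send/accept originating-site cookies only
  REJECT_ALL = 3       // send/accept no cookies
};

// whether to accept an incoming Set-Cookie
PRBool ShouldAccept(CookieBehaviorSketch b, PRBool isForeign)
{
  switch (b) {
    case ACCEPT_ALL:     return PR_TRUE;
    case NO_NEW_FOREIGN: // fall through - both accept originating-site only
    case REJECT_FOREIGN: return !isForeign;
    default:             return PR_FALSE; // REJECT_ALL
  }
}

// whether to send existing cookies with an outgoing request
PRBool ShouldSend(CookieBehaviorSketch b, PRBool isForeign)
{
  switch (b) {
    case ACCEPT_ALL:
    case NO_NEW_FOREIGN: return PR_TRUE;  // send all, even third-party
    case REJECT_FOREIGN: return !isForeign;
    default:             return PR_FALSE; // REJECT_ALL
  }
}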
The UI mentioned above should probably (definitely) be tracked either in bug 419596 or a new bug entirely. Also, I'm not 100% sure of "Block new third-party cookies", and don't want to rathole, but think it's mostly right and expressive without being too nerdy. In most cases people will stick to the defaults.
Comment on attachment 308588 [details] [diff] [review]
le patch, v2

agree with bz that blocking ftp specifically seems wrong; we should explicitly allow http/https/file instead.

>Index: netwerk/cookie/src/nsCookieService.cpp
> } else if (mCookiesPermissions == BEHAVIOR_REJECTFOREIGN) {
>-  // check if cookie is foreign.
>-  // if aFirstURI is null, allow by default
>+  // check if cookie is foreign
>+  if (!mPermissionService)
>+    return STATUS_REJECTED;
>+
>+  nsCOMPtr<nsIURI> firstURI;
>+  rv = mPermissionService->GetOriginatingURI(aChannel, getter_AddRefs(firstURI));
>
>-  // note: this can be circumvented if we have http redirects within html,
>-  // since the documentURI attribute isn't always correctly
>-  // passed to the redirected channels. (or isn't correctly set in the first place)
>-  if (IsForeign(aHostURI, aFirstURI)) {
>+  if (NS_FAILED(rv) || IsForeign(aHostURI, firstURI)) {
>    COOKIE_LOGFAILURE(aCookieHeader ? SET_COOKIE : GET_COOKIE, aHostURI, aCookieHeader, "originating server test failed");
>    return STATUS_REJECTED;
>  }

This means that embedders need an implementation of nsICookiePermission to use BEHAVIOR_REJECTFOREIGN. We should detect that case in Init and throw big scary warnings somewhere, probably NS_WARNING and also COOKIE_LOGFAILURE. Something big and scary should be easy to come up with, but I think we need to be explicit.

r=me with bz's comments addressed
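Something like the following would do - a hedged sketch, not the actual patch, assuming the NS_COOKIEPERMISSION_CONTRACTID macro from nsICookiePermission.idl:

// in nsCookieService initialization: warn loudly if an embedder picked
// BEHAVIOR_REJECTFOREIGN without shipping an nsICookiePermission impl,
// since the setting then rejects everything.
mPermissionService = do_GetService(NS_COOKIEPERMISSION_CONTRACTID);
if (!mPermissionService && mCookiesPermissions == BEHAVIOR_REJECTFOREIGN) {
  NS_WARNING("No nsICookiePermission implementation found!\n"
             "BEHAVIOR_REJECTFOREIGN cannot be enforced without one - "
             "ALL cookies will be rejected.");
}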
Attachment #308588 - Flags: review?(mconnor) → review+
Flags: blocking1.9? → blocking1.9+
Flags: blocking1.9+ → blocking1.9?
Connor meant to blocking+ this ... fwiw, I think that if we find that this ends up causing regressions we should remove that marking, but honouring his decision and restoring the flag.
Flags: blocking1.9? → blocking1.9+
Attached patch v3 (checked in)Splinter Review
tweaked per reviewer comments; i also added a browser-chrome mochitest to cover the favicon case (where the parent docshell is content).
Attachment #308588 - Attachment is obsolete: true
Attachment #308589 - Attachment is obsolete: true
fixed. (the interface changes caused thunderbird bustage - checked in a fix and filed bug 422347 to make sure it's right.)
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
filed followup bug 422353 re innermost URI issue, and bug 422357 re making UI more better.
Depends on: 430538
(In reply to comment #22) I just logged https://bugzilla.mozilla.org/show_bug.cgi?id=443784 to note that third-party cookies are not being *sent* if "accept third-party" is disabled.
This caused at least two undesirable effects for privileged callers. I've filed bug #444267 and bug #444272 about these.
Depends on: 444267
Depends on: 444272
This breaks most background requests in Firefox on pages that require cookies, such as installing experimental add-ons or viewing the page source of sites that require the user to be logged in. It also breaks extensions that need to send cookies, like Google Toolbar. See bug 421494
Oops, I meant 437174
Oops, I meant bug 437174
Depends on: 437174
test_loadflags.html had a test failure on "WINNT 5.2 mozilla-central qm-win2k3-unittest-hw dep unit test":
http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1218999946.1219003990.21553.gz&fulltext=1#err4

Can this test be made more robust?
it probably can, i took a quick look but not sure what's going on yet - in the meantime we can disable it if it's causing intermittent problems.
Depends on: 450450
I'm going to disable this test shortly due to intermittent failures. I filed bug 454857 for getting the test fixed.
Depends on: 441166
Depends on: 469805
Depends on: 436471