Implement chrome-only cookies

Status: RESOLVED INCOMPLETE
Opened 9 years ago; last updated 3 years ago

People

(Reporter: Natch, Unassigned)

Tracking

Keywords: privacy, sec-low
Version: Trunk
Bug Flags: blocking1.9.2 -
Firefox Tracking Flags: blocking2.0 -
Whiteboard: wanted1.9.3, [sg:low privacy]

(Reporter)

Description

9 years ago
This would help for geolocation (bug 491759) and safebrowsing (bug 368255) and perhaps others as well.
Flags: wanted1.9.2?

Comment 1

9 years ago
This wouldn't be hard to do, but I have lots of questions. How would we communicate this property to users effectively? Not display chrome cookies in the cookieviewer? (Evil.) Display them but grayed out, or some variant thereof?

And if we're displaying them, do we provide the user with a way to create exceptions for them, the way you can do per site now? I assume we'd want to make existing site exceptions apply to chrome cookies as well. So we're not implying any kind of privilege here, just... separate.

I'm assuming there are zero instances where we'd want a chrome cookie to cross over into the real world. (Are there legitimate use cases, for example in geolocation, where a website would embed, say in an iframe, some google-hosted bits and expect the cookie to be sent? Would not sending that cookie affect the usefulness of the service?)

Is chrome a granular enough property here? It almost seems like we want service-based sandboxing. Geolocation and safebrowsing both go through Google, for instance, but why should their cookies be related? Attaching a sandbox uuid to the cookie, and only serving it up when that magic key is present, could achieve this. (I'd have to think about how we'd get a key through the channel machinery.)
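
The sandbox-key idea can be sketched in a few lines of Python (purely illustrative; the class and method names are invented, and a real implementation would live in Gecko's cookie service, written in C++):

```python
import uuid

# Sketch of the sandbox-key idea from this comment: a cookie lives inside a
# sandbox identified by a random key, and is only served back to a caller
# holding that key. All class/method names here are invented for illustration.

class SandboxedCookies:
    def __init__(self):
        self._jars = {}  # sandbox key -> {cookie name: value}

    def new_sandbox(self):
        key = str(uuid.uuid4())
        self._jars[key] = {}
        return key

    def set(self, key, name, value):
        self._jars[key][name] = value

    def get(self, key, name):
        # Without the matching key there is simply no cookie to serve.
        return self._jars.get(key, {}).get(name)

jar = SandboxedCookies()
geo_key = jar.new_sandbox()       # e.g. handed to the geolocation service
jar.set(geo_key, "qos", "token")

assert jar.get(geo_key, "qos") == "token"
assert jar.get("some-other-key", "qos") is None
```

The open question the comment raises still stands: how such a key would be threaded through the channel machinery.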

There are other things we do as chrome that we don't want cookies for. Favicon fetching is one example: pretty much every website you visit would have a chrome cookie resulting from that. You can presently use third-party checks to prevent those cookies, which is a lucky side effect. If chrome is special-cased, then that goes away. Is that what we want?
(Reporter)

Comment 2

9 years ago
The idea behind this bug was: 1) for geolocation, to replace the hack of making pseudo-cookies with the pref service and just use normal cookies instead. The problem with that is that there were questions as to what host name could be used that would *never* collide with a real host name, and even if that were resolved, it would be an abuse of cookies. 2) For safebrowsing, a cookie is needed for QoS, and one is in fact being supplied now; the problem is that they're using the regular google.com cookie, while what they really need is a cookie for this service alone.

That said, I'm also not sure if "chrome cookies" is the right terminology here. But in answer to the question of how to treat these cookies: I don't think they need to be (or even should be) treated like normal cookies, that is, with the ability to make exceptions for them or even to see them in the cookieViewer. Both services that need these cookies (and I'm assuming future services will be required to behave similarly) can be turned off via a pref. If you don't want the service, you have to shut it off; otherwise these services need the cookies, just like any other part of the browser. We don't allow you to omit whatever bits you want without running into issues.

That said, I think when you remove *all* cookies, and when you're in private browsing mode, these cookies should behave like any other cookies. So there are no UI-specific needs here, but they get cleared/hidden when the user explicitly chooses to clear *all* cookies or wants no traces of their web usage.

My .02(TM)

Comment 3

9 years ago
Hmm, couldn't we do something like the password manager does for storing "chrome passwords"? IINM, in passwordmgr you can store login info for a chrome URI (such as chrome://extension/content/file.xul). Logins stored that way are not manipulated via the UI, but the standard passwordmgr APIs can be used to enumerate and manipulate them.

The way I'm envisioning this, nsICookieManager2::Add can accept "<namespace>:<blah>" as aDomain.  For example, the safebrowsing service can store a cookie using "safebrowsing:google.com" as the host.  Such a cookie doesn't get mixed with usual HTTP(S) cookies for google.com.  In the cookie manager UI, we just make sure not to display cookies with host names containing colons.
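
As a rough Python sketch of this proposal (hypothetical names throughout; the real nsICookieManager2 is C++/XPCOM), keying the jar on the full "namespace:host" string is enough to keep the two worlds apart:

```python
# Sketch of the namespacing idea from this comment: store service cookies
# under a "<namespace>:<host>" key so they never mix with ordinary HTTP(S)
# cookies for the same host. All names here are illustrative, not Mozilla APIs.

class CookieStore:
    def __init__(self):
        self._jar = {}  # host key -> {cookie name: value}

    def add(self, host, name, value):
        self._jar.setdefault(host, {})[name] = value

    def cookies_for(self, host):
        # Plain dictionary lookup: "safebrowsing:google.com" and "google.com"
        # are distinct keys, so service cookies never leak to websites.
        return dict(self._jar.get(host, {}))

    def displayable_hosts(self):
        # The cookie-manager UI would simply skip hosts containing a colon.
        return [h for h in self._jar if ":" not in h]


store = CookieStore()
store.add("google.com", "PREF", "abc")             # ordinary web cookie
store.add("safebrowsing:google.com", "PREF", "x")  # service-only cookie

assert store.cookies_for("google.com") == {"PREF": "abc"}
assert store.cookies_for("safebrowsing:google.com") == {"PREF": "x"}
assert store.displayable_hosts() == ["google.com"]
```

Note how the same cookie name ("PREF") can exist in both jars without conflict, which is exactly the separation the proposal is after.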

Let's try to consider your concerns in comment 1 with this proposal:

(In reply to comment #1)
> How would we
> communicate this property to users effectively? Not display chrome cookies in
> the cookieviewer? (Evil.) Display them but grayed out, or some variant of?

I agree with Nochum that we don't need to display them to the user at all.  Why would you consider it evil?  The cookie will still be available to extensions, if one chooses to somehow present them to users.

> And if we're displaying them, do we provide the user with a way to create
> exceptions for them, the way you can do per site now? I assume we'd want to
> make existing site exceptions apply to chrome cookies as well. So we're not
> implying any kind of privilege here, just... separate.

We could use the part of the host after the colon for exception matching. I agree that it would be nice to allow users to set exceptions for these cookies. For example, if someone has blocked cookies on google.com, chances are they really want to block all Google cookies, even safebrowsing ones.
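
The exception-matching rule could be as simple as splitting on the first colon (a hypothetical sketch, not Gecko code):

```python
def base_host(host):
    # For "safebrowsing:google.com", match exceptions against "google.com";
    # plain hosts are returned unchanged. Illustrative helper, not a real API.
    return host.split(":", 1)[1] if ":" in host else host

# Hypothetical per-site exception list set by the user.
blocked = {"google.com"}

def is_blocked(host):
    return base_host(host) in blocked

assert is_blocked("google.com")
assert is_blocked("safebrowsing:google.com")  # site exception covers the service cookie
assert not is_blocked("mozilla.org")
```

This makes an existing google.com exception automatically apply to every service namespace on that host, which matches the behavior argued for here.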

> I'm assuming there are zero instances where we'd want a chrome cookie to cross
> over into the real world. (Are there legitimate use cases, for example in
> geolocation, where a website would embed, say in an iframe, some google-hosted
> bits and expect the cookie to be sent? Would not sending that cookie affect the
> usefulness of the service?)

Chrome cookies are not meant for use by websites, so no, there shouldn't be any such crossover (and with this proposal, there won't be any).

> Is chrome a granular enough property here? It almost seems like we want
> service-based sandboxing. Geolocation and safebrowsing both go through Google,
> for instance, but why should their cookies be related? Attaching a sandbox uuid
> to the cookie, and only serving it up when that magic key is present, could
> achieve this.

Hmm, yes I think by "chrome cookies" we mean "service-specific cookies".  We can have both safebrowsing:google.com and geo:google.com cookies living side by side happily.

> (I'd have to think about how we'd get a key through the channel
> machinery.)

I don't think we'd want to integrate chrome cookies with HTTP channels, because they might actually have other uses (for example, a service might need to send a cookie-like value in an XML message to a server).

> There are other things we do as chrome that we don't want cookies for. Favicon
> fetching is one example. Pretty much every website you visit will have a chrome
> cookie resulting from that. You can presently use third party checks to prevent
> those cookies, which is a lucky side effect. If chrome is specialcased then
> that goes away. Is that what we want?

Like I said above, I don't think by "chrome cookies" we mean cookies which are set by http(s) requests generated from chrome, so this should not be an issue.

And the nice thing is that, if I'm reading the code right, chrome cookies in this sense don't actually need any specific implementation: the current cookie manager should already support the concept, because I don't see any place where we actually try to validate cookie hosts.

Thoughts, comments?

Comment 4

Dan, primarily we need this for things like the SafeBrowsing service, where the provider wants a cookie for DoS protection and such, but we don't want that to mean the rest of their cookies get sent along too. It'd be nice to spawn an HTTP request that said "only send the cookies required for this service."
blocking2.0: --- → ?
(Reporter)

Comment 5

9 years ago
This bug blocks a blocker (bug 491759), so it should be a blocker as well...
Flags: wanted1.9.2? → blocking1.9.2?

Comment 6

9 years ago
Dan, do we have any plans on what this feature should support and how it should be implemented?

Comment 7

From chatting with dwitte about this, it didn't sound like it really needs to block 1.9.2, so minusing. Doug, if this is absolutely required for other 1.9.2 blockers, please re-nominate.
Flags: blocking1.9.2? → blocking1.9.2-

Updated

9 years ago
Blocks: 524790

Updated

9 years ago
No longer blocks: 491759

Comment 8

Not blocking; this is something we should eventually do, but we won't hold a release for it.
blocking2.0: ? → -
Whiteboard: wanted1.9.3, [sg:low privacy]
Keywords: privacy

Comment 9

8 years ago
Okay, coming back to this, it seems doable easily enough. We can either put a property on nsIHttpChannelInternal or abuse the hashpropertybag to flag the channel "for service use". Then cookieservice can create a sandbox for it.
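
A minimal sketch of that flow, assuming a per-channel service flag (all names invented for illustration; the real mechanism would be a property on nsIHttpChannelInternal or the channel's property bag):

```python
# Sketch of comment 9's idea: tag a request as "for service use" with a
# service name, and have the cookie service pick a sandboxed jar based on
# that tag. Purely illustrative; none of these names are Gecko APIs.

class Channel:
    def __init__(self, host, service=None):
        self.host = host
        self.service = service  # e.g. "safebrowsing"; None for normal loads

def cookie_jar_key(channel):
    # Service-flagged channels get a sandboxed jar key; everything else
    # uses the plain host, so ordinary browsing is unaffected.
    if channel.service:
        return f"{channel.service}:{channel.host}"
    return channel.host

assert cookie_jar_key(Channel("google.com")) == "google.com"
assert cookie_jar_key(
    Channel("google.com", service="safebrowsing")
) == "safebrowsing:google.com"
```

The cookie service would then read and write cookies only within the jar named by that key, giving each service its own sandbox for free.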

I think these cookies should be viewable just like any other cookie, e.g. in cookiemanager -- a common way for people to clear Google cookies is to search for "google", highlight, and delete. These cookies should show up in that search. (We don't have to display the fact that they're a service cookie, or worry too much about them having the same name as another cookie.)

Do these need to be persistent? If so, that involves a database schema change. If we can live with session persistence here, that makes things easier. I'm not sure what the SafeBrowsing or Geolocation requirements are, or what incremental benefit they gain by having persistence. I imagine their primary interest is in preventing DoS, in which case session-only should be acceptable. Ian, Jonathan, what do you think?

Comment 10

This is a really good privacy win, and I'd approve the patch, though I agree that it shouldn't hold a release.

Comment 11

8 years ago
Cool -- if I get time after fixing all my other betaN/beta2 blockers and reviews for such, I'll see about spending some time on this.
Status: NEW → RESOLVED
Last Resolved: 3 years ago
Resolution: --- → INCOMPLETE