Open Bug 1189271 Opened 9 years ago Updated 2 years ago

Consider mitigations against cookie bombing

Categories

(Core :: Networking: Cookies, defect, P3)

People

(Reporter: gerv, Unassigned)

Details

(Keywords: sec-want, Whiteboard: [necko-backlog])

(I couldn't find a bug on this.)

Cookie bombing is overloading a browser with cookies for a domain, so that the webserver refuses to serve any more content to the browser:
http://homakov.blogspot.co.uk/2014/01/cookie-bomb-or-lets-break-internet.html
It can be executed anywhere the attacker can run JS in the context of the site, and there are some sites which need to allow that in order to function.
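
For illustration, here is a minimal sketch (TypeScript, browser context) of the technique described in that post; the cookie names, sizes, and count are placeholders, not anything specific to the sites affected.

```ts
// Sketch of the attack described above: script running in the site's
// context fills the cookie jar with large cookies so that every
// subsequent request carries an oversized Cookie header.
function cookieBomb(sizePerCookie = 4000, count = 20): void {
  const filler = "x".repeat(sizePerCookie);
  const expires = new Date(Date.now() + 24 * 60 * 60 * 1000).toUTCString();
  for (let i = 0; i < count; i++) {
    // ~20 cookies of ~4 KB each puts the Cookie header well above the
    // 8-12 KB request-header limits many servers enforce.
    document.cookie = `bomb${i}=${filler}; path=/; expires=${expires}`;
  }
}
```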

This DoS is hard for users to recover from: they need to clear their cookies, and even if they figure that out, there will be data loss unless they work out how to clear them just for the site in question, which is even harder.

There is also a related security bug in another product - bug 1179241.

I'm not sure what the best solution is. I would suggest that we limit the cookies we send to a value just under the standard limit for the webserver, but I suspect that won't fly because a) webservers can be reconfigured, and b) browsers don't like arbitrary and small cookie limits. 
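
As a sketch only, capping what we send might look something like the following; the 8K threshold, the Cookie shape, and buildCookieHeader() are assumptions for illustration, not anything Necko currently does.

```ts
// Sketch: cap the serialized Cookie header at a threshold just under
// common server request-header limits, dropping the remainder rather
// than breaking the request. Values and names are illustrative.
interface Cookie {
  name: string;
  value: string;
}

const MAX_COOKIE_HEADER_BYTES = 8 * 1024; // hypothetical cap

function buildCookieHeader(cookies: Cookie[]): string {
  const parts: string[] = [];
  let size = 0;
  for (const c of cookies) {
    const pair = `${c.name}=${c.value}`;
    // account for the "; " separator between pairs (treating characters
    // as bytes for simplicity)
    const added = pair.length + (parts.length > 0 ? 2 : 0);
    if (size + added > MAX_COOKIE_HEADER_BYTES) {
      break;
    }
    parts.push(pair);
    size += added;
  }
  return parts.join("; ");
}
```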

There is supposed to be a specific HTTP response for this - 413 - but in my limited testing Apache just sends back a generic 400 response, so we can't detect a 413, measure the cookie size and throw the cookies away if they're too big.

Other ideas?

I'm marking this bug confidential for now, even though the above blog post is public, because I am concerned the discussion will involve frank assessments of weaknesses in our platform. If Dan or other security triage person disagrees, feel free to unhide.

Gerv
This isn't a weakness in _our_ platform; this is something that will have to be standardized, or different browsers will break different sites in different ways. The earliest cookie specs suggested a MINIMUM storage of 20 cookies per host at a max of 4K _each_. No server at the time would handle Cookie: headers anywhere near that theoretical maximum length of 80K; most started returning errors on headers somewhere in the 8-12K range. When I was testing (years ago) I usually saw 500 errors, never a 413.

We should post to the http-state IETF list and see if we need to have the specs changed, and how browsers should consistently trim (by oldest date? by expiration time?) to get down to whatever we decide the 'on the wire' limit is.
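
To make the trimming question concrete, here is one possible policy as a sketch; the sort key (creation time here) and the 8K 'on the wire' limit are placeholders, and choosing them is exactly what would need standardizing.

```ts
// Sketch of one possible trimming policy: evict cookies until the
// serialized header fits an agreed "on the wire" limit. Whether to
// trim by creation time, last-access time, or expiration is the open
// question; creation time is used here as a placeholder.
interface StoredCookie {
  name: string;
  value: string;
  creationTime: number; // ms since epoch
}

function trimToWireLimit(
  cookies: StoredCookie[],
  limitBytes = 8 * 1024 // placeholder limit
): StoredCookie[] {
  // newest first, so the oldest cookies sit at the end of the array
  const kept = [...cookies].sort((a, b) => b.creationTime - a.creationTime);
  const headerSize = (list: StoredCookie[]) =>
    list.map((c) => `${c.name}=${c.value}`).join("; ").length;
  while (kept.length > 0 && headerSize(kept) > limitBytes) {
    kept.pop(); // drop the oldest cookie
  }
  return kept;
}
```
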
Group: core-security
Keywords: sec-want
dveditz: do we have anyone already involved in http-state?

Gerv
Flags: needinfo?(dveditz)
I was actively engaged in the http-state group and participated in the making of RFC 6265. That said, there's been no active work there to actually _change_ anything with regard to cookies. All the group set out to do was to clearly document how cookies were used on the web, and the limits and numbers included in that RFC are basically the lowest common denominators that we found at the time.

The WG is pretty much dead at this point.
So we need to convert this to a bug report in Apache, to get it to handle a cookie bomb better, either by dropping all cookies and then processing the request, or with a 413 response?

Gerv
We could fork a bug in Apache and other servers about things they could do if they get too many cookies. Rather than simply return a 413, for example, they could blank out the cookies. That requires processing the header to get the names out, though. By the time servers actually get around to changing anything, maybe they could take advantage of http://www.w3.org/TR/clear-site-data/ (brand new proposal, no review, subject to change, do not use yet!).
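
For illustration only, a sketch of what a server-side mitigation could eventually look like if Clear-Site-Data became usable; the Express-style middleware and the 8K threshold are assumptions, not a proposal for Apache itself.

```ts
// Sketch of a server-side mitigation, assuming Clear-Site-Data is
// eventually implementable (per the note above, it is an early
// proposal). Express-style types are used for illustration only.
import express from "express";

const app = express();
const MAX_COOKIE_HEADER_BYTES = 8 * 1024; // illustrative threshold

app.use((req, res, next) => {
  const cookieHeader = req.headers.cookie ?? "";
  if (cookieHeader.length > MAX_COOKIE_HEADER_BYTES) {
    // Ask the browser to discard this origin's cookies, then signal
    // that the request headers were too large.
    res.setHeader("Clear-Site-Data", '"cookies"');
    res.status(413).send("Cookie header too large");
    return;
  }
  next();
});
```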

But that will take a long time to get a patch into server code and get millions of sites to upgrade to those new versions. If there's anything useful Firefox can do then we should keep this as a Platform bug. One possibility would be that if we are sending huge headers (generally? or just cookies?) we could put up an infobar warning the user that the site may not work correctly, with a "Forget this site" or "Clear this site's cookies" button. Sending headers > 8K may break old sites so that might be a reasonable cut-off. Or if we think there are few enough of them we could raise the limit a little to 10 or 12K.
Flags: needinfo?(dveditz)
I think we should only take action if the site returns a 413 or a 5xx. That way, if a site configures a higher cookie limit for some purpose, we aren't warning the user all the time that they use the site.

And if we do that, could we not just automatically clear cookies when we get such a response from a site and the cookies are over a certain size? Otherwise, I think we run the risk of putting up the equivalent of a "Yes, Fix the Site Already" or a "Whatever" button, and those are bad UX.
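
A sketch of that heuristic, with the threshold and the clearCookiesForSite() helper as hypothetical stand-ins for whatever the cookie service would actually expose; in Gecko this logic would live in Necko, not in page script.

```ts
// Sketch of the proposed heuristic: only react when the server signals
// failure (413 or 5xx) AND the cookies we sent were unusually large.
// clearCookiesForSite() is a hypothetical helper, not a real API.
declare function clearCookiesForSite(host: string): void;

const SUSPICIOUS_COOKIE_BYTES = 8 * 1024; // assumed threshold

function maybeRecoverFromCookieBomb(
  host: string,
  status: number,
  sentCookieHeaderBytes: number
): boolean {
  const serverRejected = status === 413 || (status >= 500 && status < 600);
  if (serverRejected && sentCookieHeaderBytes > SUSPICIOUS_COOKIE_BYTES) {
    clearCookiesForSite(host); // automatic recovery, no prompt
    return true; // caller could retry the request once
  }
  return false;
}
```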

In terms of which headers, we could either just do cookies, or we could enumerate all headers where the user agent stores values for the server (Cache-Control? ETag?). Or perhaps only the ones where there is not a (small-ish) limit on size.

Gerv
Whiteboard: [necko-backlog]
Bulk change to priority: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: -- → P1
Priority: P1 → P3
Severity: normal → S3