Closed Bug 341848 (grind) Opened 19 years ago Closed 13 years ago

limit length of cookie response headers

Categories

(Core :: Networking: Cookies, defect)

x86
Windows XP
defect
Not set
normal

Tracking


RESOLVED WONTFIX

People

(Reporter: grindordie, Unassigned)

Details

(Whiteboard: [sg:dos])

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 Firefox/1.5.0.4
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 Firefox/1.5.0.4

Hello, I was coding my website when I accidentally made a loop of cookies with garbage as the value. The script made about 10,000 cookies, and the website no longer shows up; this page shows up instead:

----------------------------------
Bad Request

Your browser sent a request that this server could not understand.
Size of a request header field exceeds server limit.

Cookie: 0=%40%23%24%24%40%40%40%23%24%24%40%40%40%23%24%.............
------------------------------------------

Then everything starts lagging (I believe the cookies totalled a few megabytes). To fix the problem I had to clear my cookies, and then it worked. I showed it to a friend and the same thing happened for him. He also said that it does more damage to the server than to the client.

I've made a PHP script which shows how it works:

----------
<?php
set_time_limit(0);
$value = "@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@@#$$@@";
for ($x = 0; $x <= 10000; $x++) {
    setcookie($x, $value);
}
?>
----------

This script makes about 10,000 cookies with the value "@#$$@@@#$$@...". When you then go to another page on the same server, you get the error shown above.

Reproducible: Always

Steps to Reproduce:
1. Make hundreds of cookies.
2. Go to any page on the same site.
3. You see the error.

Actual Results:
Bad Request
Your browser sent a request that this server could not understand.
Size of a request header field exceeds server limit.
Cookie: 0=%40%23%24%24%40%40%40.... [Note: this keeps on going]

Expected Results:
Cut the string? Ignore it? How should I know.
Alias: grind
You did not actually set 10,000 cookies: Mozilla stores at most 50 cookies per domain, and when that limit is reached it deletes older cookies to make room for new ones. The spec (RFC 2109) specifies 20 as a minimum; 20 is what Opera and IE currently support.

The servers aren't bothered by the number of cookies; it's the combined length they reject. The spec says the user agent should reject individual cookies over 4K, which Mozilla does, but 50 cookies times 4K is far more than servers will accept in a Cookie: header.

Opera appears to additionally cap the entire Cookie: header (all cookies combined) at 4K. In Opera I can set up to 20 long cookies, but it will only send as many as fit in that 4K and drop the rest, even though they show up in its cookie-manager dialog. IE does what we do: it sends all the cookies you've got for a host.

A bit of quick testing shows both www.mozilla.com and www.yahoo.com start rejecting requests when document.cookie.length is bigger than 8128, and www.google.com rejects requests with cookies longer than 12306. I don't know why those particular numbers, but they're probably the defaults of the web servers being used rather than conscious decisions.

Should we institute a cap at 8000 or 8100? Setting cookies longer than those values is an effective denial-of-service attack against a domain. I wouldn't expect a site to DoS itself, but if a malicious joker can find a way to inject some script, they could cause trouble. A page on a sub-domain could knock the parent off the air, which is perhaps a more realistic attack: for example, set up a free blog at meanie.blogservice.com and set enough .blogservice.com domain cookies to prevent the user from seeing any other blog on that service.

Even Opera's 4K overall limit makes sense; there's no good reason to store that much data in a header that gets sent with each and every request. A server that legitimately does that is creating a mini-DoS against itself.
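The arithmetic behind those limits can be sketched quickly. The per-cookie and per-domain figures are the ones quoted in this discussion; the 8190-byte figure is Apache httpd's documented default LimitRequestFieldSize, which is my assumption for why rejection was observed near 8128, not something verified in this bug:

```python
# Worst-case Cookie header size under the limits discussed above.
# PER_COOKIE_LIMIT and COOKIES_PER_DOMAIN come from this bug's comments;
# APACHE_FIELD_LIMIT is Apache's default LimitRequestFieldSize, assumed
# (not confirmed here) to explain the observed ~8128-byte cutoff.

PER_COOKIE_LIMIT = 4096      # a UA must accept cookies up to 4K (RFC 2109)
COOKIES_PER_DOMAIN = 50      # Mozilla's per-domain cookie cap
APACHE_FIELD_LIMIT = 8190    # Apache default LimitRequestFieldSize, bytes

worst_case = PER_COOKIE_LIMIT * COOKIES_PER_DOMAIN
print(worst_case)                          # 204800 bytes
print(worst_case // APACHE_FIELD_LIMIT)    # 25: roughly 25x the Apache default
```

So a browser honoring both its own limits can legally accumulate a Cookie: header about 25 times larger than a default-configured server will accept.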
It's hard to imagine we'd break anything with a 4K limit, and we definitely wouldn't break things if we picked a value above which most servers reject the request anyway. It's probably worth considering. Yes, users can get themselves out of trouble by clearing cookies, but how easily will they figure out that that's the problem? I guess in the case of an actual attack the word will get out.

Does anyone know where the 8128 limit comes from? Is that a default HTTP header size limit in Apache or something? Would that be a good value to base this on, or do other popular web servers use a smaller or larger limit?

Is this enough of a known problem that we should clear the confidential flag, or is there some value in keeping this hidden for now?
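For concreteness, a client-side cap like the one proposed above could work roughly as follows. This is an illustrative Python sketch, not Gecko code; the 8000-byte cap and the keep-newest-cookies policy are assumptions drawn from this discussion, not a decision made in this bug:

```python
# Hypothetical sketch: before sending, keep only as many cookies as fit
# under a byte cap and silently drop the rest. Preferring newer cookies
# mirrors Mozilla's eviction policy of deleting older ones first.

HEADER_CAP = 8000  # proposed cap, just under the observed ~8128 server limit

def build_cookie_header(cookies, cap=HEADER_CAP):
    """cookies: list of (name, value) pairs in oldest-first order."""
    kept = []
    size = 0
    for name, value in reversed(cookies):      # newest first
        piece = f"{name}={value}"
        extra = len(piece) + (2 if kept else 0)  # account for "; " separator
        if size + extra > cap:
            break                               # drop this and all older cookies
        kept.append(piece)
        size += extra
    return "; ".join(reversed(kept))            # restore original order

header = build_cookie_header([("a", "x" * 7000), ("b", "y" * 7000), ("c", "1")])
print(len(header))  # 7007 -- cookie "a" was dropped to stay under the cap
```

The open question in the comments above is exactly which value to use, since any fixed cap risks breaking sites that legitimately store more.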
Group: security
Status: UNCONFIRMED → NEW
Component: General → Networking: Cookies
Ever confirmed: true
Product: Firefox → Core
Whiteboard: [sg:low dos]
Version: unspecified → Trunk
QA Contact: general → networking.cookies
Group: security
I say we keep it hidden. Otherwise, the next Blogger/Blogspot self-replicating virus will give itself the neat side effect of locking all Firefox users out of the site.

We should try to find out what popular web servers do, and either adopt Opera's 4K limit or, perhaps better, the 8K limit you suggest.

Gerv
What exactly should be limited: the size of the Cookie header, the size of every header, or the total size of the request? There might be similar problems in setting, for example, the Referer header to something very large, or in setting several headers each to a reasonable size that together add up to a large request. What exactly causes the problems at the web server? The size of the entire request?
I think we should talk to the web server vendors and the web browser vendors. Either browsers, in response to this error, could reduce the number of cookies they send to servers, or servers, in their responses, could send headers forcing cookies to expire. Perhaps both. In other words, I'm not fond of the idea of browsers having more caps: if a site wants to try to use more cookie data, I don't think we should stop it. But when the server complains, we should tune down what we send. I'm pretty sure that in general we can recognize a Bad Request response and realize that what we sent was too big.
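The server-side half of that idea, answering an oversized request with headers that expire the offending cookies, could look something like the following. This is a hypothetical sketch, not an existing server feature; the function name, the eviction order, and the byte limit are all illustrative assumptions:

```python
# Hypothetical server-side recovery: when a request's Cookie header is
# too large, emit Set-Cookie response headers that delete cookies
# (Max-Age=0 tells the browser to discard a cookie immediately) until
# what the client would resend fits under the server's field limit.

def expire_cookies(cookie_header, max_bytes=8190):
    """Return Set-Cookie headers deleting cookies, largest first,
    until the remaining Cookie header would fit under max_bytes."""
    pairs = [p.strip() for p in cookie_header.split(";")]
    expired = []
    while pairs and sum(len(p) + 2 for p in pairs) > max_bytes:
        pairs.sort(key=len)
        biggest = pairs.pop()                  # drop the largest cookie first
        name = biggest.split("=", 1)[0]
        expired.append(f"Set-Cookie: {name}=; Max-Age=0")
    return expired

print(expire_cookies("a=" + "x" * 9000 + "; b=1"))  # expires only cookie "a"
```

A real deployment would also need the Path and Domain attributes to match the original cookies, which this sketch ignores.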
Updating severity and title to better reflect the bug.
Severity: critical → normal
Summary: Multiple Cookies causing website not to show up, possible vulnerability → limit length of cookie response headers
Whiteboard: [sg:low dos] → [sg:dos]
Patrick - any interest in looking into this?
(In reply to Daniel Veditz [:dveditz] from comment #1)
> Anyone know where the 8128 limit comes from? Is that a default HTTP header
> size limit in Apache or something?

At least for some releases, yes, that's the Apache request header size limit; 12K and 16K are also common values. By specification there is no maximum - these are just site-local DoS protections.

Between IE7 and IE8, IE raised its maximum cookie bytes per domain from 4KB to 10KB. Chrome, like us, has no limit. So an 8KB limit would probably break some part of the web today. (http://myownplayground.atspace.com/cookietest.html)

Given that this is basically under the server's control through its design and has "only" DoS semantics, I would mark it WONTFIX.
Opening this up: it seems like a pretty obvious thing to investigate if you're trying to be malicious, and I'd be surprised if it hasn't already been discussed publicly. Public discussion seems more productive as well, and this isn't particularly threatening to our users. Marking WONTFIX based on comment 7.
Group: core-security
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → WONTFIX