Created attachment 271669 [details] [diff] [review]
patch and test

The fix in bug 383181 missed a case: a script from web content is able to set an httponly cookie by replacing an existing non-httponly cookie. So if a server has not already set an httponly cookie, a script can first create a normal cookie and then replace it with the httponly attribute set, in order to set one.
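A minimal sketch of the missed path, in plain JavaScript rather than the actual nsCookieService C++ (the `setCookie` function, the `fromHttp` flag, and the Map-based jar are all illustrative, not Gecko code): the HttpOnly guard only covered creation of new cookies, so the replacement path let a script "upgrade" its own cookie to HttpOnly.

```javascript
// Illustrative cookie jar; fromHttp = false means the caller is content script.
const jar = new Map();

function setCookie(name, value, httpOnly, fromHttp) {
  const existing = jar.get(name);
  if (!existing) {
    // New cookie: script may not create an HttpOnly cookie.
    if (!fromHttp && httpOnly) return false;
    jar.set(name, { value, httpOnly });
    return true;
  }
  // Buggy replacement path: no fromHttp/HttpOnly check here at all.
  jar.set(name, { value, httpOnly });
  return true;
}

// A script first creates a normal cookie...
setCookie("sid", "abc", false, false);
// ...then replaces it with the HttpOnly attribute set.
setCookie("sid", "hidden", true, false);

console.log(jar.get("sid").httpOnly); // prints: true
```

The result is a script-set cookie that is now HttpOnly, and therefore invisible to the web app's own scripts from that point on.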
This could be used to DoS any web app that relies on reading cookies from script, by hiding those cookies from it. The server could reset them as non-httponly, but many web apps don't re-send Set-Cookie on every request if they're already getting a Cookie header from the client.
Created attachment 271676 [details] [diff] [review]
alternate approach

Or we could just move the "if (!aFromHttp && aCookie->IsHttpOnly())" check into a common spot and fix it that way. Have a preference? I think this one is clearer.
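For illustration, a sketch in plain JavaScript of what moving the check to a common spot buys (again, the names `setCookie` and `fromHttp` are invented stand-ins for the Gecko internals, not the actual patch): one check performed before branching into create vs. replace covers both paths.

```javascript
const jar = new Map();

function setCookie(name, value, httpOnly, fromHttp) {
  // Common spot: a non-HTTP (script) caller may never set HttpOnly,
  // whether it is creating a new cookie or replacing an existing one.
  if (!fromHttp && httpOnly) return false;
  // Script also may not replace an existing HttpOnly cookie
  // (the case already handled by bug 383181).
  const existing = jar.get(name);
  if (existing && existing.httpOnly && !fromHttp) return false;
  jar.set(name, { value, httpOnly });
  return true;
}

// The attack from this bug now fails:
setCookie("sid", "abc", false, false);          // normal script cookie: ok
const ok = setCookie("sid", "x", true, false);  // replace with HttpOnly: rejected
console.log(ok, jar.get("sid").httpOnly);       // prints: false false
```

Because the check runs before either branch, there is no replacement path left that skips it.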
Comment on attachment 271676 [details] [diff] [review]
alternate approach

good catch, i like this version. if you approve this one i'll land this on branch along with the rest.
Comment on attachment 271676 [details] [diff] [review]
alternate approach

approved for 18.104.22.168, a=dveditz for release-drivers
Comment on attachment 271676 [details] [diff] [review]
alternate approach

sr=mconnor, let's get this landed ASAP
checked in on trunk and 1.8 branch.
Can someone give a test case to repro this issue?
I dislike having a compiled-code test for this because 1) it's harder to edit and read, 2) the test environment is less like the real web than it needs to be, and 3) the cookie tests in particular are extremely noisy even when they pass. But this *is* an automated test, I guess.
in most cases, i'm the one writing and maintaining the tests, and it's easy for me. if you want to rewrite the tests, please feel free to file a bug and convert them to a different language. if you want the noise reduced, file a bug, or even better, a patch.
The conversion process is tedious and involved enough that I probably won't do it, especially for tests that already run automatically and report success/failure usefully enough that failures actually get noticed when they happen. I've done this once before, in bug 398952, and the experience left me somewhat sour on rewriting large existing tests that fit that bill but have some niggling problems, particularly since in this case I'd port them to Mochitests to better emulate the real world rather than just modifying the C++ tests to not spew so much.
fixed22.214.171.124 with combined checkin to bug 178993