User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.6) Gecko/20060728 Firefox/1.5.0.6
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.6) Gecko/20060728 Firefox/1.5.0.6

On certain pages, when pressing F5, Firefox adds text from another page (from the cache?) to the end of the current page. Pressing Ctrl-F5 then shows the page as it should be. This problem does not occur in IE, or when the Firefox cache has been disabled.

Reproducible: Always

Steps to Reproduce:
1. Go to http://www.oldwings.nl/content/c47_yic/c47x.htm
2. Then go to http://www.oldwings.nl/content/c82_yic/c82x.htm
3. Go to the end of the page; everything is OK.
4. Now press F5 and voila, there is the garbage at the end of the page.
5. Press Ctrl-F5 and the garbage is gone.

Actual Results: Garbage from the previous page was shown after the end of the current page.

Expected Results: It should not have shown the extra garbage at the end.

I don't know why it is only this particular combination of pages that produces this; it appears to be related to the file name. The problem does NOT occur if you do this:
1. Go to http://www.oldwings.nl/content/c47_yic/xc47.htm
2. Then go to http://www.oldwings.nl/content/c82_yic/xc82.htm
These pages are identical to the ones mentioned above; only the filename differs.

Trying to disable the browser cache through meta tags does not help; I have yet to try adding HTTP headers. The problem also happens in Firefox 1.0.4 and can be reproduced as above. The garbage has also been seen by a friend using the Mozilla suite (version unknown), but it could not be reproduced using the above method.
Reproduced in Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9a1) Gecko/20060808 Minefield/3.0a1, and in Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.8.1b1) Gecko/20060808 BonEcho/2.0b1. In a debug build, upon reloading the second page I see the message "ASSERTION: unexpected progress values: 'progress <= progressMax', file [...]/mozilla/netwerk/protocol/http/src/nsHttpChannel.cpp, line 4199".
Yeah, I saw this happening too in the latest trunk build on Windows. After I cleared my cache, I couldn't reproduce it anymore.
Correction: I wrote that it also happened in Firefox 1.0.4; this should have read 1.5.0.4. Also, I can confirm it is somehow related to the filename.

The problem DOES occur for these name combinations:
- c47.htm and c82.htm
- c47x.htm and c82x.htm

But it does NOT occur for these combinations:
- xc47.htm and xc82.htm
- c47.htm and c82a.htm

I have not tried any other combinations.
FYI: Two months now. Since obviously nothing is being done about this, I have now removed the test pages from the site. Therefore the mentioned links no longer work.
Aad, sorry about that. Could you please put the test pages back? Then I'll cc some people who might have the time/knowledge to fix this bug.
OK, the pages are back. Just checked using Firefox 1.5.0.7, and the problem still occurs as described. Hopefully someone will pick this up this time.
Boris, Darin, biesi, could anyone of you take a look? Aad, I really hope you can keep the testcase available. I'm not sure I could make a similar testcase on my own server. Thanks for putting the testcases back!
Martijn, I have no problem keeping these pages available, at least as long as someone is working on this bug. One additional bit of info. In comment #3 I wrote:

The problem DOES occur for these name combinations:
- c47.htm and c82.htm
- c47x.htm and c82x.htm

But it does NOT occur for these combinations:
- xc47.htm and xc82.htm
- c47.htm and c82a.htm

To avoid any confusion, I would like to add that the pages c47, c47x and xc47 are identical, except for their filenames. The same applies to the pages c82, c82x, xc82 and c82a. Therefore this problem appears to be related to the filename, and not to the contents of the page.
Heheh, I'm not sure what's going on down in Necko-cache-land, but I can tell you that both pages are writing to the same file on disk, and it always stays the size of the first page (~24KB) even after it's had the second page's contents written to it (which is only the first ~16KB). An NSPR log of cache:5 was utterly useless, as it neglects to mention anything sane.
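The symptom described above can be reproduced outside the cache entirely. This is a minimal stand-alone sketch (not Mozilla code; the file name and sizes are made up for illustration) showing that overwriting a file in place without truncating it keeps the old length, so the tail of the first page's data survives:

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")

# "First page": 24 bytes of b"A" (stand-in for the ~24KB first entry).
with open(path, "wb") as f:
    f.write(b"A" * 24)

# "Second page": 16 bytes of b"B", opened in read/write mode ("r+b"),
# which overwrites in place but does NOT truncate the file.
with open(path, "r+b") as f:
    f.write(b"B" * 16)

with open(path, "rb") as f:
    data = f.read()

print(len(data))   # 24 -- the old size is kept
print(data[16:])   # b'AAAAAAAA' -- the stale "garbage" tail
os.remove(path)
```

If the disk cache reuses a record the same way without resetting the stored size, serving this file back would append exactly this kind of stale tail to the second page.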
sounds like a hash collision...
But we seem to handle those OK in nsDiskCacheDevice::FindEntry and in the nsCacheEntryHashTable ops.... I can't seem to reproduce the garbage; I'll try the assert when I get home, I guess.
OK, I can certainly confirm the assert. The two URLs do in fact hash to the same thing, so we use the same cache entry, which it looks like we blow away... but do we never reset the size in the block file or something? I tried tracing the cache code, and I can't see where we actually write out the entry size... Alfred, biesi, darin, do you have time to look at this? You all know this code better than I do, I think. In any case, this looks like a major issue to me, since it allows one site to try to steal another site's data by coming up with URIs that hash to the same thing and having smaller pages. So I think we really want to get this fixed.
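The collision can be checked directly. This sketch assumes the disk cache hashes keys with a 32-bit rotate-left-4-and-XOR string hash (as in nsDiskCache.cpp of this era; the real cache key also carries a client prefix such as "HTTP:", which is identical for both URLs and so cannot affect equality -- treat both details as assumptions):

```python
def disk_hash(key: str) -> int:
    """Assumed sketch of nsDiskCache::Hash: rotate left 4 bits, XOR each byte."""
    h = 0
    for c in key.encode("ascii"):
        h = (((h << 4) | (h >> 28)) & 0xFFFFFFFF) ^ c
    return h

colliding = ("http://www.oldwings.nl/content/c47_yic/c47x.htm",
             "http://www.oldwings.nl/content/c82_yic/c82x.htm")
distinct  = ("http://www.oldwings.nl/content/c47_yic/xc47.htm",
             "http://www.oldwings.nl/content/c82_yic/xc82.htm")

print(disk_hash(colliding[0]) == disk_hash(colliding[1]))  # True
print(disk_hash(distinct[0]) == disk_hash(distinct[1]))    # False
```

This also explains the filename sensitivity from comment #3: the rotation repeats every 8 characters, and in the colliding pair the "47"/"82" differences sit exactly 8 characters apart (counted from the end of the URL), so their contributions cancel; renaming to xc47.htm/xc82.htm shifts one pair and breaks the cancellation.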
Our hash is probably not robust against collision-guessing since it's not intended for anything cryptographic. Severity possibly "high" assuming collisions are easy to calculate and given that it's easy to refresh a frame. There might be some difference depending on whether one or both of the pages in question get cached to a separate file or to the lumped __CACHE_00x__ files. In this case they both cache in the file "6EDEB24Ed01". That may limit the minimum size of the attack stub and thus only work against somewhat larger targets. Still, something like an account transaction history page would have a lot of data on it. Worst-case scenario says "high", but in practice might only be "moderate".
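On the "collisions are easy to calculate" point: assuming the key hash is a 32-bit rotate-left-4-and-XOR string hash (an assumption about the code of this era), the rotation has period 32/4 = 8 characters, so any two same-length keys whose character differences repeat exactly 8 positions apart collide by construction. The host and path below are made up for illustration:

```python
def disk_hash(key: str) -> int:
    # Assumed sketch of the cache key hash: rotate left 4 bits, XOR each byte.
    h = 0
    for c in key.encode("ascii"):
        h = (((h << 4) | (h >> 28)) & 0xFFFFFFFF) ^ c
    return h

# "47" and "82" swapped at two spots exactly 8 characters apart, so the
# two differences contribute identical rotations and cancel out.
u1 = "http://attacker.example/47abcdef47.htm"
u2 = "http://attacker.example/82abcdef82.htm"

print(disk_hash(u1) == disk_hash(u2))  # True: deliberately crafted collision
```

No search or cryptanalysis is needed, which supports rating the severity toward "high" rather than "moderate".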
Need someone to step up for this one.
This is, of course, a complex problem. In essence, what happens is that when there is a hash collision, there are two objects active within the disk cache system with the same hash key. Options are:
a. Extend the hash key (but collisions can still occur).
b. Make the cache system collision-proof by allowing multiple entries per hash key (in the 'DiskCacheMap' area).
c. Don't store multiple objects in the cache: when a collision occurs, prevent the second entry from being cached (on disk) at all, i.e. treat it as 'non-cacheable' because of the collision.

What currently happens is that in FindEntry, the collision is sort of detected, because a record for that hash exists but the matching of the key fails. FindEntry then just reports not found (by returning null), and ActiveEntry still creates a new entry (which then ends up re-using a diskmap record of another entry). So there we could intervene and refuse to disk-cache these kinds of hash collisions.

So, option c is probably "easiest" for a quick fix. Option b would be better, since it would also allow colliding entries to be cached.
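A toy model of option (c) -- hypothetical, not the actual patch; class and method names are invented for illustration. A lookup compares the full key stored in the record, and binding refuses to reuse a record whose key does not match, so a colliding entry simply stays uncached:

```python
class TinyDiskMap:
    """Toy one-record-per-hash map, like the disk cache's hash->record table."""

    def __init__(self):
        self.records = {}  # hash -> (full key, data)

    @staticmethod
    def _hash(key):
        # Assumed sketch of the cache key hash: rotate left 4, XOR each byte.
        h = 0
        for c in key.encode("ascii"):
            h = (((h << 4) | (h >> 28)) & 0xFFFFFFFF) ^ c
        return h

    def find(self, key):
        rec = self.records.get(self._hash(key))
        if rec is None or rec[0] != key:
            return None  # miss, or a colliding foreign entry: report not found
        return rec[1]

    def bind(self, key, data):
        rec = self.records.get(self._hash(key))
        if rec is not None and rec[0] != key:
            return False  # option (c): refuse to cache the colliding entry
        self.records[self._hash(key)] = (key, data)
        return True

cache = TinyDiskMap()
print(cache.bind("http://www.oldwings.nl/content/c47_yic/c47x.htm", "page 1"))  # True
print(cache.bind("http://www.oldwings.nl/content/c82_yic/c82x.htm", "page 2"))  # False: refused
print(cache.find("http://www.oldwings.nl/content/c82_yic/c82x.htm"))            # None: never cached
```

Option (b) would instead store a list of (key, data) records per hash bucket, at the cost of a disk-format change.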
Created attachment 246772 [details] [diff] [review] V1: Temporary fix to refuse to cache collision items Darin, a quick stopgap to prevent collision issues in the disk cache. Could you take a look and verify that this is a safe way of preventing collisions? See also my previous comment.
Comment on attachment 246772 [details] [diff] [review] V1: Temporary fix to refuse to cache collision items Also requesting review from bzbarsky.
Comment on attachment 246772 [details] [diff] [review] V1: Temporary fix to refuse to cache collision items Yeah, looks reasonable... would be easier to review if you used the -p option to diff, btw. ;) sr=bzbarsky
Comment on attachment 246772 [details] [diff] [review] V1: Temporary fix to refuse to cache collision items maybe return NS_ERROR_CACHE_IN_USE instead? or perhaps invent a new error code for this? also, would be good to file a new bug about implementing a better solution and reference that bug number in this patch. r=darin
Created attachment 246963 [details] [diff] [review] V2: Use NS_ERROR_CACHE_IN_USE and diff -u8Np
As usual, who can do the checkin for me?
I can check this in as soon as the tree reopens.
Fixed on trunk for 1.9a2.
Comment on attachment 246963 [details] [diff] [review] V2: Use NS_ERROR_CACHE_IN_USE and diff -u8Np This is a pretty straightforward merge to branch (just s/strcmp/nsCRT::strcmp/ in a context line in one spot), and should be reasonably safe, I think (though I'd like to let it bake on trunk for a few days).
Comment on attachment 246963 [details] [diff] [review] V2: Use NS_ERROR_CACHE_IN_USE and diff -u8Np Moving approvals to the next release. We need more bake time on the Trunk, and with code freeze for 2.0.0.1/1.5.0.9 today, we won't get enough to feel comfortable on the branches.
Comment on attachment 246963 [details] [diff] [review] V2: Use NS_ERROR_CACHE_IN_USE and diff -u8Np approved for 1.8/1.8.0 branches, a=dveditz for drivers
Fixed for 2.0.0.2 and 1.5.0.10.
Confirmed presence of bug in 2.0.0.1 by following the steps described in comment 1. The steps do not produce the garbage at the end of the second page for 1.5.0.10pre and 2.0.0.2pre. Marking as verified fixed for both.
Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.0.10pre) Gecko/20070130 Firefox/1.5.0.10pre
Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.2pre) Gecko/2007013007 BonEcho/2.0.0.2pre