Bug 112384: Unable to download some CRLs
Status: Closed, VERIFIED FIXED (opened 23 years ago, closed 22 years ago)
Categories: Core Graveyard :: Security: UI, defect, P1
Tracking: Not tracked
Target Milestone: psm2.2
People: Reporter: KaiE; Assigned: rangansen
Whiteboard: [adt1 rtm]
Attachments (1 file, 2 obsolete files)
patch, 2.67 KB; flags: review+ (KaiE), superreview+ (KaiE), approval+ (jesup)
I see this bug with both NS 6.2 and the current Mozilla trunk. Try to download: https://www.thawte.com/cgi/lifecycle/getcrl.crl?skeyid=%07%15%28mps%aa%b2%8a%7c%0f%86%ce8%93%008%05%8a%b1 The browser tries to download forever. To make sure this is not caused by transferring the content over SSL, I made the same content available at www.kuix.de/misc/test8/crl.php, which shows the same endless-load problem. Expected: the CRL is downloaded quickly and imported.
Comment 1 • 23 years ago (rangansen)
Assignee: ssaux → rangansen
Priority: -- → P1
Target Milestone: --- → 2.2
Comment 2 • 23 years ago (Reporter)
I repeated the test. When I download from http://www.kuix.de/misc/test8/crl.php it works. When I download from https://www.kuix.de/misc/test8/crl.php (using https instead of http) it does not work. Trying to download from thawte (which always uses https) does not work either. Is it possible that downloading CRLs over https does not work at all?
Comment 4 • 23 years ago
This works in build ID 2002021803.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → WORKSFORME
Comment 5 • 23 years ago (Reporter)
Reopening. This does not work with my current build on Linux. Maybe this is a Linux/Unix-only problem. Stephane, I assume you tested Windows. Can somebody with Linux please confirm the problem?
Status: RESOLVED → REOPENED
Resolution: WORKSFORME → ---
Comment 6 • 23 years ago
The test site https://www.kuix.de/misc/test8/crl.php doesn't work for me on any platform; however, the secure site https://lab212sun.mcom.com:444/DisplayCRL.html to retrieve CRLs works for me on all platforms.
Comment 7 • 22 years ago
wfm
Status: REOPENED → RESOLVED
Closed: 23 years ago → 22 years ago
Resolution: --- → WORKSFORME
Comment 8 • 22 years ago
Reopening. I'm seeing the same thing as on 2002-02-25 08:54.
Status: RESOLVED → REOPENED
Resolution: WORKSFORME → ---
Comment 9 • 22 years ago
Is this still an issue?
Comment 10 • 22 years ago
WFM
Status: REOPENED → RESOLVED
Closed: 22 years ago → 22 years ago
Resolution: --- → WORKSFORME
Comment 12 • 22 years ago (Reporter)
Reopening. It still does not work for me correctly. While the URL on my own test page now seems to work, the page from the Thawte server still does not: I still see it loading forever, even after waiting 10 minutes. Can you reproduce the problem? Did you try on Linux? Seen with a branch build.
Status: VERIFIED → REOPENED
Resolution: WORKSFORME → ---
Comment 13 • 22 years ago
The Apache web server at Thawte is using HTTP/1.1 chunked encoding, and Mozilla and Netscape 6.2 are unable to download the object correctly. This is an HTTP parsing bug. I tested with a command-line HTTP/1.1 client I wrote, which handles chunked decoding of objects from the server, and was able to download the CRL successfully. So this is not a server-side problem but a Mozilla networking bug.
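For reference, a chunked HTTP/1.1 response (RFC 2616, section 3.6.1) carries the body as a series of chunks, each prefixed with its size in hexadecimal, and ends with a zero-size chunk. Schematically (an illustrative example, not the actual Thawte trace; the Content-Type shown is the registered CRL type from RFC 2585):

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked
    Content-Type: application/pkix-crl

    4                 ; chunk size in hex, terminated by CRLF
    Wiki              ; exactly 4 bytes of body data, then CRLF
    5
    pedia
    0                 ; zero-size chunk ends the body

A client that treats the chunk bodies as text rather than counted bytes will misbehave on a binary object like a CRL.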
Comment 14 • 22 years ago
As an additional note, the non-HTTP test site that Kai provided also fails for me in Mozilla. This is probably due to the difference in link speed between the US and Germany: as packets need to be fragmented, the web server switches to transmitting the HTTP response in chunks. So you probably don't see the problem locally, but I do. Chunked encoding is defined in RFC 2616 (HTTP/1.1). It is a required feature ("MUST") of both servers and clients; a client cannot claim HTTP/1.1 compliance in its request if it does not properly understand chunked responses from a server. This bug shows that Mozilla is not HTTP/1.1 compliant at the moment. The bug is in the HTTP/1.1 chunk decoder in Mozilla. The CRL is a binary object; the chunk parser probably has some wrong string logic that checks for NULs, and therefore it chokes. This is a serious Mozilla problem that does not just affect CRLs but all HTTP browsing in general, and it should be very high priority. Both Apache and iPlanet/Netscape Enterprise Server versions 4.1 and higher automatically enable chunked encoding when they encounter HTTP/1.1 clients, so until this bug gets resolved, Mozilla will have problems downloading from these servers.
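To illustrate the hypothesized failure mode (a minimal sketch only; this is not Mozilla's actual chunk decoder, and later comments in fact trace the bug elsewhere): the decoder must copy chunk bodies by their counted size, because NUL-terminated string functions stop at the first 0x00 byte of binary data.

    #include <cstdlib>
    #include <cstring>

    // Decode one chunk from buf[0..len). Returns the number of input bytes
    // consumed, or 0 if more input is needed. Binary-safe: the chunk body
    // is copied by its counted size, never treated as a C string.
    size_t DecodeOneChunk(const unsigned char* buf, size_t len,
                          unsigned char* out, size_t* outLen)
    {
      // Find the CRLF that ends the chunk-size line by scanning bytes;
      // strstr()/strchr() could stop early at an embedded NUL.
      size_t i = 0;
      while (i + 1 < len && !(buf[i] == '\r' && buf[i + 1] == '\n'))
        ++i;
      if (i + 1 >= len)
        return 0;                        // need more input

      // strtoul stops at the first non-hex byte, at the latest at the
      // CRLF we just located (or at a ';' introducing a chunk extension).
      unsigned long chunkSize = strtoul((const char*)buf, NULL, 16);
      size_t dataStart = i + 2;
      if (dataStart + chunkSize + 2 > len)
        return 0;                        // body + trailing CRLF not buffered yet

      memcpy(out, buf + dataStart, chunkSize);  // counted copy, NUL-safe
      *outLen = chunkSize;
      return dataStart + chunkSize + 2;
    }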
Comment 15 • 22 years ago
That should have read non-SSL at the beginning of the previous comment of course.
Updated • 22 years ago
OS: Linux → All
Hardware: PC → All
Comment 16 • 22 years ago (Reporter)
Darin, this bug turns out to be an HTTP/1.1 bug. Can you please read Julien's comments?
Comment 17 • 22 years ago
julien: can you provide a testcase? or at least a packet trace of the data mozilla fails to parse? many websites serve up chunked transfer coded documents (binary... not just text), and mozilla is able to view those websites just fine. so, i'm at a loss... can you provide more details about the failure you're observing? thx!
Comment 18 • 22 years ago
Darin, my main observation was that the browser hung, and when I pressed "Escape" after waiting a while, I got the following pop-up dialog: "The browser cannot import the Certificate Revocation List (CRL). Download of the CRL failed due to Network problems. Please ask your system administrator for assistance." This told me that something is wrong with the download code in Mozilla. I don't think the security code to import the CRL is even called, though I will check that. I used my other client to trace the full HTTP response and saw that chunking was enabled on the server side. Chunking typically gets enabled on servers only for dynamic documents whose size is not known ahead of time, not for static documents for which a Content-Length header can be sent. I tried disabling HTTP/1.1, which should disable chunking, but that didn't resolve the problem with Thawte, so it looks like I may have been on the wrong track. I will try to generate a packet trace, but this is harder now that Kai's non-SSL test server has been brought down; only the Thawte SSL test server can be used to reproduce it.
Comment 19 • 22 years ago
I reproduced the Mozilla problem with a CRL on a Netscape Enterprise Server. It didn't require SSL or chunked encoding; even with plain insecure HTTP/1.0 I was able to get the browser to stall in the download, so it's definitely a download problem. The packet trace (which I emailed privately) shows that only the beginning of the file is downloaded; then Mozilla stalls trying to download the rest. I used a 2 MB CRL for testing, and the binary packet trace (before being converted to human-readable form) was only about 70 KB after several minutes of sitting in the browser. It seemed that there was still some network activity, but it was extremely slow; perhaps the browser was in a download loop with a delay in it. I am not familiar with the code, but I can definitely rule out NSS as the source of the problem here, since the CRL import APIs require you to have the whole data buffer before you can call them.
Comment 20 • 22 years ago (Reporter)
Thanks for your research; your input showed me the right path to the solution. The implementation of nsIStreamListener::OnDataAvailable is broken: it fails to read the amount of data it is allowed to read. Well, maybe broken is the wrong word. The implementation relies on the given content length and is unable to work correctly with streams where the content length is different or unknown. I have a working patch.
Status: REOPENED → ASSIGNED
Comment 21 • 22 years ago (Reporter)
Javi, can you please review?
Updated • 22 years ago (Reporter)
Comment 22 • 22 years ago
Comment on attachment 82571 [details] [diff] [review] Suggested Fix: r=javi. Assuming the call to aIStream is correct, since kaie claims this works. (I'm not really sure that's the correct way to call that interface.)
Attachment #82571 - Flags: review+
Comment 23 • 22 years ago (Reporter)
Comment on attachment 82571 [details] [diff] [review] Suggested Fix: The second parameter to aIStream->Read specifies how many bytes should be read. The old code tried to read the maximum that the buffer allows, but failed to use the correct buffer size; I corrected that. But now that I look at it more closely, I realize that trying to read more data than is available is not a good idea either: we could block. I suggest we read at most the number of bytes we were given as a parameter. Other implementations of this interface that I looked at do the same.
Attachment #82571 - Flags: needs-work+
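A minimal sketch of the approach comment 23 describes, with hypothetical class and member names (CRLDownloadListener, mBuffer, mBufferOffset, mBufferSize); the real code is in nsNSSComponent.cpp, and the actual change is the attached patch, not this sketch. The two key points: read exactly the aLength bytes the stream reported, and grow the buffer rather than trusting any precomputed length.

    NS_IMETHODIMP
    CRLDownloadListener::OnDataAvailable(nsIRequest* aRequest,
                                         nsISupports* aContext,
                                         nsIInputStream* aIStream,
                                         PRUint32 aSourceOffset,
                                         PRUint32 aLength)
    {
      // Grow the buffer if this batch of data won't fit. The initial size
      // (derived from the reported content length) is only a starting point.
      if (mBufferOffset + aLength > mBufferSize) {
        PRUint32 newSize = mBufferSize ? mBufferSize : kDefaultCertAllocLength;
        while (mBufferOffset + aLength > newSize)
          newSize *= 2;
        char* newBuffer = (char*)PR_Realloc(mBuffer, newSize);
        if (!newBuffer)
          return NS_ERROR_OUT_OF_MEMORY;
        mBuffer = newBuffer;
        mBufferSize = newSize;
      }

      // Read no more than aLength: asking the stream for more bytes than
      // it reported available could block.
      PRUint32 amountRead = 0;
      nsresult rv = aIStream->Read(mBuffer + mBufferOffset, aLength, &amountRead);
      if (NS_FAILED(rv))
        return rv;

      mBufferOffset += amountRead;
      return NS_OK;
    }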
Comment 24 • 22 years ago (Reporter)
This patch has only one line changed: the second parameter to aIStream->Read is now "aLength".
Attachment #82571 - Attachment is obsolete: true
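The shape of that one-line change, reconstructed for illustration (variable names are hypothetical; the real diff is attachment 82582):

    -  rv = aIStream->Read(mBuffer + mBufferOffset, mBufferSize - mBufferOffset, &amountRead);
    +  rv = aIStream->Read(mBuffer + mBufferOffset, aLength, &amountRead);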
Comment 25 • 22 years ago (Reporter)
Comment on attachment 82582 [details] [diff] [review] Updated Fix: Carrying over the r=javi from the previous patch, since I only changed the line he asked me to check.
Attachment #82582 - Flags: review+
Comment 26 • 22 years ago
Comment on attachment 82582 [details] [diff] [review] Updated Fix:

>Index: mozilla/security/manager/ssl/src/nsNSSComponent.cpp
>+ PRInt32 contentLength;
>+ rv = channel->GetContentLength(&contentLength);
>+ if (rv != NS_OK || contentLength <= 0)
>+   contentLength = kDefaultCertAllocLength;

Use NS_FAILED(rv) instead of comparing against NS_OK. Actually, you should be careful about trusting nsIChannel::contentLength like this: it is incorrect if the document has been served up with "Content-Encoding: gzip", and moreover, servers sometimes lie about the Content-Length header. Please make sure this code won't fail if the amount of data given to OnDataAvailable exceeds nsIChannel::contentLength when contentLength > 0.
Attachment #82582 - Flags: needs-work+
Comment 27 • 22 years ago (Reporter)
Darin, with the patch, contentLength is only used as the initial buffer size; the buffer grows dynamically as needed. I will attach a new patch containing only the single change to use NS_FAILED, as requested.
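In other words, the snippet Darin quoted uses the channel's reported length purely as an allocation hint, roughly like this (hypothetical member names, matching the sketch after comment 23):

    // Size the first allocation from the reported content length, falling
    // back to a default when it is missing or nonsensical. If the server
    // lied about the length, OnDataAvailable grows the buffer later.
    PRInt32 contentLength;
    rv = channel->GetContentLength(&contentLength);
    if (NS_FAILED(rv) || contentLength <= 0)
      contentLength = kDefaultCertAllocLength;
    mBufferSize = contentLength;
    mBuffer = (char*)PR_Malloc(mBufferSize);
    if (!mBuffer)
      return NS_ERROR_OUT_OF_MEMORY;
    mBufferOffset = 0;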
Comment 28 • 22 years ago (Reporter)
Attachment #82582 - Attachment is obsolete: true
Comment 29 • 22 years ago (Reporter)
Comment on attachment 82607 [details] [diff] [review] Updated patch using NS_FAILED: Carrying over r=javi; sr=darin received by email.
Attachment #82607 - Flags: superreview+
Attachment #82607 - Flags: review+
Comment 30 • 22 years ago (Reporter)
Checked in to trunk.
Status: ASSIGNED → RESOLVED
Closed: 22 years ago → 22 years ago
Resolution: --- → FIXED
Comment 31 • 22 years ago
Verified on the 5/7 Win2000 trunk build.
Comment 32 • 22 years ago
Adding adt1.0.0+ for checkin to the Mozilla 1.0 branch. Please get drivers' approval before checking in, and then add the fixed1.0.0 keyword.
Comment 33 • 22 years ago
Comment on attachment 82607 [details] [diff] [review] Updated patch using NS_FAILED: a=rjesup@wgate.com. Make sure it's checked into the trunk as well, if needed.
Attachment #82607 - Flags: approval+
Comment 35 • 22 years ago
Verified on branch.
Status: RESOLVED → VERIFIED
Keywords: fixed1.0.0 → verified1.0.0
Updated • 8 years ago
Product: Core → Core Graveyard