Closed Bug 141256 Opened 23 years ago Closed 22 years ago

NSS should give a better error than OCSP error -8073


(NSS :: Libraries, defect, P1)



(Not tracked)



(Reporter: KaiE, Assigned: julien.pierre)




(Whiteboard: [adt1 rtm])


(1 file, 8 obsolete files)

From :

These two entities have a certificate issued by Verisign with an AIA extension
that points to the Verisign OCSP responder, but they have not bought OCSP
service from Verisign, so all requests bounce with an unauthorized
error, which Mozilla seems to have problems parsing.

To solve this issue, Mozilla needs first to be able to correctly parse this
answer (it's a 5 byte unsigned response).

This is seen with or other URLs
reported in bug 130885 and its duplicates.

Actual behaviour: NSS reports a general error code, indicating a general unknown error.

Expected behaviour: NSS should correctly parse the OCSP response, and return a
better error code, indicating specifically that there is no technical error, but
 an error at the application level, because the server is unwilling to provide
information. This allows the application to detect that kind of failure and
react in a more user friendly way (like warning the user and offering to connect
Blocks: 130885
No longer blocks: 130885
Assigned the bug to Julien.
Assignee: wtc → jpierre
would it be possible please to have Bishakha be the default QA contact?
QA Contact: sonja.mirtitsch → bishakhabanerjee
Blocks: 130885
Julien, I think this is a very important bug, because it blocks bug 130885.
Keywords: nsbeta1+
Whiteboard: [adt1 rtm]
I looked at and traced ocsp.c . The failure is caused by the following code :

	if (((char *)bufEnd - newline) < 40) {
	    len = (char *)bufEnd - newline;
	    PORT_Memmove(buf, newline, len);
	    bytesRead = ocsp_MinMaxRead(sock, buf + len, 40 - len,
					bufsize - len);
	    if (bytesRead <= 0) {
		if (bytesRead == 0)
		    PORT_SetError(SEC_ERROR_OCSP_BAD_HTTP_RESPONSE);
		goto loser;
	    }
	    newline = (char *)buf;
	    bufEnd = buf + len + bytesRead;
	}

Apparently we just require a certain amount of data, otherwise we consider the 
response invalid. That doesn't seem correct. Is there anything in OCSP that 
states the response must be at least 40 bytes ?
Or can it be as little as 5 bytes ?
Also, is the Verisign 5-byte response outside of the OCSP specification - ie. a 
proprietary response ? I am concerned about making a code change proprietary 
to their implementation. Should I not be ?
Usually responses will be larger, but in fact they can be very small:

   OCSPResponse ::= SEQUENCE {
      responseStatus         OCSPResponseStatus,
      responseBytes          [0] EXPLICIT ResponseBytes OPTIONAL }
   OCSPResponseStatus ::= ENUMERATED {
       successful            (0),  --Response has valid confirmations
       malformedRequest      (1),  --Illegal confirmation request
       internalError         (2),  --Internal error in issuer
       tryLater              (3),  --Try again later
                                   --(4) is not used
       sigRequired           (5),  --Must sign the request
       unauthorized          (6)   --Request unauthorized
   }

If the request is malformed, then the appropriate response would be 
 OCSPResponse ::= SEQUENCE {
      responseStatus         1,
      responseBytes          [NOT PRESENT] }

which would probably be around 5 bytes. Thus, this is a valid response.
Thanks, Steve. I went to look for the RFC and found it. I am still not sure that 
the OCSP response from Verisign is correct, but we definitely have a bug in our 
HTTP parser within ocsp.c in NSS, which causes it to abort and not even look at 
the content of the OCSP response. I'm trying to fix that now.

Priority: -- → P1
Target Milestone: --- → 3.5
Attached patch proposed patch (obsolete) — Splinter Review
The response from Verisign is correct - it is "6", unauthorized.
I have attached a patch that resolves it. Now mozilla displays "Error trying to 
validate certificate from using OCSP - unauthorized 
request" . PSM could intercept that error via 
As for the patch, I just got rid of the code that tries to load more data. As
stated in the comment, the reason for doing it was to have enough data for the
PORT_Strncasecmp functions not to crash. However, these functions won't read
past a null terminator. Our strings are null-terminated, so we just need to
make sure the input strings are also null-terminated. I just noticed that I
didn't do that. It can simply be accomplished by allocating a buffer of n + 1
and setting the last byte to zero.

Attached patch correct patch (obsolete) — Splinter Review
Attachment #83121 - Flags: needs-work+
I have attached a patch that does all that's needed. Please review.
Also, I note that the browser is making two OCSP requests when I connect. I get 
two pop-up errors. I believe the client is making two connections and this would 
be a client or PSM bug.
Bob, could you please review this patch ? Thanks.
I'm still waiting for a review on the 2nd patch to check in to NSS 3.5 / tip.
Comment on attachment 83129 [details] [diff] [review]
correct patch

Looks like it's doing what it purports to do in the bug.

Attachment #83129 - Flags: approval+
Blocks: 54104
Checking in ocsp.c;
/cvsroot/mozilla/security/nss/lib/certhigh/ocsp.c,v  <--  ocsp.c
new revision: 1.6; previous revision: 1.5
No longer blocks: 54104
Closed: 23 years ago
Resolution: --- → FIXED

1. Do you think the initial read of at least 128 bytes
    bytesRead = ocsp_MinMaxRead(sock, buf, 128, bufsize);
will succeed for any legal response?  The comment in the
code sounds like the author was not sure about status-only responses.

2. Do you think this piece of code could cause trouble?
I don't know why we want to read at least 64 bytes here.

        /*
         * When we are here, either:
         *      - newline is NULL, meaning we're at the end of the buffer
         *        and we need to refill it, or
         *      - newline points to the first character on the new line
         */
        if (newline == NULL) {
            /* So, we need to refill the buffer. */
            len = pendingCR ? 1 : 0;
            bytesRead = ocsp_MinMaxRead(sock, buf + len, 64 - len,
                                        bufsize - len);
            if (bytesRead <= 0) {
                if (bytesRead == 0)
                    PORT_SetError(SEC_ERROR_OCSP_BAD_HTTP_RESPONSE);
                goto loser;
            }
            s = (char *)buf;
            bufEnd = buf + len + bytesRead;
            if (pendingCR) {
                buf[0] = '\r';
                pendingCR = PR_FALSE;
            }
        }

3. Regarding comment #10, I think the reason to have at
least 40 bytes is not to prevent PORT_Strncasecmp from
crashing but to ensure that we have enough input data
for PORT_Strncasecmp.

For example, suppose we have "content-type: applica"
at 'newline' and the rest of 'buf' is all null bytes.
The two comparisons
    PORT_Strncasecmp(newline, "content-", 8)
    PORT_Strncasecmp(s, "type: ", 6)
will both succeed, but the next comparison
    PORT_Strncasecmp(s, "application/ocsp-response", 25)
will fail because the input data is short.  So we break
out of the while loop and fail with the
SEC_ERROR_OCSP_BAD_HTTP_RESPONSE error, which is wrong.

The code your patch deleted would detect that the input
line is short and try to read more data before doing the
comparisons.  So it seems that your patch would introduce
the bug I described above.
1) This is not a problem because the server will close the connection as we are 
not using keep-alives. Even if the response is less than 128 bytes it will be 
OK because ocsp_MinMaxRead checks for EOS and succeeds in that case with less 
than the minimum amount requested.

2) No, it isn't a problem, for the same reasons as in 1).
64 is probably the expected length of an HTTP header + value so this is what's 
being requested.

3) Yes, you are right, there could be a problem if the response is fragmented 
in multiple packets. I thought it was still refilling the buffer in the loop, 
but it only does so if newline is NULL, which is not always the case.

In reality, we can't make big assumptions like 40 bytes about the size of a 
header like before - the header line could be as short as "a: b\n" , which would 
be 5 bytes. But since we are not using keep-alive and non-blocking sockets, we 
could always try to read more - the only limit is the buffer size. The only 
thing we would need to do is make sure we have a \r or \n to know that we have a 
complete header; it isn't based on size.

I see many other problems in the HTTP parsing code, unfortunately, like the fact 
that there are other content- headers that are valid for the server to send but 
that will screw it up. And it is needlessly complicated, with its parsing and 
networking intermixed in loops, given what it does at this time anyway. This 
calls for a rewrite.

For now I am reopening this bug and will add one more patch for this problem 
mentioned in your 3rd comment.
Resolution: FIXED → ---
Attachment #83121 - Attachment is obsolete: true
Attachment #83833 - Attachment is obsolete: true
Comment on attachment 83970 [details] [diff] [review]
improvement patch - make sure there is always a null termination right after the last byte read. This helps make sure we only search relevant parts of the buffer and never random data


Storing a null byte after each read is a simple way to
make string functions work correctly on the input buffer.
Good idea!

This means

     buf = PORT_ZAlloc(bufsize+1);
     buf[bufsize] = 0; /* NULL termination so string functions are OK */

can be changed to simply

     buf = PORT_Alloc(bufsize+1);

The extra byte (bufsize+1) is still necessary for the null
byte when we have a full buffer.

I will review the rest of your patch more thoroughly and
may have more comments.

I did the null-termination after buffer reads because I found that strnchr 
which I wanted to use does not exist in the C library.
I agree otherwise the zero'ing is no longer necessary on alloc, but the one 
extra byte is so we never overflow when terminating.

Can you review this one so I can check it in ?
Comment on attachment 83970 [details] [diff] [review]
improvement patch - make sure there is always a null termination right after the last byte read. This helps make sure we only search relevant parts of the buffer and never random data

>+    buf = PORT_ZAlloc(bufsize+1);
>     buf[bufsize] = 0; /* NULL termination so string functions are OK */

These two lines should be changed back to
      buf = PORT_Alloc(bufsize+1);

>+    /* make sure we have the full HTTP header and value */
>+    while (!strchr(newline, '\r') && !strchr(newline, '\n')) {
>+        /* need more data . add more to the buffer by first
>+         * copying what's left to the beginning.
>+         */
>+           len = (char *)bufEnd - newline;
>+           PORT_Memmove(buf, newline, len);
>+           bytesRead = ocsp_MinMaxRead(sock, buf + len, 40 - len,
>+                                       bufsize - len);
>+           if (bytesRead <= 0) {
>+               if (bytesRead == 0)
>+                   PORT_SetError(SEC_ERROR_OCSP_BAD_HTTP_RESPONSE);
>+               goto loser;
>+           }
>+           newline = (char *)buf;
>+           bufEnd = buf + len + bytesRead;
>+           *bufEnd = 0 ; /* ensure null termination after content */
>+    }

The indentation of the while and the "need more data" comment is wrong.

It took me a while to convince myself that this block of code can be
executed repeatedly in the while loop.	It's difficult to understand
because of the vestige of the original code, such as the PORT_Memmove
call and the magic constant 40.
don't you mean PORT_ZAlloc?
If you use ZAlloc it will already be null terminated so indeed the 
buf[bufsize] = 0;
line is not needed, but PORT_Alloc would require the null termination, wouldn't it?
But maybe I don't quite understand the PORT_{Z,}Alloc routines.
It is not necessary to zero the buffer right after it is
allocated because whenever we read data into the buffer
we add a null byte after the data.

Julien, I need to understand the original code before I
can say whether your patch is good or not.  My gut feeling
is that your patch is good.
This is Julien's patch (attachment 83970 [details] [diff] [review]) with some minor changes.
1. The buffer is allocated with PORT_Alloc and is not null-terminated
at creation time.
2. Use '\0' instead of 0 for the null character. (a stylistic change)
3. Fixed the indentation of the while loop. (a stylistic change)

I am still reviewing this patch.
I am now rewriting the entire function that does the HTTP response parsing for OCSP, 
since I believe the current implementation is too confusing to be maintained 
confidently with patches.
This rewrite provides the following enhancements :
1) only two simple I/O paths, one for headers and one for body
2) checking of HTTP status line, including the presence of HTTP/ and the
response code 200
3) more readable and maintainable code, especially for the header parsing
4) HTTP headers limited to 8 KB
5) HTTP response body limited to 8 KB. Someone needs to tell me whether this is
a realistic limit or not. We definitely need an upper bound to prevent a denial
of service attack caused by very large OCSP responses causing big allocations
of RAM. This would affect servers that use the NSS OCSP support to verify their
client certs.

I tested this patch with the original test case and it works with the Verisign
unauthorized responses.
Attachment #83129 - Attachment is obsolete: true
Attachment #83970 - Attachment is obsolete: true
Attachment #84528 - Attachment is obsolete: true
I almost forgot the additional enhancement :

6) an I/O timeout of 30s, currently hard-coded, but much better than the 
previous code that was blocking forever.

Wan-Teh, please review. I know this is over 200 lines of new code, but hopefully 
it is more readable and the comments I put should help and I'd like this to go 
into NSS 3.5.
style changes
report SEC_ERROR_NO_MEMORY when appropriate
simplifying of while receive loops
freeing of buffer upon success
Attachment #84593 - Attachment is obsolete: true
Attached patch patch update (obsolete) — Splinter Review
Attachment #85349 - Attachment is obsolete: true
changing impact to [adt3 rtm] per adt.
Whiteboard: [adt1 rtm] → [adt3 rtm]
Attachment #86166 - Flags: review+
Comment on attachment 86166 [details] [diff] [review]
patch update


Please remove the "if (offset)" around the PORT_Memcpy
call before you check in because at that point, 'offset'
is guaranteed to be nonzero:

>+     * And copy the data left in the buffer.
>+    */
>+    if (offset)
>+    {
>+        PORT_Memcpy(result->data, inBuffer, offset);
>     }

I talked to Julien about the problems of his previous
patches, which he has addressed in this patch.	Here
are some final remarks.

1. The original code allows leading white space in
the HTTP response.  I don't know why that's necessary.

2. The original code allows an HTTP status line of
the form "HTTP/x.y zzz\n", with no comment (e.g., "OK").
The new code requires a comment in the HTTP status
line.  That is, it requires two spaces in the HTTP
status line.

3. The original code is more tolerant of the line
endings in the HTTP response.  The original code
accepts not only CRLF but also CR and LF.  The new
code only accepts CRLF (required by the HTTP spec).

4. The original code accepts any status code in the
200-299 range.	The new code only accepts the status
code 200.

5. I am a little worried that we may be calling
PORT_Realloc too often, but I can't come up with a
good solution to avoid it.  We should choose the
value of 'bufSizeIncrement' so that we are not
calling PORT_Realloc too often and we are not wasting
too much buffer space.
Wan-Teh, in response to #34 :

1. I don't think it is necessary

2. The new code requires two spaces, as the BNF for HTTP in the RFC does

3. Again I try to follow the RFC more strictly

4. 200 is the only currently defined code that's associated with a valid full 
HTTP response for a regular GET request. The other codes are responses to other 
actions (PUT, DELETE) or byte-range requests. So checking for 200 is correct.

5. I think 1 KB will be enough for most OCSP responses, certainly for any 
unsigned responses. Signed ones may require 2 KB. I doubt this is an issue. It 
might be if we tried to verify an entire cert chain in one request and response. 
I am not familiar enough with OCSP to know if that's possible, but I know that 
NSS doesn't even attempt to verify any cert but the leaf with OCSP.
very last update
Attachment #86166 - Attachment is obsolete: true
Checked in to NSS_3_5_BRANCH :

Checking in ocsp.c;
/cvsroot/mozilla/security/nss/lib/certhigh/ocsp.c,v  <--  ocsp.c
new revision:; previous revision: 1.6

Checked in to tip :

Checking in ocsp.c;
/cvsroot/mozilla/security/nss/lib/certhigh/ocsp.c,v  <--  ocsp.c
new revision: 1.7; previous revision: 1.6
The fix is now in NSS_CLIENT_TAG, which is used by the Mozilla
client's trunk.  Marked the bug fixed.
Closed: 23 years ago22 years ago
Resolution: --- → FIXED
fix is on trunk. This is good behavior. We should land this on the branch.
We now detect the true error and display the true reason for the failure.
This is also low risk as the fix is only in one NSS file, and only people who
use OCSP will be affected.
Whiteboard: [adt3 rtm] → [adt1 rtm]
marking adt1.0.1- per discussion in the adt.  Let's get this into a future release.
Keywords: adt1.0.1adt1.0.1-
IMO the risk of not checking this in is higher than the risk of checking it in.
There are potential buffering logic problems in the old code that got resolved
in this patch. Also, this fix is already a part of NSS 3.5, which is a release
that we are making for the sole purpose of Mach V. Nobody else is using that
release.
Verified on trunk with this test URL -
adding adt1.0.1+.  Please get drivers approval before checking into the branch.
Keywords: adt1.0.1-adt1.0.1+
has anyone super reviewed this patch? also, does the last r= carry forward to
the "checked in patch"?
Comment on attachment 86527 [details] [diff] [review]
patch as checked in


Yes, my r= carries forward to this patch.

NSS is exempt from Mozilla's super review requirement (rule #2).
This patch has only been reviewed by me.
Do you want another code review?
Attachment #86527 - Flags: review+
please checkin to the 1.0.1 branch. once there, remove the "mozilla1.0.1+"
keyword and add the "fixed1.0.1" keyword.
Checked in to MOZILLA_1_0_BRANCH, mozilla/security/nss/lib/certhigh/ocsp.c , r 

The wrong cvs comment went into the branch for the check-in - it was referring
to another bug. I don't think it's possible to change the cvs check-in comment
afterwards, unfortunately.