Bug 934016 (CVE-2013-5605)
Status: Closed
Opened 11 years ago; closed 11 years ago
Null_Cipher (used during handshake) does not respect maxOutputLen, copying an attacker-supplied # of bytes
Categories: NSS :: Libraries (defect)
Tracking
(firefox25+ verified, firefox26+ verified, firefox27+ verified, firefox28+ verified, firefox-esr17 25+ verified, firefox-esr24 25+ verified, b2g18 25+ fixed, b2g-v1.1hd fixed, b2g-v1.2 fixed, b2g-v1.3 fixed)
People
(Reporter: ryan.sleevi, Assigned: ryan.sleevi)
References
Details
(Keywords: sec-critical)
Attachments
(1 file, 1 obsolete file)
718 bytes, patch | wtc: review+ | ryan.sleevi: checked-in+ | Details | Diff | Splinter Review
The implementation of Null_Cipher uses PORT_Memcpy to copy data from the input buffer to the output buffer, as seen at http://mxr.mozilla.org/nss/source/lib/ssl/ssl3con.c#819
However, it does not validate or cap inputLen to be <= maxOutputLen. As a result, an attacker that sends a malformed SSL record may overwrite memory, including potentially memory on the stack.
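For illustration, the vulnerable pattern reduces to the following standalone sketch (the types, names, and harness are simplified stand-ins, not the actual NSS sources):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for NSS's SECStatus. */
typedef enum { SECSuccess = 0, SECFailure = -1 } SECStatus;

/* The unpatched shape of Null_Cipher: inputLen comes from the record on the
 * wire, maxOutputLen is the size of the caller's buffer, and the copy never
 * compares the two, so inputLen > maxOutputLen writes past the buffer. */
static SECStatus
null_cipher_unpatched(void *ctx, unsigned char *output, int *outputLen,
                      int maxOutputLen, const unsigned char *input, int inputLen)
{
    (void)ctx;
    (void)maxOutputLen;                 /* the bound is never consulted */
    *outputLen = inputLen;
    memcpy(output, input, inputLen);    /* PORT_Memcpy in the real code */
    return SECSuccess;
}
```

With a well-sized buffer this behaves normally; the missing `inputLen <= maxOutputLen` check only matters once an attacker controls the record length.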
Comment 1 • 11 years ago (Assignee)
This was discovered by my colleague, amtinits@chromium.org, while performing fuzzing on NSS.
Assignee: nobody → ryan.sleevi
Status: NEW → ASSIGNED
Target Milestone: --- → 3.15.3
Comment 2 • 11 years ago (Assignee)
Attachment #826186 - Flags: review?
Comment 3 • 11 years ago
Comment on attachment 826186 [details] [diff] [review]
Cap to maxOutputLen in Null_Cipher
Review of attachment 826186 [details] [diff] [review]:
-----------------------------------------------------------------
Ryan, you didn't add a reviewer to the review request. This can happen if the reviewer you chose isn't part of the security group. In that case, CC them to the bug first.
Anyway, I suggest that this check be made a PORT_Assert in Null_Cipher, and instead we do the real check in ssl3_HandleRecord.
I believe the problem is actually here in ssl3_HandleRecord:
if (isTLS && cText->buf->len - ivLen > (MAX_FRAGMENT_LENGTH + 2048)) {
ssl_ReleaseSpecReadLock(ss);
SSL3_SendAlert(ss, alert_fatal, record_overflow);
PORT_SetError(SSL_ERROR_RX_RECORD_TOO_LONG);
return SECFailure;
}
...
/* decrypt from cText buf to plaintext. */
rv = crSpec->decode(
crSpec->decodeContext, plaintext->buf, (int *)&plaintext->len,
plaintext->space, cText->buf->buf + ivLen, cText->buf->len - ivLen);
...
It seems to me we need to be checking (cText->buf->len - ivLen > plaintext->space) instead, or in addition to, the above check.
If you think adding the check to Null_Cipher only is the best approach, then please add a comment about why only Null_Cipher needs to do the check.
Attachment #826186 - Flags: review? → review-
Comment 4 • 11 years ago
Comment on attachment 826186 [details] [diff] [review]
Cap to maxOutputLen in Null_Cipher
Review of attachment 826186 [details] [diff] [review]:
-----------------------------------------------------------------
I think this patch gives reasonable behavior for Null_Cipher
when the output buffer is too small. I suggest an alternative
fix that causes a failure in that case.
::: lib/ssl/ssl3con.c
@@ +818,5 @@
> static SECStatus
> Null_Cipher(void *ctx, unsigned char *output, int *outputLen, int maxOutputLen,
> const unsigned char *input, int inputLen)
> {
> + *outputLen = PR_MIN(maxOutputLen, inputLen);
This function should fail if inputLen > maxOutputLen.
I suggest adding the following at the beginning of the function:
if (inputLen > maxOutputLen) {
*outputLen = 0; /* Matches what PK11_CipherOp does */
PORT_SetError(SEC_ERROR_OUTPUT_LEN);
return SECFailure;
}
... the original code ...
I suspect the callers of SSLCipher ignore *outputLen on failure.
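Putting the suggestion together, a self-contained sketch of the checked version might look like this (the error constant's value and the `set_error` helper are placeholders for PORT_SetError / SEC_ERROR_OUTPUT_LEN, not the real NSS definitions):

```c
#include <assert.h>
#include <string.h>

typedef enum { SECSuccess = 0, SECFailure = -1 } SECStatus;

/* Placeholder error plumbing standing in for PORT_SetError and
 * SEC_ERROR_OUTPUT_LEN; the real values live in NSS headers. */
enum { SEC_ERROR_OUTPUT_LEN_PLACEHOLDER = 1 };
static int last_error;
static void set_error(int err) { last_error = err; }

/* Null_Cipher with the proposed bounds check added at the top. */
static SECStatus
null_cipher_checked(void *ctx, unsigned char *output, int *outputLen,
                    int maxOutputLen, const unsigned char *input, int inputLen)
{
    (void)ctx;
    if (inputLen > maxOutputLen) {
        *outputLen = 0;  /* matches what PK11_CipherOp does on failure */
        set_error(SEC_ERROR_OUTPUT_LEN_PLACEHOLDER);
        return SECFailure;
    }
    *outputLen = inputLen;
    memcpy(output, input, inputLen);
    return SECSuccess;
}
```

The design point is that Null_Cipher then fails the same way the real ciphers do, rather than silently truncating.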
Comment 5 • 11 years ago (Assignee)
(In reply to Brian Smith (:briansmith, was :bsmith; NEEDINFO me if you want a response) from comment #3)
> Comment on attachment 826186 [details] [diff] [review]
> Cap to maxOutputLen in Null_Cipher
>
> Review of attachment 826186 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Ryan, you didn't add a reviewer to the review request. This can happen if
> the reviewer you chose isn't part of the security group. In that case, CC
> them to the bug first.
>
> Anyway, I suggest that this check be made a PORT_Assert in Null_Cipher, and
> instead we do the real check in ssl3_HandleRecord.
>
> I believe the problem is actually here in ssl3_HandleRecord:
>
> if (isTLS && cText->buf->len - ivLen > (MAX_FRAGMENT_LENGTH + 2048)) {
> ssl_ReleaseSpecReadLock(ss);
> SSL3_SendAlert(ss, alert_fatal, record_overflow);
> PORT_SetError(SSL_ERROR_RX_RECORD_TOO_LONG);
> return SECFailure;
> }
>
> ...
>
> /* decrypt from cText buf to plaintext. */
> rv = crSpec->decode(
> crSpec->decodeContext, plaintext->buf, (int *)&plaintext->len,
> plaintext->space, cText->buf->buf + ivLen, cText->buf->len - ivLen);
>
> ...
>
> It seems to me we need to be checking (cText->buf->len - ivLen >
> plaintext->space) instead, or in addition to, the above check.
>
> If you think adding the check to Null_Cipher only is the best approach, then
> please add a comment about why only Null_Cipher needs to do the check.
Because Null_Cipher should match the API contract of the other ciphers. That's the root of the bug. I think the additional checking could be seen as orthogonal, and could certainly bolster against additional concerns, but PK11_CipherOp and the individual operations are all required not to write more than maxOutputLen.
That said, Wan-Teh is correct that instead of copying the max, I should be mimicking the full error behaviour (SEC_ERROR_OUTPUT_LEN) instead.
Comment 6 • 11 years ago (Assignee)
Properly match the API contract.
Attachment #826186 - Attachment is obsolete: true
Attachment #826216 - Flags: review?(wtc)
Comment 7 • 11 years ago
Comment on attachment 826216 [details] [diff] [review]
patch2.diff
r=wtc.
Attachment #826216 - Flags: review?(wtc) → review+
Comment 8 • 11 years ago (Assignee)
Comment on attachment 826216 [details] [diff] [review]
patch2.diff
Checked in as https://hg.mozilla.org/projects/nss/rev/e79a09364b5e
Attachment #826216 - Flags: checked-in+
Comment 9 • 11 years ago (Assignee)
dveditz: We'll presumably need a CVE for this. Remotely exploitable ability for an attacker to copy arbitrary memory = sad.
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Updated • 11 years ago
Keywords: sec-critical
Whiteboard: [needs CVE]
Comment 10 • 11 years ago
Comment on attachment 826216 [details] [diff] [review]
patch2.diff
Review of attachment 826216 [details] [diff] [review]:
-----------------------------------------------------------------
::: lib/ssl/ssl3con.c
@@ +819,5 @@
> Null_Cipher(void *ctx, unsigned char *output, int *outputLen, int maxOutputLen,
> const unsigned char *input, int inputLen)
> {
> + if (inputLen > maxOutputLen) {
> + *outputLen = 0; /* Match PK11_CipherOp in setting outputLen */
I have to admit I was surprised that PK11_CipherOp
sets *outputLen to 0 for any PKCS #11 error:
http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/pk11wrap/pk11cxt.c&rev=1.10&mark=705-710,716-722#705
For CKR_BUFFER_TOO_SMALL, I think PK11_CipherOp should
set *outputLen to the required output buffer size.
Comment 11 • 11 years ago
Comment on attachment 826216 [details] [diff] [review]
patch2.diff
Review of attachment 826216 [details] [diff] [review]:
-----------------------------------------------------------------
Just setting sec-approval? so that people who handle approvals are aware that this already got checked into the NSS public repository, and so it is publicly disclosed. I think we're supposed to use the sec-approval process for NSS, too, right?
Attachment #826216 - Flags: sec-approval?
Comment 12 • 11 years ago
We should be using sec-approval for everything, but using it as a notification is kind of pointless unless you're telling us that someone screwed up, checked it in, and won't do it again.
The whole point of sec-approval is that we gate when security work gets checked in to lessen the chances of a zero day and to lower exposure.
Comment 13 • 11 years ago
Well, it's also "hey, this got checked in; I don't know what flags to set, please make sure it's all OK for other branches." I'd rather have that sort of thing flagged than not if the people involved in the bug aren't sure.
Comment 14 • 11 years ago
Comment on attachment 826216 [details] [diff] [review]
patch2.diff
All right. I'm clearing the flag unless there is something actionable to do with it.
Attachment #826216 - Flags: sec-approval?
Comment 15 • 11 years ago (Assignee)
I checked this in to Moz-NSS because our security team wanted this checked into Chromium's repository so that we could get a release out. Checking it into Chromium would have had the effect of disclosing it, independent of sec-approval.
Comment 16 • 11 years ago
Ryan, is there a way we can coordinate these checkins in the future so we can know ahead of time and possibly massage dates of the checkins, if necessary?
Comment 17 • 11 years ago (Assignee)
Al: That was the path I was advocating, but there was some resistance from our security-team on that coordination. I'll let inferno respond to the coordination effort, which is why I added him explicitly.
Updated • 11 years ago
Alias: CVE-2013-5605
Updated • 11 years ago
Whiteboard: [needs CVE]
Comment 18 • 11 years ago
(In reply to Ryan Sleevi from comment #17)
> Al: That was the path I was advocating, but there was some resistance from
> our security-team on that coordination. I'll let inferno respond to the
> coordination effort, which is why I added him explicitly.
Here is what I suggested in the chromium bug (https://code.google.com/p/chromium/issues/detail?id=314225): commit the patch to Chromium trunk (since we have an upcoming M31 release and it is a sec-critical bug), and we won't declare it in the release notes (or assign CVE) until all the consumers have patched it. Also, it didn't mean we could commit it to Firefox trunk, since we need to follow Mozilla's sec-approval process (which, as I understand it, is to explicitly request approval for committing, not just for merging to a branch).
In the future, we are open to ideas to improve here. For example, we will make sure to give a heads-up on an upcoming Chromium-side commit. We won't be able to hold the commit for too long, since usually our code reviews are public and searchable, and our security releases go out roughly every 2 weeks (one stable release and two stable patches).
Updated • 11 years ago
status-b2g18: --- → affected
status-b2g-v1.1hd: --- → affected
status-b2g-v1.2: --- → affected
status-firefox25: --- → affected
status-firefox26: --- → affected
status-firefox27: --- → affected
status-firefox-esr17: --- → affected
tracking-b2g18: --- → ?
tracking-firefox26: --- → ?
tracking-firefox27: --- → ?
Updated • 11 years ago
status-firefox28: --- → affected
tracking-firefox28: --- → ?
Comment 19 • 11 years ago
(In reply to Abhishek Arya from comment #18)
> and we won't declare it in the release notes (or assign CVE) until all
> the consumers have patched it.
"Assign" or "publish" the CVE? Given the nature of NSS as a shared library this bug should only get a single CVE, it should not have a "firefox one" and a "chrome one". Since NSS is currently hosted by Mozilla we assigned it one from our pool yesterday (CVE-2013-5605) but if you've already given it one we can use that.
Comment 20 • 11 years ago
(In reply to Daniel Veditz [:dveditz] from comment #19)
> (In reply to Abhishek Arya from comment #18)
> > and we won't declare it in the release notes (or assign CVE) until all
> > the consumers have patched it.
>
> "Assign" or "publish" the CVE? Given the nature of NSS as a shared library
> this bug should only get a single CVE, it should not have a "firefox one"
> and a "chrome one". Since NSS is currently hosted by Mozilla we assigned it
> one from our pool yesterday (CVE-2013-5605) but if you've already given it
> one we can use that.
I think our release is going out next week. I will make sure that we don't declare this in the release notes (since I think you won't have time to release the patch by then).
Since we are not declaring this, we have not assigned a CVE yet. So, we will just use CVE-2013-5605 (and I agree we don't need multiple CVEs for this :)).
Comment 21 • 11 years ago
The only way this will go out by next week is if we did an emergency chemspill release to get it out. We're debating that right now.
It *would* be nice if our partners didn't do things in a way that caused us to have to do emergency security releases though (and warned us when they were going to, rather than after the fact).
Comment 22 • 11 years ago
From discussion in email it looks like we will not chemspill for this, and we'll get it into FF26 (releasing on Dec 10th) as well as other branches, including B2G 26. Does this impact ESR24?
status-firefox-esr24: --- → ?
tracking-firefox-esr24: --- → ?
Comment 23 • 11 years ago
Yes, ESR24 is affected. I don't know why we're not planning an interim release to address this.
Comment 24 • 11 years ago
It has been proposed to produce an intermediate NSS release containing several patches, appropriate for ESR24 and 26.
I've filed bug 935959 to track the landing of that NSS version into the various Mozilla branches.
I propose to move approvals over to that bug.
Comment 25 • 11 years ago
Hello Kai, Brian, other developers.
Do you have any ideas on how to ensure that
a) we've addressed the problem, and
b) we haven't broken anything?
Aside from a coordinated effort to use SSL-enabled sites and look for regressions, any other information you might have would be greatly appreciated by QA.
Thanks.
Comment 26 • 11 years ago
I am curious about the conditions under which this bug can be reproduced.
Comment 1 says that this was found by amtinits@chromium.org by fuzzing NSS. Was that fuzzing done by writing a program that calls ssl3_HandleRecord directly with various arguments? Or, was that fuzzing done in a way that simulates a real SSL connection? If the latter, could you be more specific regarding the circumstances in which the bug can be triggered.
Comment 27 • 11 years ago
(In reply to Brian Smith (:briansmith, was :bsmith; Please NEEDINFO? me if you want a response) from comment #26)
> I am curious about the conditions under which this bug can be reproduced.
>
> Comment 1 says that this was found by amtinits@chromium.org by fuzzing NSS.
> Was that fuzzing done by writing a program that calls ssl3_HandleRecord
> directly with various arguments? Or, was that fuzzing done in a way that
> simulates a real SSL connection? If the latter, could you be more specific
> regarding the circumstances in which the bug can be triggered.
The fuzzing was done on real SSL connections between Chromium and a custom SSL server. The bug can be triggered by a server sending a single malformed record in response to a client hello.
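As a rough illustration of what such a server sends, a malformed record is just an ordinary 5-byte SSL3/TLS record header whose declared length overstates what the receiver can hold. This sketch builds such a header; the content type, version bytes, and the sample length are illustrative choices, since the fuzzer's exact traffic is not in this bug:

```c
#include <assert.h>
#include <stdint.h>

/* Build a 5-byte SSL3/TLS record header: content type, 2-byte protocol
 * version, 2-byte big-endian payload length. The length field is entirely
 * sender-controlled, which is the knob the fuzzer turned. */
static void
make_record_header(uint8_t hdr[5], uint8_t content_type,
                   uint8_t ver_major, uint8_t ver_minor, uint16_t declared_len)
{
    hdr[0] = content_type;
    hdr[1] = ver_major;
    hdr[2] = ver_minor;
    hdr[3] = (uint8_t)(declared_len >> 8);
    hdr[4] = (uint8_t)(declared_len & 0xff);
}
```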
Comment 28 • 11 years ago
(In reply to Andrew Tinits from comment #27)
> The fuzzing was done on real SSL connections between Chromium and a custom
> SSL server. The bug can be triggered by a server sending a single malformed
> record in response to a client hello.
Thanks Andrew. I tried to reproduce this yesterday in Firefox, but I was finding that NSS was rejecting the malformed records due to other checks on the record size. Could you share which record size you used in the header of that malformed record? Thanks again.
Comment 29 • 11 years ago
Brian: use a record length of 18434, which is MAX_FRAGMENT_LENGTH + 2048 + 2.
This will evade the gs->remainder > (MAX_FRAGMENT_LENGTH + 2048 + 5) check
in ssl3_GatherData, but is bigger than the MAX_FRAGMENT_LENGTH + 2048 space
allocated by the sslBuffer_Grow(&gs->buf, 4096) call in ssl_InitGather.
sslBuffer_Grow always allocates at least MAX_FRAGMENT_LENGTH + 2048 bytes.
ss->gs.buf is passed to ssl3_HandleRecord as the 'databuf' argument, which
gets assigned to the 'plaintext' local variable.
Do you want the 18434-byte record? It seems that you know how to generate
one. You just need to know the right size.
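The arithmetic here can be spelled out in a small sketch (the constants follow the sources quoted in this comment; the helper names are editorial, not NSS's):

```c
#include <assert.h>

/* 2^14, the TLS maximum plaintext fragment length. */
#define MAX_FRAGMENT_LENGTH 16384

/* ssl3_GatherData only rejects records whose length exceeds this... */
static int gather_limit(void) { return MAX_FRAGMENT_LENGTH + 2048 + 5; }

/* ...while ssl_InitGather's sslBuffer_Grow only guarantees this much
 * plaintext buffer space. */
static int guaranteed_buffer_space(void) { return MAX_FRAGMENT_LENGTH + 2048; }
```

A record of MAX_FRAGMENT_LENGTH + 2048 + 2 = 18434 bytes therefore sits in the 2-byte gap between what the gather check accepts and what the buffer holds.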
Comment 30 • 11 years ago
Brian: I noticed that the record size checks in ssl3_HandleRecord are only
performed if 'isTLS' is true. That seems wrong.
We should also find out if ssl3_GatherData should allow 5 more bytes than
MAX_FRAGMENT_LENGTH + 2048.
Comment 31 • 11 years ago
(In reply to Wan-Teh Chang from comment #30)
> Brian: I noticed that the record size checks in ssl3_HandleRecord are only
> performed if 'isTLS' is true. That seems wrong.
>
> We should also find out if ssl3_GatherData should allow 5 more bytes than
> MAX_FRAGMENT_LENGTH + 2048.
The value 2048 represents the absolute largest possible overhead of a ciphertext record over a plaintext record. For a plaintext record (which is what this bug is about), the maximum allowed size is MAX_FRAGMENT_LENGTH. It is also possible to calculate tighter bounds for encrypted records, but the overhead depends on the symmetric encryption and authentication method used.
I believe that the record length checks were originally restricted to TLS 1.0 because some SSL 3.0 implementations might not obey the limits in the TLS 1.0 spec. However, my memory about this is very hazy.
Comment 32 • 11 years ago
Last one: Matt, did bug 935959 make this bug fixed, or is there other work to be done here?
Flags: needinfo?(mwobensmith)
Comment 33 • 11 years ago
My understanding is that all work was covered in related bug 935959. If someone disagrees, they can add more info.
Flags: needinfo?(mwobensmith)
Comment 34 • 11 years ago
Marking this accordingly then; it went out in the dot release for 25.0.1.
Updated • 11 years ago
status-b2g-v1.3: --- → fixed
Updated • 10 years ago
Group: core-security