Closed Bug 159366 Opened 23 years ago Closed 23 years ago

PSM fails to load client certificate for user from multipart/mixed page

Categories

(Core Graveyard :: Security: UI, defect)

1.0 Branch
x86
All
defect
Not set
major

Tracking

(Not tracked)

VERIFIED DUPLICATE of bug 158500

People

(Reporter: jmdesp, Assigned: darin.moz)

Details

Attachments

(4 files)

Tested with 1.1a+ 20020721, Mozilla 1.0, Netscape 6.2.1. OS: Linux, Windows 2000/NT 4.

We have a PKI application that enables a user to enroll and get a certificate. After the key generation process, the web server returns the new certificate to the user through a multipart/mixed page. The first part is the certificate with a type of application/x-x509-user-cert and the second part is an HTML page (text/html). This technique is needed in order to both send back the certificate for integration into the browser and provide visual feedback to the user that the process is over. This is what is used in all Verisign-based PKIs, and probably by other people too.

What happens is that sometimes the certificate is properly imported, and sometimes it fails. Our investigation has led us to the conclusion that a bogus end of line is added at the end of the certificate when it is returned from a multipart/mixed page (see bug 158500 for the same problem in another context). Most DER parsing products will ignore such a bogus end of line. In fact, when the outer part of the returned content is encoded in BER with indefinite length, Mozilla/Netscape will ignore the bogus end of line and process the certificate properly. But if the outer part of the returned content is of definite length, then Mozilla will choke on the added end of line and refuse to load the certificate.

To solve this problem, Mozilla needs to either:
- solve bug 158500 so that the bogus end of line disappears, or
- modify the certificate parser so that it is more tolerant of such bogus characters added after the valid data.

Our tests have shown that other parsers are much more tolerant than the one in Mozilla, and this will lead to interoperability problems. I will include full data for reproduction of the problem.
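The definite- vs. indefinite-length behaviour described above can be illustrated with a small sketch (Python for brevity; this is an illustrative toy, not Mozilla's actual parser). A definite-length TLV header tells the decoder exactly where the value ends, so a strict implementation notices the extra byte left over after the certificate and can reject the input:

```python
# Toy definite-length TLV reader (illustrative only, not Mozilla's decoder):
# shows why a trailing 0x0A byte is visible to a strict DER parser.

def read_tlv_definite(data: bytes):
    """Parse one definite-length TLV; return (content, leftover bytes)."""
    length = data[1]
    if length & 0x80:  # long form: low 7 bits give the number of length octets
        n = length & 0x7F
        length = int.from_bytes(data[2:2 + n], "big")
        header = 2 + n
    else:              # short form: the length octet is the length itself
        header = 2
    content = data[header:header + length]
    leftover = data[header + length:]
    return content, leftover

# A definite-length SEQUENCE holding a 1-byte INTEGER, plus a bogus newline:
blob = bytes([0x30, 0x03, 0x02, 0x01, 0x05]) + b"\n"
content, leftover = read_tlv_definite(blob)
print(leftover)  # b'\n' -- a strict parser can see and reject the extra byte
```

With indefinite-length BER, by contrast, the decoder reads until the end-of-contents marker and never examines what follows, which is why those certificates happen to import successfully.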
Attachment #92730 - Attachment mime type: application/octet-stream → text/plain
For the certificate that is not correctly imported from the page in attachment 92730 [details], and is not recognised as a certificate in attachment 92732 [details], the NSS tool derdump gives the following header:

C-Sequence (2894)
   Object Identifier (9)
      1 2 840 113549 1 7 2 (PKCS #7 Signed Data)
   C-[0] (2879)
      C-Sequence (indefinite)
         Integer (1)
            01
         C-Set (0)
         C-Sequence (11)
            Object Identifier (9)

For the certificate that is correctly imported from the page in attachment 92731 [details], and recognised as a certificate in attachment 92733 [details], derdump gives the following header:

C-Sequence (indefinite)
   Object Identifier (9)
      1 2 840 113549 1 7 2 (PKCS #7 Signed Data)
   C-[0] (indefinite)
      C-Sequence (indefinite)
         Integer (1)
            01
         C-Set (0)
         C-Sequence (11)
            Object Identifier (9)

If you serve attachment 92730 [details] or attachment 92731 [details] from a web server, but replace the line "Content-type: application/x-x509-user-cert" with "Content-type: application/octet-stream", you can save the certificate locally. If you do that, you will see that the saved copy has a bogus end of line at the end, which confirms that this is the reason why Mozilla does not recognize it in some cases. This bogus end of line is the one that separates the preceding part from the MIME boundary. RFC 2046 explicitly states that the end of line before the boundary delimiter is part of the delimiter and not of the preceding body part. Some MIME implementations don't respect that, though.

> The boundary delimiter MUST occur at the beginning of a line, i.e.,
> following a CRLF, and the initial CRLF is considered to be attached
> to the boundary delimiter line rather than part of the preceding
> part.
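The quoted RFC 2046 rule can be sketched as follows (a toy splitter in Python, with part headers omitted for brevity; this is not the Mozilla implementation). A buggy splitter that treats the delimiter as just "--boundary" leaves the preceding CRLF attached to the certificate, which is exactly the bogus end of line observed here:

```python
# Toy multipart splitter (illustrative only; part headers omitted for brevity).
# Per RFC 2046, the CRLF immediately before "--boundary" belongs to the
# delimiter line, not to the preceding body part.

def extract_first_part(body: bytes, boundary: str) -> bytes:
    b = boundary.encode()
    delim = b"\r\n--" + b          # leading CRLF is part of the delimiter
    start = body.index(b"--" + b + b"\r\n") + len(b) + 4
    end = body.index(delim, start)
    return body[start:end]         # trailing CRLF correctly excluded

body = (b"--BOUND\r\n"
        b"CERTDATA"                # first body part (e.g. the DER certificate)
        b"\r\n--BOUND\r\n"
        b"<html>done</html>"       # second body part (visual feedback page)
        b"\r\n--BOUND--\r\n")
print(extract_first_part(body, "BOUND"))  # b'CERTDATA' -- no \r\n appended
```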
Version: unspecified → 2.3
I believe certificate parsing is done by NSS. Bob, Julien, could the certificate parser be made tolerant against unnecessary additional end-of-lines at the end?
Kai, no, I don't think the certificate parsing could be made flexible in this way: it is the job of the ASN.1 / DER decoder to decode only correct input. Invalid data won't do. This is a problem with the MIME parser in the client. You need to be able to distinguish the body part from the separator, then feed it to the decoder. I am not sure what the correct component for MIME parsing is, but it's neither PSM nor NSS.
The best way to handle this is to strip the trailing 0x0A before it is passed to us. I can give you a code fragment that will pull out only the relevant portion if you want. The decoder is recursive, so, as Julien points out, it would be quite difficult to determine that this is simply data added to the end of the cert and not bogus data embedded somewhere inside the cert itself. You could also get around the problem by shipping the cert as a Base64 blob. In that case the 0x0A won't be in the b64-encoded data, and the raw data will survive server/client manglings.
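The two workarounds suggested above can be sketched like this (Python, illustrative only; this is not the code fragment Bob refers to):

```python
import base64

# Workaround 1: strip trailing newline byte(s) before handing the blob to the
# DER decoder. Caution: a DER value could legitimately end in 0x0A, so a robust
# fix would compute the expected length from the definite-length header instead
# of stripping blindly.
der_with_newline = bytes([0x30, 0x03, 0x02, 0x01, 0x05]) + b"\n"
stripped = der_with_newline.rstrip(b"\r\n")
print(stripped.hex())  # 3003020105 -- the decoder now sees exactly the TLV

# Workaround 2: ship the certificate as a Base64 blob. A stray newline around
# the encoded text is discarded by the decoder, so the raw bytes survive.
encoded = base64.b64encode(bytes([0x30, 0x03, 0x02, 0x01, 0x05])) + b"\n"
decoded = base64.b64decode(encoded)  # default mode discards non-alphabet bytes
print(decoded == bytes([0x30, 0x03, 0x02, 0x01, 0x05]))  # True
```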
Do you mean by recursive that the DER decoder needs to be able to decode several concatenated DER structures at the top level? The problem with the bogus added end of line should be inside the MailNews/MIME component, but I'm not able to reproduce it in mail messages. Binary data is sent in base64, and text messages do NOT get an added end of line when they are received inside multipart messages. That's strange. ducarroz@netscape.com seems to be the person who receives the bugs for MIME, and he should be the one who takes over this bug if it has to be solved at the MIME end.
Jean-Marc, Yes, the DER decoder decodes from the top-level, and then recurses for each subcomponent. It could be that the MIME parsing is somewhat different in HTTP due to the lack of base64 encoding, so the bug only shows for HTTP.
OK, let's go for fixing this bug at the HTTP level. My feeling now is that the data is being treated as x-mixed-replace, given the code in http://lxr.mozilla.org/seamonkey/source/netwerk/streamconv/converters/nsConvFactories.cpp#263 that treats multipart/mixed content from HTTP sources as x-mixed-replace. So the bogus 0x0A that gets inserted should come from a problem inside nsMultiMixedConv.cpp, and this is owned by Darin Fisher. I can't change the component directly to Browser/Networking, but in any case the impacted component in this scenario is still PSM.
Assignee: ssaux → darin
Status: UNCONFIRMED → NEW
Ever confirmed: true
*** This bug has been marked as a duplicate of 158500 ***
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → DUPLICATE
REOPEN: This is better handled as a dependency on the bug where Darin will fix it. That way, the PSM testers will know to test this feature when it is fixed.
Status: RESOLVED → REOPENED
Depends on: 158500
Resolution: DUPLICATE → ---
*** This bug has been marked as a duplicate of 158500 ***
Status: REOPENED → RESOLVED
Closed: 23 years ago
Resolution: --- → DUPLICATE
No longer depends on: 158500
Verified fixed with 2002081504 under Linux.
Verified.
Status: RESOLVED → VERIFIED
Product: PSM → Core
Version: psm2.3 → 1.0 Branch
Product: Core → Core Graveyard