Closed Bug 441169 Opened 16 years ago Closed 16 years ago

HTML injection into XUL error pages through badcert parameters

Categories

(Core Graveyard :: Security: UI, defect)

Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED
mozilla1.9

People

(Reporter: dveditz, Assigned: johnath)

Details

(Keywords: fixed1.9.1, verified1.9.0.1, Whiteboard: [sg:critical] critical combined with a chrome-loading bug like 441120)

Attachments

(1 file, 2 obsolete files)

Ben Turner found that the changes to the XUL error pages to handle SSL errors work by passing display text as a parameter, and that this text is not sanitized against HTML injection. In fact, the intended behavior of those dialogs _relies_ on it. This makes it trivial to inject arbitrary markup and script.

chrome://global/content/netError.xhtml?e=nssBadCert&u=https%3A//test.kuix.de/&c=UTF-8&d=%3Cspan%20onmouseover%3d%22try{alert(Components.stack)}catch(e){alert(e)}%22%3Etest.kuix.de%20uses%20an%20%3Cb%3Einvalid%3C/b%3E%20security%20certificate.%0A%0AThe%20certificate%20is%20only%20valid%20for%20%3Cscript%3Ealert(123)%3C/script%3E%3Ca%20id%3D%22cert_domain_link%22%20title%3D%22kuix.de%22%20href%3D%22http://www.mozilla.com%22%3Ekuix.de%3C/a%3E%0A%0A(Error%20code%3A%20ssl_error_bad_cert_domain)%0A%3C/span%3E

about:neterror?e=nssBadCert&u=https%3A//test.kuix.de/&c=UTF-8&d=<span%20onmouseover%3d"try{alert(Components.stack)}catch(e){alert(e)}">test.kuix.de%20uses%20an%20<b>invalid</b>%20security%20certificate.%0A%0AThe%20certificate%20is%20only%20valid%20for%20<script>alert(123)</script><a%20id%3D"cert_domain_link"%20title%3D"kuix.de"%20href%3D"http://www.mozilla.com">kuix.de</a>%0A%0A(Error%20code%3A%20ssl_error_bad_cert_domain)%0A</span>

When called via about:neterror any injected script only has normal web privileges, so this is primarily a spoofing vector.

When called as a chrome: URI this allows arbitrary code execution. Normally remote content can't load chrome: URIs directly, so this is a potential danger only when combined with other flaws.

There is currently such a flaw: this can be combined with bug 441120 to create an arbitrary code execution exploit.
Flags: wanted1.8.1.x-
Flags: blocking1.9.1?
Flags: blocking1.9.0.1?
Is there a way to get scripts to run on load? So far I've only gotten event-based exploits to work, like adding a link and luring users to click, or the onmouseover used in comment 0.
Assignee: kaie → johnath
Whiteboard: [sg:moderate] sg:critical combined with a chrome-loading bug
Whiteboard: [sg:moderate] sg:critical combined with a chrome-loading bug → [sg:critical] critical combined with a chrome-loading bug like 441120
Grr.  I was quite deliberate about double checking that neterror didn't have privilege before landing this, but obviously if other bugs give it privilege, all bets are off.  Okay, so how do we fix this?  I see a few options:

1) Back out 402210.  This means a string change (and a loss of functionality, but the string change is the kicker)

2) Trivially, use .textContent instead of .innerHTML - which would neuter the attack, but also make the error pages look stupid, since they'd have plain-text tags.

3) Sanitize the string in netError's javascript.  It's intended to contain a single, specifically identified <a> tag, but everything else can be sanitized out.  Is there a general purpose sanitizer we can use, or do we write something specific here?  If the latter, a two-step sanitize that basically did:

 - s/</&lt;/ and s/>/&gt;/
 - then go back and repair the specific <a id="cert_domain_link" title="localhost.localdomain"> tags; that should work?
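The two-step sanitize sketched above might look something like this (a hypothetical helper, not the patch that eventually landed; the exact anchor pattern is an assumption based on the pipnss.properties format):

```javascript
// Hypothetical sanitizer for option 3: escape everything, then restore
// only the one specifically-identified anchor the description may contain.
function sanitizeDescription(desc) {
  // Step 1: neutralize all markup
  const safe = desc.replace(/</g, "&lt;").replace(/>/g, "&gt;");
  // Step 2: repair only the expected cert_domain_link anchor
  return safe.replace(
    /&lt;a id="cert_domain_link" title="([^"]*)"&gt;([^&]*)&lt;\/a&gt;/,
    '<a id="cert_domain_link" title="$1">$2</a>');
}
```

Anything other than the whitelisted anchor, including injected `<script>` or event-handler attributes, would come out escaped and render as inert plain text.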
(In reply to comment #2)
> 1) Back out 402210.  This means a string change (and a loss of functionality,
> but the string change is the kicker)


Please note bug 439062 where I have proposed a patch to strip away the html tags. 

Right now the patch only strips the tags when reporting in an error dialog. If you want to revert the change from bug 402210, but avoid the string change, we could extend the patch to always strip the tag.
This patch changes the code to build two text nodes and an anchor element explicitly, instead of just handing off to innerHTML.  I've confirmed that it continues to link for the typical cases, but barfs all the attempted attack source in comment 0 as plaintext. Not pretty but also not, I think, exploitable.

I'd like to attach tests, but I'm flying in a few hours and it was either get the process started now, or wait a day or two while travelling before I can get those tests written.  I think the obvious test here is just to use browser-chrome to visit a chrome url like the one in comment 0, and the DOM-walk to ensure none of the elements in the string become real DOM elements.
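In rough outline, the approach is to locate the one expected anchor with a regex and split the description into pieces, each of which becomes its own explicitly created node rather than being fed through innerHTML. A minimal sketch of just the string-splitting half (an assumed shape for illustration; the real code is in the attached patch):

```javascript
// Sketch: find the single expected <a id="cert_domain_link"> anchor and
// split the description into the pieces that become explicit DOM nodes.
function splitDescription(desc) {
  const result = /<a id="cert_domain_link" title="([^"]*)">([^<]*)<\/a>/.exec(desc);
  if (!result)
    return null; // no recognizable anchor: treat the whole string as plain text
  return {
    before: desc.slice(0, result.index),       // leading text node
    title:  result[1],                         // title attribute for the <a>
    text:   result[2],                         // text node inside the <a>
    after:  desc.slice(desc.indexOf("</a>") + "</a>".length) // trailing text node
  };
}
```

Each piece would then be appended via document.createTextNode / document.createElement, so injected markup in the parameter never reaches an HTML parser.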
Attachment #326564 - Flags: superreview?(dveditz)
Attachment #326564 - Flags: review?(cbiesinger)
Flags: blocking1.9.1?
Flags: blocking1.9.1+
Flags: blocking1.9.0.1?
Flags: blocking1.9.0.1+
Attachment #326564 - Flags: review?(cbiesinger) → review?(bzbarsky)
Is the string we're parsing always XHTML-wellformed?  Would it make sense to parse it outside the document then verify that it only has the bits we want (the nodes and the attributes)?  I guess that's even more complex than the proposed patch code...



Comment on attachment 326564 [details] [diff] [review]
build elements manually to prevent overpermissive innerHTML treatment

>Index: docshell/resources/content/netError.xhtml
>+          // Remove sd's children
>+          while(sd.firstChild)
>+            sd.removeChild(sd.firstChild);

  sd.textContent = "";

is faster.  Not that it matters much here, all things considered.

>+          anchorEl.appendChild(document.createTextNode(result[1]));

So the title and text are always the same thing?

sd.appendChild(document.createTextNode(desc.slice(desc.indexOf("</a>") + 4)));

Please replace |4| with |"</a>".length| to make it clear what's going on.

r=bzbarsky with the nits picked.
Attachment #326564 - Flags: review?(bzbarsky) → review+
Attached patch With bz's nits and a test (obsolete) — Splinter Review
(In reply to comment #6)
> >+          // Remove sd's children
> >+          while(sd.firstChild)
> >+            sd.removeChild(sd.firstChild);
> 
>   sd.textContent = "";
> 
> is faster.  Not that it matters much here, all things considered.

No, but might as well.  Done.

> >+          anchorEl.appendChild(document.createTextNode(result[1]));
> 
> So the title and text are always the same thing?

They are, by design.  See:

http://mxr.mozilla.org/mozilla/source/security/manager/locales/en-US/chrome/pipnss/pipnss.properties#353

> sd.appendChild(document.createTextNode(desc.slice(desc.indexOf("</a>") + 4)));
> 
> Please replace |4| with |"</a>".length| to make it clear what's going on.

Done.

> r=bzbarsky with the nits picked.

Carried forward on to the new patch.

I've included a browser-chrome test that just loads a chrome url like the one in comment 0, and then checks to ensure that its markup isn't parsed into the DOM. I confirm that with my netError patch, the test passes, and without it, it fails.
Attachment #326564 - Attachment is obsolete: true
Attachment #327406 - Flags: superreview?(dveditz)
Attachment #327406 - Flags: review+
Attachment #327406 - Flags: approval1.9.0.1?
Attachment #326564 - Flags: superreview?(dveditz)
Comment on attachment 327406 [details] [diff] [review]
With bz's nits and a test

>+          // Treating description as innerHTML makes it possible to inject
>+          // tags/scripting.  This page is unprivileged, but combined with
>+          // a chrome-loading bug, this could be injection with privilege.
>+          // Instead we'll have to build up the elements manually. Bug 441169

This comment is a bit of a red flag, could you vagueify a bit and leave the bug reference for more detail? In fact, do you need the comment at all -- doesn't cvs blame cover it well enough? "innerHTML" should be minimized and protected everywhere it's used in our code.

If you need the comment it might be sufficient to say
  // sanitize tags to prevent HTML injection; see bug 441169

calling attention to the chrome escalation variant seems unnecessary.

>+          // Finally, append text for anything after the closing </a>
>+          sd.appendChild(document.createTextNode(desc.slice(desc.indexOf("</a>") + "</a>".length)));

I can't think of a way to abuse this, but if someone had a "<a></a>" early in the text before the matching link there'd be some amount of duplicated text (but safe plain text). Could use desc.indexOf("</a>", result.index) if that's something we cared about.

I don't care if trying to hack the page results in extra ugliness, your call.
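The edge case being described can be shown with hypothetical values: a decoy "</a>" ahead of the real anchor makes the naive slice duplicate the intervening text, while anchoring the search at result.index does not.

```javascript
// Hypothetical illustration of the decoy-</a> edge case discussed above.
const desc = '<a></a>decoy <a id="cert_domain_link" title="kuix.de">kuix.de</a> tail';

// Naive slice: cuts at the decoy's "</a>", duplicating everything after it
// (safe plain text, but ugly).
const naiveTail = desc.slice(desc.indexOf("</a>") + "</a>".length);

// Anchored slice: start the "</a>" search at the real anchor's match position.
const result = /<a id="cert_domain_link"[^>]*>[^<]*<\/a>/.exec(desc);
const fixedTail = desc.slice(desc.indexOf("</a>", result.index) + "</a>".length);
```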

>Index: docshell/test/browser/browser_bug441169.js
>===================================================================
>+var chromeURL = "chrome://global/content/netError.xhtml?e=nssBadCert&u=https%3A//test.kuix.de/&c=UTF-8&d=This%20sentence%20should%20not%20be%20parsed%20to%20include%20a%20%3Cspan%20id=%22test_span%22%3Enamed%3C/span%3E%20span%20tag.%0A%0AThe%20certificate%20is%20only%20valid%20for%20%3Ca%20id=%22cert_domain_link%22%20title=%22kuix.de%22%3Ekuix.de%3C/a%3E%0A%0A(Error%20code%3A%20ssl_error_bad_cert_domain)";

Could you use the about:neterror form for the test? chrome: is not required to show that the fix does what it's supposed to.

sr=dveditz with the comment and test nits.
Attachment #327406 - Flags: superreview?(dveditz) → superreview+
(In reply to comment #8)
> If you need the comment it might be sufficient to say
>   // sanitize tags to prevent HTML injection; see bug 441169

Done (kept the bug reference, dropped the mention of injection)

> calling attention to the chrome escalation variant seems unnecessary.

Agreed, gone.

> >+          // Finally, append text for anything after the closing </a>
> >+          sd.appendChild(document.createTextNode(desc.slice(desc.indexOf("</a>") + "</a>".length)));
> 
> I can't think of a way to abuse this, but if someone had a "<a></a>" early in
> the text before the matching link there'd be some amount of duplicated text
> (but safe plain text). Could use desc.indexOf("</a>", result.index) if that's
> something we cared about.
> 
> I don't care if trying to hack the page results in extra ugliness, your call.

Yeah, I think the ugliness of attacker pages is a non-issue here, and every time I try to think of another way to exploit this (e.g. by using it to somehow control the message contents) I remember that an attacker can put any text they want in there, or for that matter make a page that looks like our error page, but has totally arbitrary content (and behaviour!)  As long as they can't insert script here which risks running *with privilege*, I'm happy.

> Could you use the about:neterror form for the test? chrome: is not required to
> show that the fix does what it's supposed to.
> 
> sr=dveditz with the comment and test nits.

Done and confirmed that the new test still passes with the patch, fails without it.

Carrying forward r and sr, and tagging for approval.  I might not be able to land this in time, and the tree has been misbehaving all day, so I'll flag it checkin-needed once I have approval.
Attachment #327406 - Attachment is obsolete: true
Attachment #327498 - Flags: superreview+
Attachment #327498 - Flags: review+
Attachment #327498 - Flags: approval1.9.0.1?
Attachment #327406 - Flags: approval1.9.0.1?
Keywords: checkin-needed
Comment on attachment 327498 [details] [diff] [review]
With dveditz' nits

a=beltzner
Attachment #327498 - Flags: approval1.9.0.1? → approval1.9.0.1+
Marking checkin-needed as per.
cc'ing reed so he can check this in.
Checking in docshell/resources/content/netError.xhtml;
/cvsroot/mozilla/docshell/resources/content/netError.xhtml,v  <--  netError.xhtml
new revision: 1.35; previous revision: 1.34
done
Checking in docshell/test/browser/Makefile.in;
/cvsroot/mozilla/docshell/test/browser/Makefile.in,v  <--  Makefile.in
new revision: 1.7; previous revision: 1.6
done
RCS file: /cvsroot/mozilla/docshell/test/browser/browser_bug441169.js,v
done
Checking in docshell/test/browser/browser_bug441169.js;
/cvsroot/mozilla/docshell/test/browser/browser_bug441169.js,v  <--  browser_bug441169.js
initial revision: 1.1
done
Status: NEW → RESOLVED
Closed: 16 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → mozilla1.9
Flags: in-testsuite+
Status: RESOLVED → REOPENED
Keywords: fixed1.9.0.1
Resolution: FIXED → ---
Keywords: checkin-needed
Hmm, any testcases or STR to verify this for the 3.0.1 release?
(In reply to comment #14)
> Hmm, any testcases or STR to verify this for the 3.0.1 release?

There's an automated test in the patch.  If you want to verify manually, load the URLs in comment 0.  If the onmouseover works, the bug is not fixed.  If, instead, you get all the html rendered out as plaintext and no mouseover, the bug is no longer exploitable.
The URLs in comment 0 are too long to fit in the Run field; it cuts them off. Is it possible to shorten them without losing the testcase? Or is there another URL that is a bit shorter?

BTW, thanks Johnathan for the response! Since the automated tests in the patch passed, this bug is probably verified but I just want to be sure with a manual testcase. 
Flags: blocking1.8.0.15-
Pushed in 15866:ed55fa85231d.
Status: REOPENED → RESOLVED
Closed: 16 years ago16 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Group: core-security
Product: Core → Core Graveyard