Closed Bug 418996 Opened 17 years ago Closed 17 years ago

[FIX]Unsigned documents can inject script into signed JARs

Categories

(Core :: Security: CAPS, defect, P1)


Tracking


RESOLVED FIXED

People

(Reporter: mozilla, Assigned: bzbarsky)

References

(Blocks 1 open bug)

Details

(Keywords: verified1.8.1.15, Whiteboard: [sg:high])

Attachments

(3 files, 2 obsolete files)

A malicious web page can embed an iframe to a document inside a signed JAR file and inject script into that document. The injected script runs with privileges of the JAR file, even though it was not written by the signer of the JAR file.

Because netscape.security.PrivilegeManager.enablePrivilege uses the common name of the principal who signed the JAR file in its security UI, a malicious web page can impersonate anyone who signs a JAR file. The below web page can impersonate Google, Yahoo!, eBay, and LinkedIn:

http://crypto.stanford.edu/~collinj/research/signed-scripts/

It is not hard for an attacker to find such a JAR file, because signed JAR files are commonly used to distribute extensions; addons.mozilla.org contains JAR files signed by various well-known brands.

If a user ever checks "Remember this decision," any web site can now obtain that principal's capabilities without prompting the user, provided that the attacker can obtain a JAR file signed by that principal.
Flags: blocking1.9?
Flags: blocking1.8.1.13?
Whiteboard: [sg:high]
Another issue to watch out for is <script src="some_relative_path.js"> tags inside signed JAR files. An attacker can serve up a different JAR file on subsequent loads of the same JAR file URL, sometimes displaying the victim's signed JAR, and other times displaying the attacker's JAR. This tricks the victim's signed JAR file into embedding the attacker's script using a relative path.

Firefox extensions are distributed as JAR files with the XPI extension. Here's an example attack against a signed Stanford Firefox extension:

http://crypto.stanford.edu/~collinj/research/signed-scripts/relative-path.html

The only thing that the signed victim extension needs to do for the attack to work is to embed <script src="some_relative_path.js"> in one of its XUL files. By sending "Cache-Control: no-cache" headers and replying with a different JAR file, the attacker can substitute its own file for some_relative_path.js.

Relative path attacks on signed XPI files are mitigated by two factors:

1) XUL files are often JAR'd a second time in an unsigned JAR file in the chrome/ directory, which seems to prevent the browser from treating the XUL document as signed.
2) Extension authors seem to prefer absolute chrome:// URLs to relative paths, even for files in the same directory.

Because of these mitigating factors, unlike the arbitrary script injection attack described above, not all XPI files are vulnerable to relative path attacks. However, it would be preferable to find a solution that fixes both issues, particularly since there may be other types of signed JAR files out there.
In order to address this vulnerability while retaining enablePrivilege functionality, the main change to behavior that would be required is that a signed JAR should not be able to script anyone but itself, and no one should be able to script the signed JAR except itself. This might break some existing web applications that use signed JARs, but the signature is probably not providing much value for these applications since the JAR contents are intermingling with unsigned code.

To prevent the relative path attack, the behavior for relative paths inside signed JARs needs to be handled differently. Relative paths inside a signed JAR should only resolve to resources inside that JAR. They should certainly not resolve to unsigned JARs or JARs signed by a different principal, and they should probably also not resolve to JARs signed by the same principal but with different contents, as this might allow an attacker to cause a JAR to behave unexpectedly by serving a mixture of files from two different signed JARs. This restriction is unlikely to be noticeable in non-attack scenarios.
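As a sketch of that containment rule (illustrative Python, not Gecko's actual URL handling; the helper name is hypothetical), relative references would rejoin only within the entry portion of the jar: URL:

```python
from urllib.parse import urljoin

def resolve_in_signed_jar(base_jar_url, relative):
    """Resolve a relative reference so it can only land inside the same
    JAR. Hypothetical helper: jar: URLs separate the archive from the
    entry with "!/", and we rejoin only within the entry part."""
    archive, sep, entry = base_jar_url.partition("!/")
    if not sep:
        raise ValueError("not a jar: entry URL")
    # urljoin clamps excess ".." segments at the root (RFC 3986), so the
    # resolved entry cannot escape back out of the archive.
    resolved = urljoin("/" + entry, relative).lstrip("/")
    return f"{archive}!/{resolved}"
```

Even a path like "../../evil.js" stays pinned to the same archive, so it can never reach a different (or differently signed) JAR.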

One way to accomplish this would be to create a new scheme "sjar:" that distinguishes protected signed JAR files from ordinary unsigned JAR files. The sjar: protocol is designed to compartmentalize the JAR file into a separate origin based on its contents, rather than the server it was downloaded from. If a browser requests a file using the jar: scheme and ends up with a signed JAR, it can do an automatic redirect to the sjar: protocol. Since the JAR is already in the cache, it should be relatively quick to load it again using the new scheme. 

To distinguish the attacker's signed JAR from the target site's signed JAR, and to prevent relative path attacks, the sjar: URL should contain a hash of the JAR's contents, perhaps using a format similar to the data: URLs described in <http://www.ietf.org/rfc/rfc2397.txt>:

sjar:application/java-archive;sha1=7697b80327957f91146b80b29276dbb0226a58db,http://www.example.com/foo.jar!/index.html
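For illustration, such a URL could be produced mechanically from the JAR bytes. This Python sketch (the helper name is made up) mirrors the format above:

```python
import hashlib

def make_sjar_url(jar_bytes, jar_url, entry_path,
                  mime="application/java-archive"):
    """Hypothetical helper: embed a SHA-1 digest of the JAR contents in
    the URL, so the resulting origin is keyed to the bytes rather than
    just the server the archive came from."""
    digest = hashlib.sha1(jar_bytes).hexdigest()
    return f"sjar:{mime};sha1={digest},{jar_url}!/{entry_path}"
```

Two JARs with different contents thus get different sjar: origins even when served from the same URL, which is exactly what defeats the bait-and-switch serving attack.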

Thinking about this some more, it seems that sjar is actually taking two separate, ordered actions: first it checks the hash on the JAR (this code is already basically implemented in attachment 278795 [details] to bug 377245), then it decodes it (using the standard JAR decoder). Thus, we can break it into a nested scheme syntax:

jar:hash:application/java-archive;sha1=7697b80327957f91146b80b29276dbb0226a58db,http://www.example.com/foo.jar!/index.html

The hash: scheme would download the contents of the specified resource and check them against the provided hash before returning the data to the jar: protocol handler.

As an aside, the hash: scheme would probably have many other useful applications outside the scope of what is needed to resolve this vulnerability. See <http://wiki.whatwg.org/wiki/Link_Hashes>. Because a sequence of bytes can have different behavior depending on the Content-Type it is served with, the mime type is specified in the URL to disambiguate the behavior. See also bug 369814. It should be noted that although this URL is long, it is generated automatically from the ordinary jar: URL, so no one is expected to type it manually. 

Using the jar:hash: technique, the sequence of calling enablePrivilege would thus be:

1) User visits jar:http://www.example.com/foo.jar!/index.html
2) Browser downloads foo.jar, checks Content-Type. If it's not a JAR type, abort.
3) Browser checks JAR signature. If unsigned, display index.html using today's unsigned JAR behavior.
4) If foo.jar is signed, browser computes the hash of foo.jar.
5) Browser issues an automatic redirect to jar:hash:application/java-archive;sha1=7697b80327957f91146b80b29276dbb0226a58db,http://www.example.com/foo.jar!/index.html
6) The hash: protocol handler downloads foo.jar from www.example.com (which is in the cache, so it gets it back immediately).
7) The hash: protocol handler checks the hash of the JAR file to ensure that it matches the provided hash. If not, abort. It also checks to make sure the Content-Type matches the mime type specified in the hash: URL.
8) The hash: protocol handler returns the data to the jar: protocol handler, which renders index.html in a security context that is unique to this JAR's hash. The JAR is not scriptable by www.example.com and it cannot script www.example.com. Relative paths are resolved using the jar:hash: protocol, and thus cannot return files that are not from this JAR. For purposes of XHR, the JAR's Access-Control-Origin is hash:application/java-archive;sha1=7697b80327957f91146b80b29276dbb0226a58db,http://www.example.com.
9) Because the JAR file has the hash: scheme, it is allowed to make enablePrivilege requests using the principal that signed the JAR.
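The hash-verification performed by the hash: handler can be sketched in Python; the URL grammar here is my assumption, inferred from the example URL above:

```python
import hashlib
import re

def verify_hash_url(url, fetched_bytes, content_type):
    """Check fetched data against a hash:-style URL. A sketch of the
    hash: handler described above: it aborts unless both the digest and
    the Content-Type match what the URL declares. The URL grammar used
    here is an assumption based on the example in the comment."""
    m = re.match(r"hash:(?P<mime>[^;]+);sha1=(?P<digest>[0-9a-f]{40}),", url)
    if not m:
        raise ValueError("not a hash: URL")
    if content_type != m.group("mime"):
        return False  # Content-Type mismatch: abort the load
    return hashlib.sha1(fetched_bytes).hexdigest() == m.group("digest")
```

If the server swaps in a different JAR on the re-fetch, the digest check fails and the load is aborted rather than silently using the substituted contents.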
(In reply to comment #2)
> Thinking about this some more, it seems that sjar is actually taking two
> separate, ordered actions: first it checks the hash on the JAR (this code is
> already basically implemented in attachment 278795 [details] to bug 377245), then it
> decodes it (using the standard JAR decoder). 

I meant to refer to attachment 278794 [details] [diff] [review] here. Attachment 278795 [details] is the spec.
Flags: blocking1.8.1.13? → wanted1.8.1.x+
Flags: tracking1.9? → blocking1.9?
This looks like a blocker. Marking P1 since fixing it is likely going to break a few websites so we should do that sooner rather than later.
Flags: blocking1.9? → blocking1.9+
Priority: -- → P1
> The main change to behavior that would be required is that a
> signed JAR should not be able to script anyone but itself

That's how it's supposed to work.  Or more precisely, a signed jar is only allowed to script an object that comes from a jar signed by the same entity and hosted on the same server.

Unfortunately, XOW is just using a function it shouldn't be using.  See bug 370113.  So we get the injection attack.  Once that's fixed, we have a long-standing bug in Equals() which we need to fix.  With both of those fixed, we're more or less good.  Except for the fact that the CheckPropertyAccessImpl checks ignore certs as well...  This exploit is still stopped by the belt-and-braces check on node insertion, luckily.

On the bright side, it looks like we can now remove this API altogether, which will make me very very happy.  And fix this bug.  Or at least this bug as filed in comment 0...
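The intended rule described here (same certificate and same server, symmetrically) can be sketched with a toy principal model. The (host, cert) tuples are a simplification assumed for illustration; Gecko's real nsIPrincipal comparison involves much more state:

```python
def signed_jar_can_script(subject, target):
    """Toy model of the rule described above: code from a signed JAR may
    only script objects from a JAR signed by the same certificate and
    hosted on the same server. Principals are (host, cert) tuples, with
    cert=None meaning unsigned -- an illustrative simplification."""
    subj_host, subj_cert = subject
    tgt_host, tgt_cert = target
    if subj_cert is None and tgt_cert is None:
        return subj_host == tgt_host  # ordinary same-origin check
    # Once a signature is involved, both cert and host must match,
    # symmetrically: unsigned code cannot reach into a signed JAR either.
    return subj_cert == tgt_cert and subj_host == tgt_host
```

The injection attack in comment 0 corresponds to the unsigned/signed case returning true when it should not.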
Blocks: JEP/caps
Attached patch Proposed fixSplinter Review
Several things going on here:

* Fixes XOW as described to take certs into account
* Fixes nsPrincipal::Equals to work correctly if |this| has no cert but aOther
  has one.
* Changes various places in secman that ignored certs to pay attention.
* Changes CheckFunctionAccess to take document.domain and certs into account.
  The fact that it wasn't is very odd to me.  If there was a good reason for it
  to not take document.domain into account, we'll need to factor this code
  somewhat differently (unless it should also not take certs into account, of
  course).  Please review this part carefully!  As far as I can tell it's just
  rarely-touched code that got changed without changing behavior when
  principals got merged, even though the old behavior was wrong.
* Removed the API that tripped up XOW.  Hallelujah!
Assignee: dveditz → bzbarsky
Status: NEW → ASSIGNED
Attachment #309033 - Flags: superreview?
Attachment #309033 - Flags: review?(mrbkap)
Attachment #309033 - Flags: review?(dveditz)
Blocks: 370113
Attachment #309033 - Flags: superreview? → superreview?(jst)
Summary: Unsigned documents can inject script into signed JARs → [FIX]Unsigned documents can inject script into signed JARs
To be clear, this would prevent http://a.com from accessing jar:http://a.com/foo.jar!/bar if foo.jar is signed?  What about the <script src="some_relative_path.js"> issue described in comment #1?
> To be clear, this would prevent http://a.com from accessing
> jar:http://a.com/foo.jar!/bar if foo.jar is signed?

That's correct.

> What about the <script src="some_relative_path.js"> issue described in comment
> #1?

I hadn't gotten as far as sorting out what's going on there yet.  It might want to be a separate bug...
Though the patch stops the testcase in comment 1 from working because forcehttps.js can't touch Window.attack, for what it's worth.
I should also note that for HTML scripts we downgrade the principal of the caller (remove the signature) unless all <script>s loaded by the page have that same signature.  Perhaps we simply need to do that for XUL as well.  Definitely sounds like separate bug fodder.  Do you want to file, or should I?
Interesting.  We can file a separate bug.  Downgrading the principal doesn't work in all cases because the document could have JavaScript pointers to objects in another document before it gets downgraded.  Do you know of examples of sites that actually use these features?  It would be easier to think about complete solutions if we knew the context of how this is really used.
> Downgrading the principal doesn't work in all cases because the document could
> have JavaScript pointers to objects in another document before it gets
> downgraded.

Yes, but it wouldn't be able to do anything with them after downgrading due to cross-origin wrappers, right?  At least I would hope not!

> Do you know of examples of sites that actually use these features? 

Almost entirely intranet web apps, to be honest.
> Yes, but it wouldn't be able to do anything with them after downgrading due to
> cross-origin wrappers, right?  At least I would hope not!

I think what makes sense here is to go forward with this patch to fix the current issue.  Once that happens, we can experiment and see how the cross-origin wrappers cope with a document whose principal changes.
That sounds about like the plan I was pursuing, yes.   ;)

Have I mentioned how grateful I am that you guys are destruct-testing this stuff?
Thanks.  We learned a lot by experimenting with this feature.  The bug here is similar to bugs in a number of current and proposed browser security features that try to make sub-origin distinctions.  For example, in Bug 403590, the mixed content indicator can say that one page in an origin has mixed content but that another is uncorrupted by a network attacker, which doesn't make sense in light of same-origin scripting.

Are there other features you'd like us to experiment with?
Flags: in-testsuite?
This patch sounds awesome, but how confident are you that this won't cause bad regressions? I'd definitely say we want beta testing at the least on this so P1 is the right thing here.
Adam, if I think of something I'll definitely let you guys know.

Jonas, that's a really good question.  Most of the changes only affect communication between code in a signed jar and code outside it, so they should be reasonably safe.  In Gecko 1.8, code outside a signed jar can touch data inside one that comes from the same origin, but code inside a signed jar can only touch data from a jar signed by the same cert (regardless of origin).  Before this patch, on trunk, code outside a signed jar can touch data inside a signed jar that comes from the same origin, but code inside a signed jar can only touch data from a jar signed by the same cert _and_ coming from the same origin.  This patch makes the check symmetric and consistently applied, as it should have been all along...  Sites will break if they have an unsigned page and a signed jar on the same server with the unsigned page reaching into the jar.  That's a pretty bad security hole, imo, so I think we want to fix it no matter what.

The one substantive change that would affect non-jar pages is the change to CheckFunctionAccess to take into account document.domain.  This will only affect cases in which the principal of the function object and the principal of the target object are not the same nsIPrincipal pointer.  There are two callers of this function:

1) XBL constructors/destructors.  In this case, the cloned function object's
   parent is the target object, so they have the same principal.  Patch
   shouldn't change behavior here.
2) nsJSContext::CallEventHandler.  In this case, the function is the handler
   function, the target object is the event target.  The function doesn't
   actually have any principals compiled in, so what matters is whether the
   principal of mContext (and if the target is a content node of its owner
   document's script global's JSContext) can access the target.  The latter
   always can, unless the document has been unloaded and a new inner window
   loaded in its place.  The former is the context the event handler was
   initially compiled on, which in all the web cases should be the same thing.
   So the behavior here should also be unaffected...  But I'm not quite as
   certain of that.

I agree that we'd want beta testing here.  Bring on the reviews?  ;)
And of course it would be good to have mochitests confirming all of the above.  Sadly, I'm still in the situation of having minimal time for all of this until May.  :(  Help very much wanted on the tests.
Flags: blocking1.8.1.14?
Comment on attachment 309033 [details] [diff] [review]
Proposed fix

r=dveditz
Attachment #309033 - Flags: review?(dveditz) → review+
Attachment #309033 - Flags: review?(mrbkap) → review+
Attachment #309033 - Flags: superreview?(jst) → superreview+
Checked in.
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
I tried this on the latest nightly.

The vulnerability described in comment #0 is fixed. The vulnerability described in comment #1 is not fixed.

I updated my test case at http://crypto.stanford.edu/~collinj/research/signed-scripts/relative-path.html to address the issue described in comment #9.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Yeah, I never claimed that that patch fixed comment 1.  We need a separate bug on that...  thank you for creating a testcase for it!
Status: REOPENED → RESOLVED
Closed: 17 years ago17 years ago
Resolution: --- → FIXED
Filed bug 424188.
Filed bug 424488 on having a decent way to test this in a good controlled manner.
Depends on: 424488
Flags: blocking1.8.1.15? → blocking1.8.1.15+
Depends on: 434544
Attached patch 1.8 branch version. (obsolete) — Splinter Review
This was not a trivial merge, so this could use a serious look.
Attachment #323775 - Flags: superreview?(bzbarsky)
Attachment #323775 - Flags: review?(bzbarsky)
Whiteboard: [sg:high] → [sg:high][needs r/sr bz]
Comment on attachment 323775 [details] [diff] [review]
1.8 branch version.

There are some else-after-returns here, but I think that's fine.  Less code movement is good...

>@@ -1603,21 +1619,24 @@ nsScriptSecurityManager::CheckFunctionAc
>-    // Note that CheckSameOriginPrincipalInternal already does an equality
>-    // comparison on subject and object, so no need for us to do it.
>-    return CheckSameOriginPrincipalInternal(subject, object, PR_TRUE);
>+    PRBool subsumes;
>+    rv = subject->Subsumes(object, &subsumes);
>+    if (NS_SUCCEEDED(rv) && !subsumes) {
>+        rv = NS_ERROR_DOM_PROP_ACCESS_DENIED;
>+    }
>+    return rv;

Are we sure we need this change to fix aspects of this bug?  I mostly made it as part of my trunk API cleanup, but it feels like the answer is "yes", since the new check is stricter than the old one. This is the one part here that worries me for branch...

r+sr=bzbarsky, I guess.
Attachment #323775 - Flags: superreview?(bzbarsky)
Attachment #323775 - Flags: superreview+
Attachment #323775 - Flags: review?(bzbarsky)
Attachment #323775 - Flags: review+
Whiteboard: [sg:high][needs r/sr bz] → [sg:high][needs approval (branch patch change?)]
Comment on attachment 323775 [details] [diff] [review]
1.8 branch version.

Boris, that change is *not* needed to fix this testcase. I'll leave that out when landing.
Attachment #323775 - Flags: approval1.8.0.15?
Comment on attachment 323775 [details] [diff] [review]
1.8 branch version.

Nominating for the other branch.
Attachment #323775 - Flags: approval1.8.0.15? → approval1.8.1.15?
Could someone add a clean repro case for QA to test this once it goes in?
Never mind, I see the site is a repro case now.
Comment on attachment 323775 [details] [diff] [review]
1.8 branch version.

Approved for 1.8.1.15, a=dveditz for release-drivers
Attachment #323775 - Flags: approval1.8.1.15? → approval1.8.1.15+
Whiteboard: [sg:high][needs approval (branch patch change?)] → [sg:high]
Fix checked in on the branch.
Keywords: fixed1.8.1.15
Backed out of branch, maybe causing orange. I'll be watching it over the weekend and I'll try to reland it before Monday.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Ok, this one definitely causes the tp2 test to crash somehow. I can't reproduce on my laptop... maybe a ppc-only issue?!
Boris, since Johnny is out for the weekend (I believe), any idea why the branch patch would cause Tp2 to crash?
Whiteboard: [sg:high] → [sg:high][branch patch backed out due to tp2 regression, needs new patch?]
Nope.  No clue.
>                 // Compare ports
>                 PRInt32 targetPort;
>                 rv = targetBaseURI->GetPort(&targetPort);
>                 PRInt32 sourcePort;
>                 if (NS_SUCCEEDED(rv))
>                     rv = sourceBaseURI->GetPort(&sourcePort);
>-                *result = NS_SUCCEEDED(rv) && targetPort == sourcePort;
>+                result = NS_SUCCEEDED(rv) && targetPort == sourcePort;
>                 // If the port comparison failed, see if either URL has a
>                 // port of -1. If so, replace -1 with the default port
>                 // for that scheme.
>-                if (NS_SUCCEEDED(rv) && !*result &&
>+                if (NS_SUCCEEDED(rv) && !result &&
>                     (sourcePort == -1 || targetPort == -1))
>                 {
...
>                 }
>             }
>         }
>     }
>-    return NS_OK;
>+
>+    return PR_FALSE;
> }

If the port comparison succeeded, this code should have returned true, but it returned false instead.

Because of this, Tp2 test code can't access the subframe's content, I guess.

Also, this caused bug 437758.
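The port-comparison logic the quoted diff got wrong can be sketched like this; the default-port table and helper name are illustrative, not Gecko's actual code:

```python
DEFAULT_PORTS = {"http": 80, "https": 443, "ftp": 21}

def ports_match(scheme, source_port, target_port):
    """Sketch of the intended check: if the direct comparison fails and
    either port is -1 (unspecified), substitute the scheme's default
    port and compare again. Crucially, a successful comparison must
    report True -- which is what the backed-out branch patch broke."""
    if source_port == target_port:
        return True
    default = DEFAULT_PORTS.get(scheme)
    if default is None:
        return False
    if source_port == -1:
        source_port = default
    if target_port == -1:
        target_port = default
    return source_port == target_port
```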
Whiteboard: [sg:high][branch patch backed out due to tp2 regression, needs new patch?] → [sg:high][branch patch backed out due to tp2 regression, needs new patch, caused 437758]
Depends on: 437758
Depends on: 428995
This shouldn't have been reopened, since it's still FIXED in 1.9.
Status: REOPENED → RESOLVED
Closed: 17 years ago17 years ago
Resolution: --- → FIXED
Thanks moz_bug_r_a4@yahoo.com, that was the exact problem with this merge! New patch coming up.
Attached patch Updated 1.8 version. (obsolete) — Splinter Review
This is the same as above with the incorrect return from nsScriptSecurityManager::SecurityCompareURIs() fixed. The only difference between the two patches should be the second and third change from the end of nsScriptSecurityManager::SecurityCompareURIs().
Attachment #323775 - Attachment is obsolete: true
Attachment #324366 - Attachment is patch: true
Attachment #324366 - Attachment mime type: application/octet-stream → text/plain
Attachment #324366 - Flags: superreview?(dveditz)
Attachment #324366 - Flags: review?(dveditz)
Same as above with the change to CheckFunctionAccess() reverted, per comment #26.
Attachment #324366 - Attachment is obsolete: true
Attachment #324370 - Flags: superreview?(dveditz)
Attachment #324370 - Flags: review?(dveditz)
Attachment #324366 - Flags: superreview?(dveditz)
Attachment #324366 - Flags: review?(dveditz)
Whiteboard: [sg:high][branch patch backed out due to tp2 regression, needs new patch, caused 437758] → [sg:high][needs r/sr dveditz]
Comment on attachment 324370 [details] [diff] [review]
Update 1.8 version including bz's suggestion in comment #26.

r/sr=dveditz
approved for 1.8.1.15, a=dveditz for release-drivers
Attachment #324370 - Flags: superreview?(dveditz)
Attachment #324370 - Flags: superreview+
Attachment #324370 - Flags: review?(dveditz)
Attachment #324370 - Flags: review+
Attachment #324370 - Flags: approval1.8.1.15+
Whiteboard: [sg:high][needs r/sr dveditz] → [sg:high]
Updated fix landed on the branch.
Keywords: fixed1.8.1.15
(In reply to comment #38)
> Created an attachment (id=324244) [details]
> testcase - frames[0].document.body
> 

I verified that this still passed with the Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.15pre) Gecko/2008061004 BonEcho/2.0.0.15pre build. The original test case no longer injects script, either. This is all good.
Depends on: 445890
Attachment #324370 - Flags: approval1.8.0.next+
Comment on attachment 324370 [details] [diff] [review]
Update 1.8 version including bz's suggestion in comment #26.

a=asac for 1.8.0 branch
This bug has been fixed for a few years and is probably no longer security-sensitive.
Indeed not.
Group: core-security
We re-broke this in Firefox 4 -- bug 657267
Group: core-security
Can you CC me on 657267? Thanks
Done.
Group: core-security