Closed Bug 528661 (xssfilter) Opened 15 years ago Closed 6 years ago

Heuristics to block reflected XSS via X-XSS-Protection HTTP header

Categories

(Core :: Security, enhancement)


Tracking


RESOLVED WONTFIX

People

(Reporter: jruderman, Unassigned)

References

(Blocks 1 open bug)

Details

(4 keywords, Whiteboard: [sg:want P2])

Attachments

(1 file, 10 obsolete files)

      No description provided.
Whiteboard: [sg:want P2]
Here's a preprint of a WWW2010 paper on the subject, by Bates / Barth / Jackson.

http://www.collinjackson.com/research/xssauditor.pdf
While we wait for browsers and websites to adopt CSP, a protection against reflected XSS attacks could be a useful addition to Mozilla. In fact, it could be implemented as a default CSP for websites which do not provide a CSP.

I can understand that people are skeptical of these filters, especially after the bad publicity IE8 got because its filter introduced new XSS vulnerabilities (http://blog.c22.cc/2010/04/15/blackhat-europe-universal-xss-via-ie8s-xss-filters-2/). However, I invite you to read Jackson's paper carefully: their approach is very different from IE8's, and it has been implemented in WebKit with low overhead.

To sum up, it is:
1) Immune to browser quirks (complete interposition on *all* scripts)
2) Immune to sanitization bugs (no new XSS vulnerabilities can be introduced)
3) More precise than IE8, and faster than filters based on HTML parsing.
(Disclaimer: I am not affiliated with the authors ;-)

If such a filter were accepted, I would be happy to take on the task.
Last time I checked WebKit, the way they check for tainted strings in the script leaves a couple of holes in some contexts. Firefox could do better using a (probably more expensive) algorithm such as the one described here (http://seclab.cs.sunysb.edu/seclab/pubs/ndss09.pdf). Interposing on script execution instead of on HTTP traffic (as this paper does) means smaller strings to match, which probably do not cause a big slowdown with this algorithm. However, even blatantly copying the WebKit approach would provide a great deal of defense.
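To make that baseline concrete, here is a minimal, self-contained sketch of the WebKit-style check (illustrative only; it is not the patch, and the function name is made up). A script about to run is treated as a reflected injection if its full text occurs in one of the already-decoded request parameters; since only dispatched scripts are examined, the strings being matched stay small.

  #include <string>
  #include <vector>

  // Naive reflected-XSS check: the script text must appear verbatim in one of
  // the request parameters (URL query or POST data), which are assumed to be
  // URL-decoded and entity-decoded already. A real filter would need
  // approximate matching to survive server-side rewriting of the input.
  bool IsReflectedScript(const std::string& scriptText,
                         const std::vector<std::string>& decodedParams)
  {
      if (scriptText.empty())
          return false;
      for (size_t i = 0; i < decodedParams.size(); ++i) {
          if (decodedParams[i].find(scriptText) != std::string::npos)
              return true;
      }
      return false;
  }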
(In reply to comment #3)
> I'm hesitant about but not completely opposed to any XSS filters.  If we do a
> filter, we have to be incredibly careful:
> 
> http://blog.0x0lab.org/2010/06/bypassing-safari-5-xss-auditor/
> http://sla.ckers.org/forum/read.php?13,31377

These seem to be bugs; XSSAuditor can in theory defend against these quirks. They may be fixed by now... or maybe they are due to their matching policies, which seem to be very primitive.
Anyway, these bypass the filter, but they do not introduce new vulnerabilities.

> http://lcamtuf.blogspot.com/2010/06/browser-side-xss-detectors-of-doom.html
> http://hackademix.net/2009/11/21/ies-xss-filter-creates-xss-vulnerabilities/

These introduce more vulnerabilities, but they are only possible with the IE8 approach: sanitization at the HTTP level is hard and can lead to this kind of bug. These bugs are *not* possible if you interpose inside the browser, before dispatching the script to the JS engine (unfortunately, WebKit has a cleaner interface for this; the entry points are more scattered in Mozilla).
Sanitization in IE8 involves changing the HTML so that the parser does not detect a script (hard, dangerous). Sanitization in XSSAuditor simply involves returning NS_OK without dispatching the script (easy, safe).
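As a rough sketch of that shape (toy types and names only; nothing here is a Gecko interface): the decision point sits where a script is about to be handed to the JS engine, and a veto is just an early return that reports success, so the response markup is never rewritten.

  #include <iostream>
  #include <string>

  // Toy dispatch-time hook. The markup has already been parsed normally;
  // "sanitizing" a blocked script means not dispatching it, nothing else.
  bool MaybeDispatchScript(bool filterPermits, const std::string& scriptText)
  {
      if (!filterPermits) {
          std::cerr << "XSS filter blocked inline script\n";  // report only
          return true;   // behave as if the load succeeded (the NS_OK above)
      }
      // ... hand scriptText to the JS engine here ...
      return true;
  }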
Attached patch Preliminary patch for XSS filter (obsolete) — Splinter Review
Attaching a preliminary version of the XSS filter. It does not compile on Windows because of some warnings about Unicode characters. All tests are green on Linux and Mac OS X. The patch represents work in progress; here are the main TODOs:
1) Parsing of parameters in POST requests. URL-encoded POSTs are easy (see the sketch after this list), but multipart/form-data POSTs require a MIME parser.
2) HTML entity decoding. Currently, the functionality is not exposed by the parser and should not be duplicated.
3) Use the JS engine to tokenize the script. Parsing and tokenization are interdependent, so I will have to use the parser and navigate the AST.
4) Support block mode, which stops the whole page from loading further in case of an XSS violation.
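For TODO 1, the urlencoded case really is simple; here is a minimal sketch (illustrative, not the patch's parser) that splits a body like "a=1&b=2" into name/value pairs. Percent-decoding and '+' handling are left out, and multipart/form-data would still need a real MIME parser.

  #include <string>
  #include <utility>
  #include <vector>

  typedef std::vector<std::pair<std::string, std::string> > ParamList;

  // Split an application/x-www-form-urlencoded body into raw name/value pairs.
  ParamList ParseUrlencodedBody(const std::string& body)
  {
      ParamList params;
      if (body.empty())
          return params;
      size_t start = 0;
      while (start <= body.size()) {
          size_t amp = body.find('&', start);
          if (amp == std::string::npos)
              amp = body.size();
          std::string pair = body.substr(start, amp - start);
          size_t eq = pair.find('=');
          if (eq == std::string::npos)
              params.push_back(std::make_pair(pair, std::string()));
          else
              params.push_back(std::make_pair(pair.substr(0, eq),
                                              pair.substr(eq + 1)));
          start = amp + 1;
      }
      return params;
  }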
Attachment #547179 - Flags: review?(mrbkap)
Attached patch Simplified XSS patch (obsolete) — Splinter Review
Removed detection of partial injections for safety considerations and used a preexisting jQuery file in the src tree.
Attachment #547179 - Attachment is obsolete: true
Attachment #549432 - Flags: review?(mrbkap)
Attachment #547179 - Flags: review?(mrbkap)
It does not break the use case of writing inside the markup area, because only the URL is matched against scripts (I'll hopefully extend this to POST data before this patch lands). It does, however, break the app if the HTML code to be evaluated is provided through the URL and that code contains a script. This is the intended behaviour, though, as it is an XSS vulnerability!

example:
http://software.hixie.ch/utilities/js/live-dom-viewer/?%3Chtml%3E%3Cbody%3E%3Cscript%3Ealert%28document.location%29%3C/script%3E%3C/body%3E%3C/html%3E

The author should probably fix it (I'll notify him right away). Assuming it's your app: if you are sure that the origin where the app runs does not hold any sensitive data, and you actually want to allow untrusted parties to run scripts on your domain, I'd say the burden of disabling the XSS filter (there's a header for that) falls on you :-) Note that Chrome's XSS filter also stops the script from running, because that's what XSS filters currently do: they don't stop XSS attacks as such, they prevent injection of untrusted JavaScript code, no matter how benign it looks.
There's no XSS vulnerability on that page. The whole point is for people to be able to run arbitrary scripts in that domain.
I see. Then I suggest you add the X-XSS-Protection header to your page to prevent XSS filters from messing with your client side logic. Chrome already prevents scripts from running, Firefox will (hopefully) do it soon, and I believe IE does not stop this kind of injection merely for technical reasons. Unfortunately XSS filters cannot distinguish between XSS capabilities and XSS vulnerabilities...

In general, all web applications willingly allowing injection of untrusted JavaScript code will have to disable the filter through an HTTP header. Hopefully most of them have already noticed that they are incompatible with Chrome and IE.
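For reference, the relevant values of that header (a de facto syntax defined by the IE8 and WebKit implementations, not by this bug) look roughly like this; the parenthetical notes are mine and are not part of the header:

  X-XSS-Protection: 0                  (opt out: disable the filter for this response)
  X-XSS-Protection: 1                  (filter: neutralize the reflected script, keep rendering the page)
  X-XSS-Protection: 1; mode=block      (block mode: stop loading the page when an injection is detected)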
Assignee: nobody → r.pelizzi
Attachment #549432 - Attachment is obsolete: true
Attachment #553555 - Flags: review?(mrbkap)
Attachment #549432 - Flags: review?(mrbkap)
The only outstanding issue (aside from the reviewer's comments) is now decoding HTML entities. Currently, I am stuffing the content I want to decode into a title element, calling SetInnerHTML and GetTextContent to get the decoded version out, but this triggers an assertion:

###!!! ASSERTION: Want to fire DOMNodeRemoved event, but it's not safe: 'aChild->IsNodeOfType(nsINode::eCONTENT) && static_cast<nsIContent*>(aChild)-> IsInNativeAnonymousSubtree() || IsSafeToRunScript() || sDOMNodeRemovedSuppressCount', file /Users/riccardo/Mozilla/src/content/base/src/nsContentUtils.cpp, line 3319
Attachment #553555 - Attachment is obsolete: true
Attachment #553845 - Flags: review?(mrbkap)
Attachment #553555 - Flags: review?(mrbkap)
Attached patch Reflected XSS filter (obsolete) — Splinter Review
Added support for XSS in POST parameters.
Attachment #553845 - Attachment is obsolete: true
Attachment #554257 - Flags: review?(mrbkap)
Attachment #553845 - Flags: review?(mrbkap)
Depends on: 682058
Attached patch Reflected XSS filter (obsolete) — Splinter Review
Cleaned up the code a bit. It still does not compile on Windows due to some Unicode issues; I'm setting up a build environment and will address this shortly.
Attachment #554257 - Attachment is obsolete: true
Attachment #567345 - Flags: review?(mrbkap)
Attachment #554257 - Flags: review?(mrbkap)
Comment on attachment 567345 [details] [diff] [review]
Reflected XSS filter

># HG changeset patch
># User Riccardo Pelizzi <r.pelizzi@gmail.com>
># Parent 3b58a9df4c8cf6d95af64d2a64b4f10c339b0d2c
>Bug 528661 - Reflected XSS Filter for Firefox
>
>diff --git a/browser/base/content/browser.js b/browser/base/content/browser.js
>--- a/browser/base/content/browser.js
>+++ b/browser/base/content/browser.js

>+    const anchorID = "addons-notification-icon";
>+    var action;
>+
>+    var notificationID = aTopic;
>+    // Make notifications persist a minimum of 30 seconds
>+    var options = {
>+      timeout: Date.now() + 30000
>+    };

All these variables are unused.

>+    var nb = gBrowser.getNotificationBox();
>+	  const priority = nb.PRIORITY_WARNING_MEDIUM;
>+
>+    var msg = aSubject.QueryInterface(Ci.nsISupportsCString).data;
>+    var details = aData;
>+
>+    // if it is a block mode event, do not display the warning
>+    if (aData.indexOf(";;block") != -1)
>+      return;
>+
>+	  var buttons = [{
>+	                   label: 'View Script',
>+	                   accessKey: 'V',
>+	                   popup: null,
>+	                   callback: function () {
>+                       alert(details);
>+                     }
>+	                 }];
>+	  
>+	  nb.appendNotification(msg, 'popup-blocked',
>+	                        'chrome://browser/skin/Info.png',
>+	                        priority, buttons);

You have a couple of tabs in there that should be spaces.

>+  Services.obs.addObserver(gXSSObserver, "xss-on-violate-policy", false);

You never remove this observer.
OS: Mac OS X → All
Hardware: x86 → All
Riccardo, are you still interested in working on this?
Comment on attachment 567345 [details] [diff] [review]
Reflected XSS filter

s/PRBool/bool/g, s/{PR,JS}_FALSE/false/g, s/{PR,JS}_TRUE/true/g.

>--- a/caps/src/nsScriptSecurityManager.cpp
>+++ b/caps/src/nsScriptSecurityManager.cpp
>+XSSFilterPermitsJSAction(JSContext *cx, JSString *str,
>+                         XSSJSAction action)
>+{
>+    

Trailing whitespace. Also elsewhere.

>+    NS_ASSERTION(ssm, "Failed to get security manager service");
>+    if (!ssm)
>+        return JS_FALSE;

Either assert or handle failure, not both. Also elsewhere.

>+    size_t len = 0;
>+    const jschar *jschrs = JS_GetStringCharsAndLength(cx, str, &len);
>+    NS_ASSERTION(jschrs, "Can this be null?");

Yes.

>+    nsAutoString nsStr(jschrs);

Use nsDependentJSString.

>+    return xss->PermitsJSAction(nsStr, action) ? JS_TRUE : JS_FALSE;

return xss->PermitsJSAction(nsStr, action);

>--- /dev/null
>+++ b/content/base/public/nsIXSSUtils.idl
>+ * The Initial Developer of the Original Code is
>+ * Mozilla Corporation

"the Mozilla Foundation."

>+ *
>+ * Contributor(s):
>+ *  Benedict Hsieh <bhsieh@mozilla.com>

Oh?

Both also elsewhere.

>--- a/content/base/src/nsDocument.cpp
>+++ b/content/base/src/nsDocument.cpp
>+nsresult nsDocument::InitXSS()
>+{
>+  if (!nsXSSFilter::sXSSEnabled)
>+    return NS_OK;

Brace all ifs.

>+  if (NS_SUCCEEDED(ssm->IsSystemPrincipal(NodePrincipal(), &system)) &&
>+      system)
>+    // TODO: protection for privileged documents? how would attackers
>+    // provide untrusted input to it?
>+    return NS_OK;
>+
>+

One line will do.

>+  // since there is a principal and the pref is enabled, create the object.
>+  nsRefPtr<nsXSSFilter> xss = new nsXSSFilter(this);
>+  if (!xss) {
>+    NS_WARNING("Out of memory for XSSFilter object?");
>+    return NS_OK;
>+  }

Can't happen, so don't handle.

>--- a/content/base/src/nsGkAtomList.h
>+++ b/content/base/src/nsGkAtomList.h
>+GK_ATOM(headerXSS, "x-xss-protection")

http://tools.ietf.org/html/draft-saintandre-xdash-considered-harmful-01

>--- /dev/null
>+++ b/content/base/src/nsXSSFilter.cpp
>+PRBool skipWhiteSpace(const nsACString &str, PRUint32& pos,

PRUint32& parameters are frowned upon, and functions start with a capital letter.

>+	    if (fromHttpEquivMeta) {
>+        while (pos != len && str[pos] <= ' ')

Tabs, and this looks wrong.

>+nsresult nsXSSFilter::ScanRequestData() {
>+  // no need to skip spaces before the begging of the string, firefox
"beginning", and firefox doesn't do anything relevant

>+  if (xssHeaderValue[pos++] == '1'
>+      && skipWhiteSpace(xssHeaderValue, pos, false)

&& goes at the end of the line.

>+PRBool nsXSSFilter::PermitsInlineScript(const nsAString &aScript) {

PRBool
nsXSSFilter::PermitsInlineScript(const nsAString& aScript)
{

(3 lines, & on the correct side. Also elsewhere.)

>+PRBool nsXSSFilter::PermitsExternalScript(nsIURI *aURI) {
>+  if (nsXSSUtils::HasAttack(GetParams())) {
>+    nsCAutoString spec;
>+    aURI->GetSpec(spec);
>+    NotifyViolation("External Script", NS_ConvertUTF8toUTF16(spec));
>+    cache.Put(domain, PR_FALSE);
>+    return IsReportOnly();
>+  } else {

No else after a return. Also elsewhere.

>+ParameterArray& nsXSSFilter::GetParams() {
>+    if (method.Equals(NS_LITERAL_CSTRING("POST"))) {

EqualsLiteral

>+      char* buf;
>+      PRUint32 len;
>+      nsresult rv;
>+      rv = uploadStream->Available(&len);
>+      buf = (char*)malloc(len+1);

PRUint32 len;
nsresult rv = uploadStream->Available(&len);
char* buf = static_cast<char*>(moz_xmalloc(len + 1));

>+class nsXSSNotifier: public nsRunnable {

{ on its own line.

>+  NS_IMETHOD Run() {
>+    nsCOMPtr<nsIObserverService> observerService =
>+      mozilla::services::GetObserverService();

No mozilla:: prefix.

>--- /dev/null
>+++ b/content/base/src/nsXSSFilter.h
>+#ifndef nsXSSFilter_h___

No trailing underscores.

>+#include "jsapi.h"

This isn't necessary.

>--- /dev/null
>+++ b/content/base/src/nsXSSUtils.cpp
>+
>+using namespace std;

No.

>+#define min(a,b) ((a)<(b)?(a):(b))
>+#define max(a,b) ((a)>=(b)?(a):(b))

No. NS_MIN/NS_MAX.

>+nsresult nsXSSUtils::DecodeHTMLEntities(const nsAString &aString,
>+  nsIContent* title = NS_NewHTMLTitleElement(titleInfo.forget());
>+  if (!title) {
>+    return NS_OK;
>+  }
>+  nsCOMPtr<nsIDOMNSHTMLElement> domNSTitle(do_QueryInterface(title));

This interface was removed.

>+  domTitle->GetTextContent(result);

You should be able to get this from the nsIContent.


>+#define DEFAULT_COST 20
>+
>+#define IS_ASCII(u)       ((u) < 0x80)

Pretty much everything from here on should go.
>+/*** Macros and utils ***/

And all of this as well.

>--- a/dom/base/nsJSTimeoutHandler.cpp
>+++ b/dom/base/nsJSTimeoutHandler.cpp
>+      if (xss) {
>+        //xss settimeout
>+        size_t len = 0;
>+        const jschar *jschrs = JS_GetStringCharsAndLength(cx, expr, &len);
>+        nsAutoString nsStr(jschrs);

nsDependentJSString.

>--- a/dom/src/jsurl/nsJSProtocolHandler.cpp
>+++ b/dom/src/jsurl/nsJSProtocolHandler.cpp
>+    if (xss) {
>+        nsAutoString arg = NS_ConvertUTF8toUTF16(mScript);

NS_ConvertUTF8toUTF16 arg(mScript);

This needs to be split up, and there's no way it can land without a spec.
(In reply to Dão Gottwald [:dao] from comment #16)

> >diff --git a/browser/base/content/browser.js b/browser/base/content/browser.js
> >--- a/browser/base/content/browser.js
> >+++ b/browser/base/content/browser.js

Actually I'm not even sure this part should end up in the final patch. I wrote this code mostly for debugging purposes, but I think Sid wanted something less conspicuous (e.g. a console message). Do you want me to clean this up and enable it with a pref, or should we only log to console?

(In reply to Brian Smith (:bsmith) from comment #17)
> Riccardo, are you still interested in working on this?

Yes, I'll take care of this. I will address the rest of the comments in the next few days.
The patch addresses most of the comments, though I have questions and objections (see below). I will address the remaining changes (PRBool -> bool and nsIDOMNSHTMLElement) as soon as I rebase the patch onto a recent green changeset.

> 
> >+    size_t len = 0;
> >+    const jschar *jschrs = JS_GetStringCharsAndLength(cx, str, &len);
> >+    NS_ASSERTION(jschrs, "Can this be null?");
> 
> Yes.

When can this happen? I am assuming that if the string is null the JS action will either fail or do nothing, so I am now allowing it.


> 
> >--- a/content/base/src/nsGkAtomList.h
> >+++ b/content/base/src/nsGkAtomList.h
> >+GK_ATOM(headerXSS, "x-xss-protection")
> 
> http://tools.ietf.org/html/draft-saintandre-xdash-considered-harmful-01

Unfortunately the header name is not up to me: it is a de facto standard from IE8. WebKit also supports this header, and many web applications (e.g. Google) serve it.

> 
> >--- /dev/null
> >+++ b/content/base/src/nsXSSFilter.cpp
> >+PRBool skipWhiteSpace(const nsACString &str, PRUint32& pos,
> 
> PRUint32& parameters are frowned upon, and functions start with a capital
> letter.

Can you elaborate? Since I need to return both a PRUint32 and a bool, the alternative is to pass a bool reference and return a PRUint32. Why is this preferable?

> 
> >+nsresult nsXSSFilter::ScanRequestData() {
> >+  // no need to skip spaces before the begging of the string, firefox
> "beginning", and firefox doesn't do anything relevant

Are you sure? This is contrary to my experience. I have a test case that supplies the header value "  1".
Unless the bug is in the sjs file that serves the header, the test should fail, because the first action on the header value is to check for a '1' without skipping any spaces.

> 
> >+#include "jsapi.h"
> 
> This isn't necessary.

Actually it is, at least given the current filter interface. XSSJSAction needs to be defined both in JS code and in the XSS filter. Since the JS engine cannot depend on the filter, the best I could do was to define the enum in jsapi.h. It is true, however, that I am not actively using this parameter (which identifies the specific JavaScript capability being checked, e.g. eval vs setTimeout vs setInterval vs Function) at the moment, so I could remove it altogether until the filter logic actually needs it. Is it a big deal to keep this information in jsapi.h?


> 
> 
> >+#define DEFAULT_COST 20
> >+
> >+#define IS_ASCII(u)       ((u) < 0x80)
> 
> Pretty much everything from here on should go.

I could move it to another file, but I can't just remove it. It's the code for the approximate substring matching algorithm.
Originally, I implemented this in a separate XPCOM component, but somebody (perhaps Sid or mrbkap) suggested I avoid creating XPCOM components unless necessary.
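For readers without the patch in front of them, the flavor of that code is approximate substring search: decide whether one string occurs inside another within a small edit distance. Here is a minimal, self-contained sketch of that idea, a textbook Sellers-style dynamic program; it is illustrative only and is not the patch's actual algorithm or cost model.

  #include <algorithm>
  #include <string>
  #include <vector>

  // Returns true if `needle` occurs somewhere in `haystack` with at most
  // `maxEdits` single-character insertions, deletions or substitutions
  // (Sellers' algorithm: edit-distance DP where a match may start at any
  // position in the haystack).
  bool ApproximatelyContains(const std::string& needle,
                             const std::string& haystack,
                             size_t maxEdits)
  {
      if (needle.empty())
          return true;
      std::vector<size_t> col(needle.size() + 1);
      for (size_t i = 0; i <= needle.size(); ++i)
          col[i] = i;                            // cost of matching against nothing
      for (size_t j = 0; j < haystack.size(); ++j) {
          size_t prevDiag = col[0];
          col[0] = 0;                            // a match may start at any position
          for (size_t i = 1; i <= needle.size(); ++i) {
              size_t prev = col[i];
              size_t substCost = (needle[i - 1] == haystack[j]) ? 0 : 1;
              col[i] = std::min(std::min(col[i] + 1,       // skip the haystack char
                                         col[i - 1] + 1),  // skip the needle char
                                prevDiag + substCost);     // match or substitute
              prevDiag = prev;
          }
          if (col[needle.size()] <= maxEdits)
              return true;                       // a close-enough match ends here
      }
      return false;
  }

In a filter like this, one would typically run such a check between a decoded request parameter and the text of a script about to execute, with maxEdits scaled to the length of the shorter string.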


> >+/*** Macros and utils ***/
> 
> And all of this as well.

Same here. They are tests. They can't live in the test directory because tests are not linked directly to libxul and thus there's no way to call nsXSSUtils functions from there unless I create an XPCOM interface for nsXSSUtils, which is overkill. Mochitests are provided to test the filter, but these are too high-level to test the nsXSSUtils code appropriately.

> 
> This needs to be split up, and there's no way it can land without a spec.

You mean splitting the patch into smaller patches? How do you propose to split it? Unfortunately, the only logical separation I can think of is filing a separate patch for the approximate substring matching algorithm. Everything else really belongs together, and separating it just because it's too long to read might not ultimately help reviewers.

I have inserted some preliminary information in the public and intranet wiki (more details in the intranet wiki), but obviously I can come up with a detailed spec as we iron out the patch.
https://intranet.mozilla.org/XSS_Filter
https://wiki.mozilla.org/Security/Features/XSS_Filter
Attachment #567345 - Attachment is obsolete: true
Attachment #583120 - Flags: review?(mrbkap)
Attachment #567345 - Flags: review?(mrbkap)
Please don't put information on the intranet unless it's MoCo-confidential.
I don't believe there was anything confidential. Unfortunately, this was done during my summer internship and I no longer have access to the intranet wiki. I'll have somebody send me the content and merge whatever is relevant into the public wiki.
Alias: xssfilter
Rebased to a more recent revision.
Attachment #583120 - Attachment is obsolete: true
Attachment #591286 - Flags: review?(jst)
Attachment #583120 - Flags: review?(mrbkap)
Comment on attachment 591286 [details] [diff] [review]
Patch to add an XSS filter to Firefox

Here's some feedback on this patch. I have yet to look at the actual filter implementation or the tests, but I did read through the rest of this patch fairly closely. On a general note, it seems there's a lot of overlap in where we call into CSP and this new XSS filter; would it make sense to combine the two and have the new combined class call into both the CSP code (if a policy is present) and the XSS filter code (if a filter is present), rather than burdening every call site with knowing which of the two to call? Let me know if you and/or Sid etc. have thought about that part of this yet.

Here's a bunch of feedback on the surrounding hookup code, some of which is gated on the general question of whether the CSP and XSS filter code can be combined to some extent.

- In caps/idl/nsIPrincipal.idl:

+    /**
+     * XSS Filter associated with the principal. It is not an xpcom
+     * object and it is not refcounted.
+     */
+    [noscript,notxpcom] attribute nsXSSFilter XSSFilter;

The comment there doesn't appear to be correct.

- In nsNullPrincipal::GetXSSFilter() and nsNullPrincipal::SetXSSFilter():

+  // filter on a null principal makes no sense

Maybe say: "Null principals are never loaded from anywhere, and thus can never have an XSS filter."?

- In nsPrincipal::nsPrincipal():

+    mXSS(nsnull),

mXSS is an nsRefPtr, no need to initialize it explicitly.

- In nsPrincipal::SetXSSFilter():

+  // If the filter was already set, it should not be destroyed!  Instead, it should
+  // get set anew when a new principal is created.

Maybe say that an XSS filter can never be changed, it can only be set once and it remains for the lifetime of the principal?

- In nsScriptSecurityManager::XSSFilterPermitsJSAction()

+    if (str == NULL) {
+        NS_WARNING("XSS: str is null");
+        return true;
+    }

Do this check earlier to avoid extra work in this case (when can this happen anyway?). And test !str instead of str == NULL.

+    nsDependentJSString nsStr;

Name this ns_str or somesuch to make the variable name look less like a class name.

+    if (!nsStr.init(cx, str)) {
+        NS_WARNING("Failed to get nsDependentJSString");

Failed to initialize instead of get?

- In nsScriptSecurityManager::Init():

     static JSSecurityCallbacks securityCallbacks = {
         CheckObjectAccess,
         NULL,
         NULL,
-        ContentSecurityPolicyPermitsJSAction
+        ContentSecurityPolicyPermitsJSAction,
+        XSSFilterPermitsJSAction

Seems we could combine this new callback with the CSP one. The CSP one is called for functions and eval, which is the same places you want to be called, and you already handle the timeout case outside of the engine anyways. We should probably rename this callback to something that's not specific to CSP, and we might need to change the signature of stuff in the engine to pass along a bit more info, but if we can save us yet another callback here, that would be fantastic!

- In nsDocument::StartDocumentLoad():
 
   nsresult rv = InitCSP();
   NS_ENSURE_SUCCESS(rv, rv);
 
+  rv = InitXSS();
+  NS_ENSURE_SUCCESS(rv, rv);
+
   return NS_OK;

Just return rv here? No real point in checking rv here.

- In nsresult nsDocument::InitXSS():

Call this method InitXSSFilter(), and since this always returns NS_OK, change this to a void method.

- In nsDocument::SetBaseURI():
 
+  // xss check

XSS filter check?

+  nsRefPtr<nsXSSFilter> xss;
+  nsresult rv = NodePrincipal()->GetXSSFilter(getter_AddRefs(xss));
+  NS_ENSURE_SUCCESS(rv, rv);
+  // only check if origin changed
+  if (xss && !xss->PermitsBaseElement(mDocumentBaseURI, aURI)) {
+    PR_LOG(gXssPRLog, PR_LOG_DEBUG, ("CSP blocked XSS <base> attack"));

CSP didn't block this, fix the log message.

- In content/base/src/nsObjectLoadingContent.cpp:
 
 #ifdef PR_LOGGING
 static PRLogModuleInfo* gObjectLog = PR_NewLogModule("objlc");
+// TODO: what is the difference between declaring it here or in the
+// constructor? just lazy initialization?
+static PRLogModuleInfo* gXssPRLog = PR_NewLogModule("XSS");

The difference is when this code runs (library load time vs when this code is invoked). Either way is fine here.

- In nsObjectLoadingContent::LoadObject():

+      if (!xss->PermitsExternalObject(aURI)) {
+        PR_LOG(gXssPRLog, PR_LOG_DEBUG, ("XSSFilter blocked event handler."));

Fix the log message.

+        // TODO: why are we not doing this?
+        //HandleBeingBlockedByContentPolicy(rv, nsIContentPolicy::REJECT_SERVER);

Content policy != XSS filter, so you don't want to call HandleBeingBlockedByContentPolicy(). I do think you want to call Fallback(false) here though.

- In nsScriptLoader::ShouldLoadScript():
 
+    if (!xss->PermitsExternalScript(aURI)) {
+      PR_LOG(gXssPRLog, PR_LOG_DEBUG, ("XSSFilter blocked external script."));
+      return NS_ERROR_FAILURE;

Should you return NS_ERROR_XSS_BLOCK here?

- In nsScriptLoader::ProcessScriptElement():

+    if (!xss->PermitsInlineScript(scriptText)) {
+      PR_LOG(gXssPRLog, PR_LOG_DEBUG, ("XSSFilter blocked inline script."));
+      return NS_ERROR_FAILURE;

Should you return NS_ERROR_XSS_BLOCK here?

- In nsEventListenerManager::AddEventListener():

   ls->mHandlerIsString = PR_FALSE;
+  ls->mXSSHandlerWasString = PR_FALSE;
+  ls->mXSSChecked = PR_FALSE;
+  ls->mXSSResult = PR_TRUE; // easier to catch bugs

Use true/false, not PR_TRUE/PR_FALSE any more.
 
- In nsEventListenerManager::SetJSEventListener():

+    ls->mXSSHandlerWasString |= !aHandler;

XXXjst: Maybe change that to:

	if (!ls->mXSSHandlerWasString && !aHandler) {
	  ls->mXSSHandlerWasString = true;
	}

to make it easier to read?

- In nsEventListenerManager::AddScriptEventLi...:

+    // NOTE: if it's onload, we need to check now because it does not
+    // go through handleevent later. why?

XXXjst: Check with Olli Pettay on this.

+    nsAutoString attr;
+    aName->ToString(attr);
+    if (attr.EqualsLiteral("onload")) {

You can do all of the above much faster with an if (aName == nsGkAtoms::onload) check (or whatever the exact name of the onload atom is), no need for the nsAutoString.

+        if (!xss->PermitsEventListener(aBody)) {
+          PR_LOG(gXssPRLog, PR_LOG_DEBUG,
+                 ("XSS:Onload:blocked onload event"));
+          return NS_OK;
+        }
+      }
+
+    }
+
+

XXXjst: Remove empty lines here.

- In PRBool nsEventListenerManager::IsEventXSS():

Change the return type to bool.

+  // 1. xss filter
+  nsIDocument* doc = nsnull;
+  nsCOMPtr<nsINode> node = do_QueryInterface(aCurrentTarget);
+  if (!node) {
+    PR_LOG(gXssPRLog, PR_LOG_DEBUG,
+           ("CheckEventForXSS:no node"));
+    return PR_FALSE;
+  }
+  doc = node->GetOwnerDoc();
+  nsIPrincipal* principal = doc->NodePrincipal();

No need for doc here, you can simply call node->NodePrincipal().

+  nsRefPtr<nsXSSFilter> xss;
+  nsresult rv = principal->GetXSSFilter(getter_AddRefs(xss));
+  if (!xss) {
+    return PR_FALSE;

No need for rv here, you don't check it, so don't bother storing it.

false/true, not PR_FALSE/PR_TRUE. Do a global search in this patch for those please, and do s/PRBool/bool/ too.

+  content->GetAttr(kNameSpaceID_None, aListenerStruct->mTypeAtom, handlerBody);
+  NS_ENSURE_SUCCESS(rv, rv);

This checks rv from earlier, store the result of GetAttr() and check that. But don't return rv here, this method returns bool.

+  // 3. actual check
+  if (!xss->PermitsEventListener(handlerBody)) {
+    PR_LOG(gXssPRLog, PR_LOG_DEBUG,
+           ("CheckEventForXSS:XSSFilter blocked event handler."));
+    aListenerStruct->mXSSChecked = PR_TRUE;
+    aListenerStruct->mXSSResult = PR_TRUE;
+    return PR_TRUE;
+  }
+  aListenerStruct->mXSSChecked = PR_TRUE;

Both the if body and the code after the if check sets mXSSChecked to true, do that once before the if check only.

+  aListenerStruct->mXSSResult = PR_FALSE;
+  return PR_FALSE;
+}

- In content/events/src/nsEventListenerManager.h:

 typedef struct {
   nsRefPtr<nsIDOMEventListener> mListener;
   PRUint32                      mEventType;
   nsCOMPtr<nsIAtom>             mTypeAtom;
   PRUint16                      mFlags;
   bool                          mHandlerIsString;
+  bool                          mXSSHandlerWasString;
+  bool                          mXSSChecked;
+  bool                          mXSSResult;

Olli should sign off on all these event related changes too.
 
- In nsJSScriptTimeoutHandler::Init():

+        nsDependentJSString nsStr;

Rename nsStr here too.

- In nsJSThunk::EvaluateScript():

+    if (xss) {
+        NS_ConvertUTF8toUTF16 arg(mScript);

This conversion happens further down in this function too. Do this only once.

- In js/src/jsapi.h:

 /*
+ * Used to check if an xss filter wants to disable eval() and friends.
+ * See js_CheckXSSPermitsJSAction() in jsobj.
+ */
+enum XSSJSAction { xss_eval, xss_timeout, xss_function };
+
+typedef JSBool
+(* JSXSSFilterChecker)(JSContext *cx, JSString *str, enum XSSJSAction action);

Check with the JS engine guys on this API and where it should live. But ultimately xss_timeout is not relevant to the JS engine at all, so that should live elsewhere, and as stated earlier, we should combine the CSP and XSS callbacks.

- In nsLayoutStatics::Initialize():

+  nsXSSUtils::InitializeStatics();
+  nsXSSFilter::InitializeStatics();

Can initializing the XSS filter simply initialize the XSS utils too so we don't need two calls here?

- In nsDataChannel::OpenContentStream():

+            if (!xss->PermitsDataURL(URI())) {
+                PR_LOG(gXssPRLog, PR_LOG_DEBUG, ("XSS in data URL"));
+                return NS_ERROR_NOT_INITIALIZED;

Is NS_ERROR_NOT_INITIALIZED really the error code we want here?
Sorry about the delay. I'll address merging with CSP first.
I don't think it is appropriate to merge the two implementations for the following reasons:
1) Even though they have roughly the same goal at the high-level, they have a different approach. Also, one is opt-in and the other is opt-out, and there is no need for the XSS filter when a CSP is present (unless you explicitly allow inline scripts).
2) As far as I understand, the CSP implementation is still a prototype that may change in the future, either if the work-in-progress W3C draft is modified or if the maintainers feel it should be rewritten in C++ for performance.
3) We chose to write the XSS filter in C++ (and we even created a refcounted class instead of an XPCOM component) to minimize the overhead. If I merge it, I have to rewrite it in JS.

Having said that, my very first XSS filter prototype that I wrote before joining Mozilla was embedded in the CSP implementation, so it's not a big deal for me to make this change. The overhead might actually not be that bad as long as we keep the string matching stuff in a separate C++ XPCOM component. 

On the other hand, merging the callback to the script security manager with the CSP callback is practical. The CSP callback passes a JSContext; I just need an additional pointer to the JavaScript string that is about to be executed. But is shaving one pointer off the JSSecurityCallbacks data structure such a big deal?
Riccardo, you're correct that at some point the CSP implementation may well be rewritten in C++. Sid, Johnny and I discussed merging the implementation and the callback briefly a while ago. Merging the callback makes a lot of sense to me; merging the implementation seems like more of a long-term idea, given the dependency you brought up on both implementations being in the same language. Personally, though, I would advocate addressing the rest of the feedback in comment 24 without worrying about merging the implementation (or callback) at this point, and pushing the implementation/review loop along some more.
Any update on this bug?
I had no time to work on it until last week, and this week I am abroad with limited connectivity. I should be able to submit an updated patch next week.
Do you need some help or are you going to find time to finish this?
I'm sorry about the long delay. I will address jst's comments this week. It looks like just a couple of changes are nontrivial.
Sorry for the long delay. Here are my comments:

> - In nsNullPrincipal::GetXSSFilter() and nsNullPrincipal::SetXSSFilter():
> 
> +  // filter on a null principal makes no sense
> 
> Maybe say: "Null principals are never loaded from anywhere, and thus can
> never have an XSS filter."?

I actually just copied the comment from the CSP code above. It's not
clear to me when nsNullPrincipals are used. It seems that there is
only one reference in the codebase for GetSafeJSContext.
 

> +    // NOTE: if it's onload, we need to check now because it does not
> +    // go through handleevent later. why?
> 
> XXXjst: Check with Olli Pettay on this.

Unfortunately, it seems that the way I implemented lazy checking and
caching does not work for events that target the window, such as those
on the body element (where I was trying onload). I had to move the
check to AddScriptEventListener, which is unfortunately done eagerly,
because there is apparently no reliable way to fetch the event handler
body the first time the event handler is executed.


> 
> - In nsJSThunk::EvaluateScript():
> 
> +    if (xss) {
> +        NS_ConvertUTF8toUTF16 arg(mScript);
> 
> This conversion happens further down in this function too. Do this only once.
> 

nsXSSFilter::PermitsJSUrl does a few conversions already, so it's not
going to make a big difference. Hopefully this path is not
particularly hot. On a related note, are decoding HTML entities and
unescaping URL control characters commutative? If not, which one
should be executed first? I suppose URL decoding... (e.g. a parameter
like %26lt%3Bscript%26gt%3B only becomes <script> if URL decoding runs
before entity decoding).

> - In js/src/jsapi.h:
> 
>  /*
> + * Used to check if an xss filter wants to disable eval() and friends.
> + * See js_CheckXSSPermitsJSAction() in jsobj.
> + */
> +enum XSSJSAction { xss_eval, xss_timeout, xss_function };
> +
> +typedef JSBool
> +(* JSXSSFilterChecker)(JSContext *cx, JSString *str, enum XSSJSAction
> action);
> 
> Check with the js engine guys on this API and where it should live. But
> utlitmately xss_timeout is not relevant to the JS engine at all, so that
> should live elsewhere, and as stated earlier, we should combine the CSP and
> XSS callbacks.

Until we combine the callbacks, I removed the parameter altogether. I
am not planning to use it for policy decisions anytime soon anyway
(input from setTimeout, Function and eval is currently treated in the
same way).

I did not merge the callback yet, as Ian suggested. I'll see what can
be done with the callback in the next few days. By the way, are you sure
that it is so important to minimize the number of callbacks? You just
added two new callbacks since I last pulled from mozilla-central :-)
Attachment #591286 - Attachment is obsolete: true
Attachment #591286 - Flags: review?(jst)
Attachment #628762 - Flags: review?(jst)
>I did not merge the callback yet, as Ian suggested. I'll see what can
>be done with the callback in the next days. By the way, are you sure
>that it is so important to minimize the number of callbacks? You just
>added two new callbacks since I last pulled from mozilla-central :-)

Adding a new callback into eval is most likely unacceptable; this code is performance-critical for some benchmarks. I believe it is for this reason that we started caching the result of JSCSPEvalChecker. We do add and remove callbacks, but we try to avoid calling them on hot code paths.
It seems that I originally misunderstood the purpose of merging the callbacks: I thought you wanted to save space in the memory layout of the structure. Instead, it seems that it's about these callbacks being used on very hot paths. In this case, the merge itself would not improve the situation much: unless these callbacks use some sort of dispatching mechanism that makes calling them very slow (but it seems that you just have an indirect pointer), most of the overhead would come from what the XSS filter actually does. Unfortunately, caching won't help as much as it helped speed up CSP checks. CSP checks need to retrieve the evalOk variable from the contentSecurityPolicy object associated with the document; once that is cached, subsequent eval calls can use the cached value. XSS checks can only cache the result for a particular string being eval'ed. Even if you decide to keep a hash table indexed by string, it will take up space, because the strings cannot be discarded once they are hashed (the alternative is to use a cryptographically secure hash): if the attacker can find a collision, he can reuse the cached result of a benign eval call for his own injected code.
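To make that trade-off concrete, here is a minimal sketch of such a per-string cache (illustrative only; nothing like this is in the patch):

  #include <map>
  #include <string>

  // Sketch of the per-string cache discussed above. Unlike CSP's single cached
  // "is eval allowed" bit, the key must be the full evaluated string: storing
  // only a non-cryptographic hash would let an attacker who finds a collision
  // reuse the cached "allowed" verdict of a benign eval call for his own
  // payload, so the full strings stay alive and the cache costs memory.
  class EvalVerdictCache {
  public:
      bool Lookup(const std::string& script, bool* allowed) const {
          std::map<std::string, bool>::const_iterator it = mVerdicts.find(script);
          if (it == mVerdicts.end())
              return false;
          *allowed = it->second;
          return true;
      }
      void Store(const std::string& script, bool allowed) {
          mVerdicts[script] = allowed;   // the whole string is retained by the cache
      }
  private:
      std::map<std::string, bool> mVerdicts;
  };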
(In reply to Riccardo Pelizzi from comment #31)
> Created attachment 628762 [details] [diff] [review]
> Patch to add an XSS filter to Firefox
> 
> sorry for the long delay. Here are my comments:
> 
> > - In nsNullPrincipal::GetXSSFilter() and nsNullPrincipal::SetXSSFilter():
> > 
> > +  // filter on a null principal makes no sense
> > 
> > Maybe say: "Null principals are never loaded from anywhere, and thus can
> > never have an XSS filter."?
> 
> I actually just copied the comment from the CSP code above. It's not
> clear to me when nsNullPrincipals are used. It seems that there is
> only one reference in the codebase for GetSafeJSContext.

Just a note that when bug 341604 lands there will be documents with null principals (when the document is loaded in an iframe whose sandbox attribute does not contain 'allow-same-origin').
(In reply to Ian Melven :imelven from comment #34)
> just a note that when 341604 lands there will be documents with null
> principals (when the document is loaded in an iframe with sandbox attribute
> not containing 'allow-same-origin')

If we keep the code this way, it would fail to assign an XSS filter to sandboxed iframes; all the XSS checks would then simply allow the execution of any content. Since the sandbox attribute is provided explicitly to frame untrusted content, and since the null principal does not have access to any sensitive data, that is a reasonable fallback policy. If you disagree (for consistency reasons, I suppose), I can look into adding an nsXSSFilter field to null principals...
In the meantime, here is some data from talos:
https://tbpl.mozilla.org/?tree=Try&rev=af6008a3840f
Parent is 3e32a1d1f1a8, from May 29.

I'm afraid some of the talos tests underestimate the overhead. The overhead of most external scripts is 0, because they are probably loaded from the same origin. Tests like sunspider probably calculate elapsed time from the moment the tests start executing, *after* the filter has approved the execution of the scripts involved.
(In reply to Riccardo Pelizzi from comment #35)
> (In reply to Ian Melven :imelven from comment #34)
> > just a note that when 341604 lands there will be documents with null
> > principals (when the document is loaded in an iframe with sandbox attribute
> > not containing 'allow-same-origin')
> 
> if we keep the code this way, it would fail to assign an xss filter on
> sandboxed iframes. Then, all the xss checks would simply allow the execution
> of any content. Since the sandbox attribute is provided explicitly to frame
> untrusted content and since the null principal does not have access to any
> sensitive data, it is a reasonable fallback policy. If you disagree (for
> consistency reasons, i suppose), I can look into adding an nsXSSFilter field
> to null principals...

By default, sandboxed iframes don't allow script, so this might not be a huge concern immediately, especially given the additional reasons you point out. Bug 341604 is still a ways from landing; I just bring it up as a heads-up :)
(In reply to Riccardo Pelizzi from comment #36)
> In the meantime, here is some data from talos:
> https://tbpl.mozilla.org/?tree=Try&rev=af6008a3840f
> Parent is 3e32a1d1f1a8, from May 29.
> 
> I'm afraid some of the talos tests underestimate the overhead. The overhead
> of most external scripts is 0, because they are probably loaded from the
> same origin. Tests like sunspider probably calculate elapsed time from the
> moment the tests start executing, *after* the filter has approved the
> execution of the scripts involved.

Can you help me understand how to see the impact on the talos tests? They are all 'green', so I know it wasn't a huge hit. How do I compare this try talos run to the 'baseline'?
Frankly, I am having trouble understanding the test results as well. On graphs.mozilla.org you can find the results for the parent, 3e32a1d1f1a8, but I haven't found an easy way to compare them side by side.
(In reply to Riccardo Pelizzi from comment #39)
> frankly i am having trouble understanding the test results as well. In
> graphs.mozilla.org you can find the results for the parent, 3e32a1d1f1a8,
> but I haven't found an easy way to compare them side-by-side.

jmaher just wrote a blog post which may help: http://elvis314.wordpress.com/2012/07/13/mozilla-a-team-how-to-compare-talos-numbers-from-try-server-to-trunk-13/
I wonder if this patch should also add strings to mobile/locales/en-US/overrides/netError.dtd
Rebased patch.
Added strings to mobile/locales/en-US/overrides/netError.dtd

I ran a talos test and used Ian's link to compare the performance to recent commits to the Firefox tree. This is the result for the Linux64 tp5n:
Firefox min: 263.389
Firefox max: 302.449
XSSFilter result: 297.202

Looking at the graph, anything less than 290 looks like an outlier:
http://graphs.mozilla.org/graph.html#tests=[[206,113,15]]&sel=1343866158527.742,1344211147927.0354&displayrange=30&datatype=running

It seems that the patch does not affect tp5n.

Btw, do I also have to use the new license block for all files?
Attachment #628762 - Attachment is obsolete: true
Attachment #628762 - Flags: review?(jst)
Attachment #650917 - Flags: review?(jst)
(In reply to Riccardo Pelizzi from comment #42)
>
> Btw, do I also have to use the new license block for all files?

Thanks for continuing to push on this, Riccardo !

I believe that yes, you should use the new license block everywhere, although you should have picked this up for existing files when you rebased, I would think?
Fixes some mochitests that were using eval and broke, provides more useful information to observers, and uses the new license block.
Attachment #650917 - Attachment is obsolete: true
Attachment #650917 - Flags: review?(jst)
Attachment #651164 - Flags: review?(jst)
jst - can you review this when you get a chance?
We should make sure this implementation isn't vulnerable to the same issues mentioned in http://homakov.blogspot.com/2013/02/hacking-with-xss-auditor.html and http://homakov.blogspot.com/2013/02/hacking-facebook-with-oauth2-and-chrome.html
Flags: needinfo?(jst)
Blocks: WBGP
See Also: → 985704
No longer blocks: csp-w3c-2
Clearing requests here after talking to Sid, this'll need a bunch of work to move forward at this point.
Flags: needinfo?(jst)
Attachment #651164 - Flags: review?(jst)
See Also: → 1045902
Assignee: r.pelizzi → nobody
Here's another exploit for Chrome's XSS Auditor that's worth avoiding: http://blog.portswigger.net/2015/08/abusing-chromes-xss-auditor-to-steal.html
Updating the bug title so it is easier to find: the X-XSS-Protection header is currently supported by IE, Edge, Safari, and Chrome.
Summary: Heuristics to block reflected XSS (like in IE8) → Heuristics to block reflected XSS via X-XSS-Protection HTTP header
Inspired by Tom going through all sec-want bugs, I'm looking at this bug again.
Out of convenience, I'll quote myself from some place else:

There have been numerous discussions, the latest one in late 2016, and we
had come to the conclusion that it is currently not worth the effort for
Firefox to provide a built-in feature:

- An XSS filter cannot protect against stored (aka persistent) XSS or DOM
XSS, which has become more and more prevalent recently.
- An XSS filter is prone to security holes if not maintained very
diligently and actively. It is hard to justify security engineering time
on a feature that provides limited value.
- Lastly, there is an XSS filter in NoScript that people can use.
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → WONTFIX
Assignee: nobody → nobody
