Closed Bug 389508 (xxx) Opened 17 years ago Closed 16 years ago

Implement Cross-site XMLHttpRequest

Categories

(Core :: DOM: Core & HTML, defect, P2)


Tracking


RESOLVED FIXED
mozilla1.9.1b1

People

(Reporter: sicking, Assigned: sicking)

References

Details

(Keywords: dev-doc-complete, fixed1.9.1)

Attachments

(4 files, 11 obsolete files)

79.56 KB, patch
sicking
: review+
sicking
: superreview+
Details | Diff | Splinter Review
109.94 KB, patch
sicking
: review+
sicking
: superreview+
Details | Diff | Splinter Review
13.57 KB, patch
sicking
: review+
Details | Diff | Splinter Review
32.16 KB, patch
sicking
: review+
sicking
: superreview+
Details | Diff | Splinter Review
Assignee: nobody → jonas
Attached patch Patch to fix (obsolete) — Splinter Review
Attachment #273736 - Flags: superreview?(jst)
Attachment #273736 - Flags: review?(jst)
Short description of changes:

content/xslt/ + xpcom/:
These changes simply move a whitespace tokenizer from the XSLT code to xpcom so that I can reuse it. This generally seems like a handy class to have around, as I'm sure we do whitespace tokenization elsewhere.

The class doesn't follow XPCOM naming conventions (nextToken rather than NextToken), let me know if you want that fixed.
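For readers unfamiliar with the class being moved, here is a rough self-contained sketch of what such a whitespace tokenizer looks like (the class and method names are illustrative; the real class uses Mozilla string types rather than std::string):

```cpp
#include <cassert>
#include <string>

// Illustrative stand-in for the tokenizer moved from content/xslt/ to
// xpcom/. It splits its input on runs of ASCII whitespace.
class WhitespaceTokenizer {
public:
    explicit WhitespaceTokenizer(const std::string& aSource)
        : mIter(aSource.begin()), mEnd(aSource.end()) {
        skipWhitespace();
    }

    // Lower-case method names mirror the style mentioned above
    // (nextToken rather than NextToken).
    bool hasMoreTokens() const { return mIter != mEnd; }

    std::string nextToken() {
        std::string::const_iterator start = mIter;
        while (mIter != mEnd && !isWhitespace(*mIter)) {
            ++mIter;
        }
        std::string token(start, mIter);
        skipWhitespace();
        return token;
    }

private:
    static bool isWhitespace(char c) {
        return c == ' ' || c == '\t' || c == '\r' || c == '\n';
    }
    void skipWhitespace() {
        while (mIter != mEnd && isWhitespace(*mIter)) {
            ++mIter;
        }
    }
    std::string::const_iterator mIter;
    std::string::const_iterator mEnd;
};
```

The source string must outlive the tokenizer, since only iterators are stored.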

caps/ + js/:
Add an 'allow cross site' flag to nsIScriptSecurityManager::CheckConnect to avoid getting stopped by caps immediately.

*Document* + dom/:
Improve the kLoadAsData mode such that it disables plugins as well as on* attributes. Also add the ability to disable getting/setting document.cookie.

The rest of the changes are the actual meat plus testcases. Let me know if you have questions as I know this is last minute.
OS: Windows XP → All
Hardware: PC → All
Comment on attachment 273736 [details] [diff] [review]
Patch to fix

This is too big a patch in a security-sensitive new feature to make me comfortable with only one review.
Attachment #273736 - Flags: superreview?(jst) → superreview?(dveditz)
The more people that look at it, the happier I am. My plan was to get bz to do a second review with an eye toward the security stuff once he has time. The only problem is that the freeze is tomorrow.
So some quick comments on things that jump out at me when skimming (not at all a real security review):

1) The IsSameOrigin static you define is touching channels before they've responded, right?  I assume that's the reason you're not using the channel principal: because it might not really be correct yet (e.g. for a jar: wouldn't have the cert yet).  Might want to document that.

2) Why are we using CheckConnect here, exactly?  Just to allow CAPS to override the behavior?  If so, why not use CAPS to enable that check to pass cross-site instead of hacking the API?

(In reply to comment #5)
> So some quick comments on things that jump out at me when skimming (not at all
> a real security review):
> 
> 1) The IsSameOrigin static you define is touching channels before they've
> responded, right?  I assume that's the reason you're not using the channel
> principal: because it might not really be correct yet (e.g. for a jar:
> wouldn't have the cert yet).  Might want to document that.

Where would I get the principal? GetOwner? In any case, I do use the channel before it's opened. In fact, I always do it before it's opened since I might not want to open the channel depending on what IsSameOrigin returns.

> 2) Why are we using CheckConnect here, exactly?  Just to allow CAPS to
> override the behavior?  If so, why not use CAPS to enable that check to pass
> cross-site instead of hacking the API?

Yes, we're checking with CAPS to give it a chance to override and deny. Not sure I follow how you mean we should do it instead? By setting the ...XMLHttpRequest.open pref in all.js?
What would that buy us in light that we'll probably add access-control support to more things in the future?

Not opposed to it, just wondering what the rationale is.
> Where would I get the principal? 

nsIScriptSecurityManager::GetChannelPrincipal.  Note that this is really designed to work with channels in OnStartRequest (so in particular it handles original vs final URIs in a particular way, looks at owners, etc).  I guess it doesn't really work for your use case.  :(

And yes, I meant to just add a pref in all.js.  The rationale is to actually use this infrastructure we have, since we have it.  Less code, fewer ways to accomplish the same effect, that sort of thing.  I'm not sure why the "other things" part matters.
Attached patch Patch v2 (obsolete) — Splinter Review
So I couldn't simply set "capability.policy.default.XMLHttpRequest.open" to allAccess since that allows not just cross-site requests, but also cross-site access to the .open method, which is scary.

So I renamed the CheckConnect 'property' to "open-uri" to be able to set a different policy for the request itself.
Attachment #273736 - Attachment is obsolete: true
Attachment #273866 - Flags: superreview?(dveditz)
Attachment #273866 - Flags: review?(jst)
Attachment #273736 - Flags: superreview?(dveditz)
Attachment #273736 - Flags: review?(jst)
Comment on attachment 273866 [details] [diff] [review]
Patch v2

- In nsXMLHttpRequest::GetResponseHeader():

+  // Check for dangerous headers
+  if (mState & XML_HTTP_REQUEST_USE_XSITE_AC) {
+    const char *kCrossOriginUnsafeHeaders[] = {
+      "Cache-Control", "Content-Language", "Content-Type", "Expires",
+      "Last-Modified", "Pragma"
+    };

Those are the safe ones, right? You should probably rename that variable to SafeHeaders and not UnsafeHeaders :)

More later...
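The intent of that whitelist is that cross-site callers may only read a small set of "safe" response headers via getResponseHeader. A self-contained sketch of the check (function names are hypothetical; the real code uses Mozilla string APIs and a case-insensitive header comparison):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Case-insensitive ASCII comparison, since HTTP header names are
// case-insensitive.
static bool EqualsIgnoreCase(const std::string& a, const std::string& b) {
    if (a.size() != b.size()) {
        return false;
    }
    for (size_t i = 0; i < a.size(); ++i) {
        if (std::tolower((unsigned char)a[i]) !=
            std::tolower((unsigned char)b[i])) {
            return false;
        }
    }
    return true;
}

// Returns true only for the headers a cross-site caller is allowed to
// read; everything else (Set-Cookie, etc.) is hidden.
static bool IsCrossOriginSafeHeader(const std::string& name) {
    static const char* const kSafeHeaders[] = {
        "Cache-Control", "Content-Language", "Content-Type", "Expires",
        "Last-Modified", "Pragma"
    };
    for (const char* safe : kSafeHeaders) {
        if (EqualsIgnoreCase(name, safe)) {
            return true;
        }
    }
    return false;
}
```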
In nsCrossSiteListenerProxy you call mOuter->OnStopRequest twice in some cases - first from ForwardRequest, the second time when you get OnStopRequest (when you cancel a channel, you still get OnStopRequest called)


Perhaps your stream listeners should check nsIHttpChannel::requestSucceeded (to catch 404 errors).

+nsACProxyListener::GetInterface(const nsIID & aIID, void **aResult)
+{
+  return QueryInterface(aIID, aResult);

I'm not really a fan of implementing GetInterface that way... makes it less clear what interfaces you're actually handing out here

+    const char *kCrossOriginUnsafeHeaders[] = {
+      "Cache-Control", "Content-Language", "Content-Type", "Expires",
+      "Last-Modified", "Pragma"
+    };

I think more efficient, at least on ELF, would be:
  static const char kCrossOriginUnsafeHeaders[][N] = { ... };
for a sufficiently large N to fit the largest string

Shouldn't IsSameOrigin be static?

+  if (!(mState & XML_HTTP_REQUEST_XSITEENABLED)) {
+    listener = new nsCrossSiteListenerProxy(listener, mPrincipal);

please add a comment that redirects are the reason why you always have to use the listener proxy

+  // Check that we havn't already opened the channel. We can't rely on

typo here (havn't)

+  // Disable redirects for non-get cross-site requests entierly for now

typo (entierly)

Comment on attachment 273866 [details] [diff] [review]
Patch v2

- In nsCrossSiteListenerProxy::HandleProcessingInstruction():

+    if (!res) {
+      // parsing attribute value failed or unknown/dulicated attribute

"duplicated"?

+static PRBool
+EatLWS(const char*& aIter, const char* aEnd)
+{
+  if (aIter + 1 < aEnd && *aIter == 13 && *(aIter + 1) == 10) {
+    aIter += 2;
+  }
+
+  PRBool res = PR_FALSE;
+  while (aIter < aEnd && (*aIter == 9 || *aIter == 32)) {
+    ++aIter;
+    res = PR_TRUE;
+  }
+  
+  return res;
+}

Wanna document what these Eat* functions do, just to help people reading this code? And please use '\n', '\r', '\t', etc rather than the numeric character values.
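For illustration, here is a commented version of EatLWS with character literals as suggested (ported to plain bool for self-containment; behavior matches the snippet above):

```cpp
#include <cassert>

// Consume linear whitespace (LWS) per the HTTP grammar: an optional CRLF
// followed by tabs or spaces. Advances aIter past whatever was consumed.
// Returns true only if at least one tab or space was eaten, so a bare
// CRLF with nothing after it does not count as LWS (though aIter still
// moves past it, matching the original code).
static bool EatLWS(const char*& aIter, const char* aEnd) {
    if (aIter + 1 < aEnd && aIter[0] == '\r' && aIter[1] == '\n') {
        aIter += 2;  // skip the optional leading CRLF
    }
    bool res = false;
    while (aIter < aEnd && (*aIter == '\t' || *aIter == ' ')) {
        ++aIter;
        res = true;
    }
    return res;
}
```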

- In EatToChar():

+  static char emptyStatic[] = { '\0' };
+
+  aIter = start;
+  return Substring(emptyStatic, emptyStatic);

Couldn't you return EmptyCString() here?

- In content/base/src/nsParserUtils.cpp:

 #define SKIP_WHITESPACE(iter, end_iter)                          \
   while ((iter) != (end_iter) && nsCRT::IsAsciiSpace(*(iter))) { \
     ++(iter);                                                    \
-  }                                                              \
-  if ((iter) == (end_iter))                                      \
-    break
+  }
 
 #define SKIP_ATTR_NAME(iter, end_iter)                            \
   while ((iter) != (end_iter) && !nsCRT::IsAsciiSpace(*(iter)) && \
          *(iter) != '=') {                                        \
     ++(iter);                                                     \
-  }                                                               \
-  if ((iter) == (end_iter))                                       \
-    break
+  }

These changes aren't right, you'll want to stop the outer loops (where these macros are used) when you hit the end in these macros.




- In nsACProxyListener::OnStartRequest():

+  nsresult rv = aRequest->GetStatus(&status);
+
+  if (NS_SUCCEEDED(rv)) {
+    rv = status;
+  }
+
+  nsCOMPtr<nsIHttpChannel> http;
+  if (NS_SUCCEEDED(rv)) {
+    http = do_QueryInterface(aRequest, &rv);
+  }

Wanna move the rv = status code into this if-check to eliminate one of the checks?

- In nsXMLHttpRequest::OpenRequest():

+    if (!method.LowerCaseEqualsASCII("get")) {

LowerCaseEqualsLiteral()? There's other places where existing code could do the same too, up to you if you want to change that as well.

- In nsXMLHttpRequest::Open():

-    rv = secMan->CheckConnect(cx, targetURI, "XMLHttpRequest","open");
+    rv = secMan->CheckConnect(cx, targetURI, "XMLHttpRequest","open-uri");

Add a space after ',' there while you're here :)

- In most test files:

\ No newline at end of file

Might as well add one :)

- In modules/libpref/src/init/all.js:

+pref("capability.policy.default.XMLHttpRequest.open-uri", "allAccess");

Yuck, but oh well.

r=jst with that.
Attachment #273866 - Flags: review?(jst) → review+
Comment on attachment 273866 [details] [diff] [review]
Patch v2

sr=dveditz (with the same whitespace macro concerns as jst).
Attachment #273866 - Flags: superreview?(dveditz) → superreview+
> +    const char *kCrossOriginUnsafeHeaders[] = {
> +      "Cache-Control", "Content-Language", "Content-Type", "Expires",
> +      "Last-Modified", "Pragma"
> +    };
> 
> I think more efficient, at least on ELF, would be:
>   static const char kCrossOriginUnsafeHeaders[][N] = { ... };
> for a sufficiently large N to fit the largest string

I prefer to keep it simple, this stuff isn't critical.

(In reply to comment #12)
> - In EatToChar():
> 
> +  static char emptyStatic[] = { '\0' };
> +
> +  aIter = start;
> +  return Substring(emptyStatic, emptyStatic);
> 
> Couldn't you return EmptyCString() here?

Unfortunately no, since I'm returning an nsDependentString by value :( Our class hierarchy needs to be simpler... moz2

> These changes aren't right, you'll want to stop the outer loops (where these
> macros are used) when you hit the end in these macros.

Added another argument to SKIP_WHITESPACE, what to return if we hit the end of the file.
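The revised macro might look roughly like this (the parameter name and the use of std::isspace are assumptions; the real code uses nsCRT::IsAsciiSpace):

```cpp
#include <cassert>
#include <cctype>

// Version of the macro with an explicit "what to return at end of input"
// argument, replacing the old trailing `break` that only worked when the
// macro was expanded inside a loop.
#define SKIP_WHITESPACE(iter, end_iter, end_res)                           \
    while ((iter) != (end_iter) && std::isspace((unsigned char)*(iter))) { \
        ++(iter);                                                          \
    }                                                                      \
    if ((iter) == (end_iter)) {                                            \
        return (end_res);                                                  \
    }

// Example caller: return the first non-space character, or '\0' if the
// input is exhausted.
static char FirstNonSpace(const char* iter, const char* end) {
    SKIP_WHITESPACE(iter, end, '\0');
    return *iter;
}
```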

> - In nsACProxyListener::OnStartRequest():
> 
> +  nsresult rv = aRequest->GetStatus(&status);
> +
> +  if (NS_SUCCEEDED(rv)) {
> +    rv = status;
> +  }
> +
> +  nsCOMPtr<nsIHttpChannel> http;
> +  if (NS_SUCCEEDED(rv)) {
> +    http = do_QueryInterface(aRequest, &rv);
> +  }
> 
> Wanna move the rv = status code into this if-check to eliminate one of the
> checks?

Then I'd need to check after assigning |rv = status| anyway, no?
Attached patch Patch v3 (obsolete) — Splinter Review
Fixes review comments. This compiles, but I haven't tested it yet since I can't land tonight anyway. Time for sleep.
Attachment #273866 - Attachment is obsolete: true
(In reply to comment #14)
> I prefer to keep it simple, this stuff isn't critical.

I meant codesize-wise.
Attached patch Additional fix (obsolete) — Splinter Review
The old patch wouldn't call OnStopRequest on the inner listener if we called ForwardRequest from OnStopRequest. This fixes that, and also adds a status check at the top of OnStartRequest. This check is possibly not strictly needed, but it feels like the right thing to do.
Attachment #274107 - Flags: superreview?
Attachment #274107 - Flags: review?
Attachment #274107 - Flags: superreview?
Attachment #274107 - Flags: superreview+
Attachment #274107 - Flags: review?
Attachment #274107 - Flags: review+
Depends on: 389943
I think this caused bug 389943
Depends on: 389948
No longer depends on: 389948
Attached patch Fix regressions and bugs (obsolete) — Splinter Review
This patch fixes the two regressions already filed (bug 389943 and bug 389964).

I also noticed that I broke multipart requests since I nested the multipart proxy and the cross-site proxy the wrong way.

I lastly used the wrong listener when doing POSTs which meant that no access-control check was done on the result of the initial GET.

While fixing all this I noticed that our multipart implementation is kind of wonky in that it always transitions to readystate 1. I fixed this somewhat by checking the nsIMultiPartChannel::isLastPart flag in OnStopRequest. This isn't perfect due to limitations in our multipart decoder, but at least it does the right thing for properly formatted multipart responses. I can remove this stuff if you want since it's orthogonal to the original bug.

It would be great to get this stuff in with the M7 release to avoid releasing regressions and bugs in the new cross-site xhr implementation.
Attachment #274337 - Flags: superreview?(jst)
Attachment #274337 - Flags: review?(jst)
Comment on attachment 274337 [details] [diff] [review]
Fix regressions and bugs

Oh, and I noticed that the original code failed to do access-control checks on non-XML data in some circumstances. The reason is that the parser will return an error from OnDataAvailable if the sink doesn't QI to nsIHTMLContentSink.
Are there consumers who expect setRequestHeader to silently no-op for non-HTTP implementations or something?
Yes, see bug 389943. Though there are probably mostly internal ones since web-content has had little reason, or ability, to open() file-urls before.

Hmm.. that gets me thinking. Should we have a CheckLoadURI check on each redirect even with the access-control stuff? The CheckLoadURI thing has always been a belts-and-suspenders thing which might be good to apply here too.
Probably a good idea.  More interestingly, we should do a content policy check there just like we do at the original load start, imo.
Flags: blocking1.9+
Target Milestone: --- → mozilla1.9 M7
Comment on attachment 274337 [details] [diff] [review]
Fix regressions and bugs

- In nsCrossSiteListenerProxy::OnDataAvailable():

+  rv = stackedListener->OnDataAvailable(aRequest, aContext, stream, aOffset,
+                                        aCount);
+  if (rv == NS_ERROR_NO_INTERFACE) {
+    rv = NS_OK;
+  }

This seems worthy of a comment :)

r+sr=jst
Attachment #274337 - Flags: superreview?(jst)
Attachment #274337 - Flags: superreview+
Attachment #274337 - Flags: review?(jst)
Attachment #274337 - Flags: review+
Depends on: 390219
No longer depends on: 390219
Attached patch Fix regressions and bugs v2 (obsolete) — Splinter Review
This is a better way to deal with the case when the parser QIs to nsIHTMLContentSink. The new code will ignore all errors that happen after we forward the request to the outer listener.
Attachment #274337 - Attachment is obsolete: true
Attachment #274548 - Flags: superreview+
Attachment #274548 - Flags: review+
Latest bug checked in. Marking this baby FIXED
Status: NEW → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
Yay!  :)
>+  // When we forward the request we also terminate the parsing which will
>+  // result in an error bubbling up to here. We want to ignore the error
>+  // in that case.

Did you mean to cancel the channel then?  Returning error from OnDataAvailable handles that automatically, but as it is necko will keep pumping in data (which you will ignore), no?  Or do you actually want that data?
The terminated parser is the one used to parse the PIs in the stream. Once I've determined that the PIs (or headers) grant access I want to terminate that parser, but keep pumping data to nsXMLHttpRequest.

So no, I do not want to cancel the channel in that case.

If the PIs/headers do say that access is not granted I do want to cancel the channel, and that happens a few lines down.
Test is checked in as you pointed out.
Flags: in-testsuite? → in-testsuite+
Depends on: 392322
Depends on: 393918
I am finding that allowed XXX requests are generating errors in the console. In the latest development build of Minefield, http://webziek.nl/catalog/catalog.html works properly, but errors like the following are being logged in the Error Console:

Security Error: Content at http://webziek.nl/catalog/catalog.html may not load data from http://xtra.davex.nl/catalog.xml.
Ok thanks. Any idea why I get this error?

Error: uncaught exception: [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIXMLHttpRequest.setRequestHeader]"  nsresult: "0x80004005 (NS_ERROR_FAILURE)"  location: "JS frame :: http://localhost:2080/remote/test-page.html :: <TOP_LEVEL> :: line 6"  data: no]

when I return the following HTTP response from another domain:

HTTP/1.1 200 OK
Date: Wed, 05 Sep 2007 07:02:25 GMT
Server: Apache-Coyote/1.1
Content-Access-Control: allow <*>
Content-Type: text/xml;charset=ISO-8859-1
Content-Length: 91
Set-Cookie: JSESSIONID=0CBE8A640BF54D920C2DFE36E6D916D4; Path=/xhr-sample



<?xml version="1.0" encoding="UTF-8"?>
<contents>
Sample dynamic text.
</contents>
> Ok thanks. Any idea why I get this error?

Is your code actually calling setRequestHeader directly?  If so, are you trying to set a header that's not allowed for cross-site requests?
The code should be updated to use the "Access-Control" HTTP header as opposed to "Content-Access-Control". Hopefully before Firefox 3!

Also, are non-GET requests already supported? (Using If-Method-Allowed and all that.)

Can sites inspect the Referer-Root header (which doesn't suffer from information leakage and can therefore be safely included with every access control request).
Anne, file new bugs on that? (and mention the bug numbers here?)
Yes, non-GET methods are supported. Though I used the response header name from the original draft which was simply "Allow". I don't think it sends any extra request header.

It would be great if you could file separate bugs on using the latest versions of the various header names. I also need to update the pattern-matching algorithm to match the latest version of the spec, as well as do the right thing for international DNS names.
I don't really have the bandwidth right now. I'll see what I can do. The response header is indeed "Allow", but the server might want to know which method you're planning to do. That's specified in "In-Method-Allowed".
See bug 397877, bug 397878, and bug 397879 for the aforementioned issues.
Alias: xxx
Depends on: 397877, 397878, 397879
What is the state of the Cross-site XMLHttpRequest?

I see a lot of strange error messages (in the Error Console) of the form:

----

Error: 
Source File: http://212.123.221.64:4040/dvrec/camera?unique=1197453631244&session=90AF016A71AA993766043D4B0D4B4211&cameraid=1&command=getLastPointChangedPromilage
Line: 1, Column: 110
Source Code:
<?xml version="1.0" encoding="UTF-8"?><?access-control allow="*"?><RESULT type="CAMERA" id="1" errorcode="0"><LASTPOINTCHANGEDPROMILAGE>22</LASTPOINTCHANGEDPROMILAGE></RESULT>

----

There seems to be nothing wrong with this XML.
This happens on an XMLHttpRequest using the GET method (cross-domain).

When using the POST method I always get 0x80040111 (NS_ERROR_NOT_AVAILABLE), whatever I try with the "Access-Control" and "Allow" headers.
The API for cross-site non-GET requests has changed. Before Firefox 3 is released this should probably be implemented along with the other requirements.

http://dev.w3.org/2006/webapi/XMLHttpRequest-2/
http://dev.w3.org/2006/waf/access-control/
First pass at documentation can be found here:
http://developer.mozilla.org/en/docs/Cross-Site_XMLHttpRequest
Keywords: dev-doc-needed
Depends on: 405341
Patch was backed out, so reopening.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Attached patch Reland old implementation (obsolete) — Splinter Review
I'm going to attach two patches here that implement most of the Access-Control spec and apply it to the XMLHttpRequest implementation. The first patch relands the old implementation that was written for FF3.0; the second patch implements changes to the spec since then, as well as a much better set of tests that takes advantage of new capabilities in mochitest.

With the two patches the implementation isn't yet fully complete. The following things still need to be done:
1. Implement cookie-less requests
2. Implement redirects for preflighted requests
3. Fix issues with upload events that allow server detection
4. Go through and make sure that all issues found during security reviews are
   addressed (I think the only remaining thing is setting the document URI of
   the parsed document to be that of the loading page to make sure that security
   checks based on document URI rather than principal aren't a problem)
5. Add new objects to the cycle collector if needed

The first two require changes to necko which is why I want to keep those patches separate. We might not want to land until all of this is fixed, but it's nicer to keep the patches separate review-wise.
Attachment #273935 - Attachment is obsolete: true
Attachment #274107 - Attachment is obsolete: true
Attachment #274548 - Attachment is obsolete: true
Attachment #338425 - Flags: superreview?(jst)
Attachment #338425 - Flags: review?(jst)
Attached patch Update to spec changes (obsolete) — Splinter Review
Attachment #338426 - Flags: superreview?(jst)
Attachment #338426 - Flags: review?(jst)
Attachment #338425 - Attachment is patch: true
Attachment #338425 - Attachment mime type: application/text → text/plain
Now that XHR is going to be part of the DOM, there should be a way of hiding password fields from other domains, i.e. a property that could block other-domain calls from reading the value or, better yet, encrypt the password in the DOM itself. The latter would be much better and would still keep handlers valid; just the value would be encrypted (via the encryption key of the site itself, if it provides encryption). This could prevent huge password-sniffing mistakes, and it should be implemented regardless as a way of safeguarding passwords from extensions and the like. Maybe not in this bug though...
Yes, definitely unrelated to this bug. I'd suggest raising it in the news groups first so we can discuss the security aspects of what you are proposing.
Attachment #338425 - Flags: superreview?(jst)
Attachment #338425 - Flags: superreview+
Attachment #338425 - Flags: review?(jst)
Attachment #338425 - Flags: review+
Flags: blocking1.9+ → blocking1.9.1+
Priority: -- → P1
Target Milestone: mozilla1.9alpha7 → mozilla1.9.1b1
Priority: P1 → P2
This prevents using upload events to test whether a server exists by ensuring that we require a preflight if there are upload event listeners registered. It also ensures that if we've already started an unpreflighted request we don't allow adding progress listeners.

Also kills the unused #define XML_HTTP_REQUEST_HAS_AC_UNSAFE_HEADERS
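The preflight decision being described can be sketched as follows (the GET/POST "simple method" set is an assumption based on the Access-Control draft of the time, and the function name is hypothetical):

```cpp
#include <cassert>
#include <string>

// Rough sketch of when a cross-site request needs a preflight OPTIONS
// round-trip: when it sets headers outside the safe whitelist, when
// upload progress listeners are registered (so a server can't be probed
// silently), or when the method isn't one of the "simple" methods.
static bool NeedsPreflight(const std::string& method,
                           bool hasUnsafeHeaders,
                           bool hasUploadListeners) {
    if (hasUnsafeHeaders || hasUploadListeners) {
        return true;
    }
    // Assumed simple-method set; the real code has additional
    // content-type checks for POST.
    return !(method == "GET" || method == "POST");
}
```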
Attachment #340254 - Flags: superreview?(jst)
Attachment #340254 - Flags: review?(Olli.Pettay)
Attachment #340254 - Attachment is patch: true
Attachment #340254 - Attachment mime type: application/octet-stream → text/plain
Comment on attachment 338426 [details] [diff] [review]
Update to spec changes

- In nsCrossSiteListenerProxy::CheckRequestApproved():

+  nsCAutoString allowedOriginHeader, origin;
+  rv = http->GetResponseHeader(
+    NS_LITERAL_CSTRING("Access-Control-Allow-Origin"), allowedOriginHeader);
+  NS_ENSURE_SUCCESS(rv, rv);
+
+  if (!allowedOriginHeader.EqualsLiteral("*")) {

origin, declared above, is only used inside this if check, so the declaration could be moved.

+      foundMethod = foundMethod || mPreflightMethod.Equals(method);

foundMethod |= mPreflightMethod.Equals(method)? :)

+    NS_ENSURE_TRUE(foundMethod, NS_ERROR_DOM_BAD_URI);

Do we really want a message on the console if this is not the case?

- In nsACProxyListener::AddResultToCache():


+    // Cap at 24 hours. This also avoids overflow
+    if (age > 86400) {
+      age = 86400;
+    }

Use PR_MIN() here?

That's all for now, will continue later. Note to self, continue at nsAccessControlLRUCache::CacheEntry::PurgeExpired()
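The two checks commented on above, the origin match in CheckRequestApproved() and the 24-hour cache cap in AddResultToCache(), can be sketched as follows (function names are hypothetical; the real code uses Mozilla string and time APIs):

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// The response's Access-Control-Allow-Origin header must be the "*"
// wildcard or exactly equal the requesting page's origin.
static bool OriginAllowed(const std::string& allowOriginHeader,
                          const std::string& requestOrigin) {
    return allowOriginHeader == "*" || allowOriginHeader == requestOrigin;
}

// Clamp the parsed max-age value for the preflight result cache to
// 24 hours (86400 seconds), which also guards against overflow when
// converting to an expiry timestamp.
static long CapPreflightAge(long ageSeconds) {
    return std::min(ageSeconds, 86400L);
}
```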
Comment on attachment 340254 [details] [diff] [review]
Prevent server sniffing using upload events

>+  PRPackedBool mHandlersForbidden;
What is this?

>+++ b/content/events/src/nsEventListenerManager.h
>@@ -49,6 +49,8 @@
> 
> class nsIDOMEvent;
> class nsIAtom;
>+class nsIWidget;
>+struct nsPoint;
What are these changes?

>@@ -131,6 +133,11 @@ public:
> 
>   static PRUint32 GetIdentifierForEvent(nsIAtom* aEvent);
> 
>+  PRBool HasListeners()
>+  {
>+    return !mListeners.IsEmpty();
>+  }
>+
This looks like a useful method. Add it to the interface. Then you don't need to
add those _AMBIGUOUS macros.

Note, because of other changes, this patch doesn't apply cleanly.
Attachment #340254 - Flags: review?(Olli.Pettay) → review-
Comments (though please follow-up on public-webapps): I don't think addEventListener should start throwing. The event should simply not be dispatched. Also, should a preflight request really be "implied" if you register a listener for the "foo" event?
Good point and I agree. AddEventListener shouldn't throw.

(In reply to comment #52)
> (From update of attachment 340254 [details] [diff] [review])
> >+  PRPackedBool mHandlersForbidden;
> What is this?

From old versions of this patch, will remove.

> >+++ b/content/events/src/nsEventListenerManager.h
> >@@ -49,6 +49,8 @@
> > 
> > class nsIDOMEvent;
> > class nsIAtom;
> >+class nsIWidget;
> >+struct nsPoint;
> What are these changes?

These classes are used in this file but not #included. Probably #included above nsEventListenerManager.h in nsEventListenerManager.cpp. Not really needed if I don't include this class directly.

> This looks like a useful method. Add it to the interface. Then you don't need
> to add those _AMBIGUOUS macros.

Will do. Though we really should make more of the internal code use nsELM to avoid the virtual code overhead in both size and cycles.

(In reply to comment #53)
> Comments (though please follow-up on public-webapps): I don't think
> addEventListener should start throwing. The event should simply not be
> dispatched.

What I didn't like about that approach was that it basically means that your code "silently fails", which can be hard to debug. I suppose we can put a warning in the console to help with that, but it still feels bad.

I'll raise it on public-webapps
Attachment #338426 - Flags: superreview?(jst)
Attachment #338426 - Flags: superreview+
Attachment #338426 - Flags: review?(jst)
Attachment #338426 - Flags: review+
Comment on attachment 338426 [details] [diff] [review]
Update to spec changes

The rest looks good. r+sr=jst
Address review comments
Attachment #340254 - Attachment is obsolete: true
Attachment #340478 - Flags: superreview?(jst)
Attachment #340478 - Flags: review?(Olli.Pettay)
Attachment #340254 - Flags: superreview?(jst)
Comment on attachment 340478 [details] [diff] [review]
Prevent server sniffing using upload events. v2

Could you use -U 8 -P with patches.

>@@ -1629,7 +1628,8 @@
>                                         PRUint64 aLoaded, PRUint64 aTotal,
>                                         PRUint64 aPosition, PRUint64 aTotalSize)
> {
>-  if (aType.IsEmpty()) {
>+  NS_ASSERTION(aTarget, "null target");
>+  if (aType.IsEmpty() || (aTarget == mUpload && !AllowUploadProgress())) {
Could you change this to something like 
  if (aType.IsEmpty() || (!AllowUploadProgress() && (aTarget == mUpload ||
                                                     aType.EqualsLiteral(UPLOADPROGRESS_STR)))) {
Then you don't need the following change:

>@@ -3041,7 +3043,8 @@
....
>+  if (!mErrorLoad &&
>+      (!upload || AllowUploadProgress())) {



>@@ -2541,7 +2541,9 @@
>     
>     nsCAutoString method;
>     httpChannel->GetRequestMethod(method);
>-    if (!mACUnsafeHeaders.IsEmpty()) {
>+    if (!mACUnsafeHeaders.IsEmpty() ||
>+        HasListenersFor(NS_LITERAL_STRING(UPLOADPROGRESS_STR)) ||
>+        (mUpload && mUpload->HasListeners())) {
>       mState |= XML_HTTP_REQUEST_NEED_AC_PREFLIGHT;
>     }
>     else if (method.LowerCaseEqualsLiteral("post")) {
What if someone adds upload.onload or similar after this check?
(Just trying to understand this.)

> protected:
>-  nsCOMPtr<nsPIDOMEventTarget> mOwner;
>+  nsRefPtr<nsPIDOMEventTarget> mOwner;
Why this change?
Will revert the nsRefPtr change and make the other change.

If all of those tests fail we don't set the preflight flag, which means that AllowUploadProgress will return false and we won't fire any upload events.
Comment on attachment 340478 [details] [diff] [review]
Prevent server sniffing using upload events. v2

Ok, with the changes.
Attachment #340478 - Flags: review?(Olli.Pettay) → review+
Attached patch Implement cookieless requests (obsolete) — Splinter Review
This implements cookie-less requests.
Attachment #341061 - Flags: superreview?(jst)
Attachment #341061 - Flags: review?(jst)
Comment on attachment 341061 [details] [diff] [review]
Implement cookieless requests

Dave or Biesi, would be great to have one of you look over the netwerk parts of this patch.
Attachment #341061 - Flags: review?(cbiesinger)
Attachment #341061 - Flags: superreview?(jst)
Attachment #341061 - Flags: superreview+
Attachment #341061 - Flags: review?(jst)
Attachment #341061 - Flags: review+
Comment on attachment 341061 [details] [diff] [review]
Implement cookieless requests

Looks good to me. r+sr=jst
Comment on attachment 340478 [details] [diff] [review]
Prevent server sniffing using upload events. v2

Jonas, can you attach a new patch with -p to make the patch more readable?
Attachment #341061 - Flags: review?(cbiesinger) → review+
Synced to tip
Attachment #338425 - Attachment is obsolete: true
Attachment #341168 - Flags: superreview+
Attachment #341168 - Flags: review+
Synced to tip and fix review comments
Attachment #341169 - Flags: superreview+
Attachment #341169 - Flags: review+
Synced to tip and update review comments
Attachment #338426 - Attachment is obsolete: true
Attachment #340478 - Attachment is obsolete: true
Attachment #341170 - Flags: superreview?(jst)
Attachment #341170 - Flags: review+
Attachment #340478 - Flags: superreview?(jst)
Synced to tip
Attachment #341061 - Attachment is obsolete: true
Attachment #341172 - Flags: superreview+
Attachment #341172 - Flags: review+
Attachment #341170 - Flags: superreview?(jst) → superreview+
Checked in. Woohoo!!!

There are some things that remain to do, but I'll file separate bugs on that.
Status: REOPENED → RESOLVED
Closed: 17 years ago → 16 years ago
Resolution: --- → FIXED
Blocks: 416968
No longer depends on: 416968
Blocks: 458087
What's with the magical "2192" in the httpd.js changes?  If that has to do with a line number in httpd.js (ugh if it does), could it be runtime-calculated by throwing and catching an exception just before that to get an actual line number for the calculation, so it doesn't have to be tweaked for every change to the preceding lines in httpd.js?

It's nice to see SJS being used in more Mochitests.  :-)
That's exactly what it is. I didn't really mean to check that in, though. However, if the exception idea works we should do that!
Depends on: 471139
Depends on: 485704
Depends on: 463639
Depends on: CVE-2011-0069
Component: DOM → DOM: Core & HTML