Open Bug 751552 Opened 12 years ago Updated 2 years ago

Posting a file through XMLHttpRequest when using negotiate auth stops at the 401 without authenticating and re-posting

Categories

(Core :: DOM: Core & HTML, defect, P3)

12 Branch
x86_64
Windows 7
defect

Tracking


UNCONFIRMED

People

(Reporter: ffbugzilla, Unassigned)

Details

Attachments

(3 files)

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0
Build ID: 20120420145725

Steps to reproduce:

When posting a file with XMLHttpRequest (I used the File API to send the actual content of the file the user selected in a file input) and using Negotiate auth (Apache + Kerberos in my case), it seems that Firefox first posts to the target, receives a 401 prompting it to authenticate, and then just does nothing. It works fine in Chrome, and it also works fine if I pull up Firebug and re-post (!). The bug is not reproducible when using HTTP Basic auth, and it also works well when posting a regular string instead of the binary data.
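A minimal sketch of the kind of upload code involved. The input element, endpoint URL, and the injectable `XhrCtor` parameter are my own illustrative additions, not from the original report:

```javascript
// Sketch only: post the File the user selected in a file input.
// The element id, URL, and XhrCtor parameter are illustrative.
function postSelectedFile(input, url, XhrCtor) {
  XhrCtor = XhrCtor || XMLHttpRequest; // injectable so the sketch is testable
  var file = input.files[0];           // File object from <input type="file">
  var xhr = new XhrCtor();
  xhr.open("POST", url, true);
  // Sending the raw File/Blob body is what fails to be re-posted after
  // the 401 Negotiate challenge; a plain string body works.
  xhr.send(file);
  return xhr;
}
```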

Steps to reproduce:
 1. Download the attached PHP file and put it on a server with Kerberos/Negotiate auth.
 2. Open the URL in Firefox and post a file.
 3. Observe that no response is received.
We have also seen this type of issue when issuing a cross-domain POST request with 'withCredentials' set to 'true' and server-side NTLM auth required.

Using Network Monitor or Fiddler2 we see the preflight OPTIONS request being sent to the server, but after the initial 401 response from the server requesting that the client perform NTLM negotiation, the browser simply doesn't bother to negotiate and the call fails. Settings in 'about:config' have been configured so that cross-domain requests should be supplied the credentials: 'network.automatic-ntlm-auth.trusted-uris' and 'network.negotiate-auth.trusted-uris'. I'm not sure of the technical details, but GET works fine; does a GET request attempt an OPTIONS preflight? It doesn't appear to...

As a test we tried the following code in Chrome and Firefox with network monitoring and discovered that Chrome authenticates for the preflight in order to complete the OPTIONS request, and Firefox does not:


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
	<head>
		<title>XMLHttpRequest Cross Domain Post</title>
	</head>
	<body>
		<script>
			var url = "http://myWebServiceServer/InstantMessagingService/chat/message/send";
			var data = '{ "remoteUserUri" : "sip:foo.bar@mydomain.com", "message" : "This is my message" }';			
			var request = new XMLHttpRequest();						
			request.open("POST", url, true);
			request.withCredentials = true;
			request.setRequestHeader("Content-Type", "application/json");					
			request.send(data);			
			
			console.log(request);			
		</script>
	</body>
</html>
Attached file POST JSON Fail Example
Example HTML/Javascript to show Firefox bug in POST (OPTIONS) cross domain with negotiation.
I can confirm this issue with Firefox 42.0, though I do not use Ajax; I have a plain HTML form. Firefox tries to upload, Tomcat rejects with Negotiate, Firefox aborts the request. I double-checked the action with curl, and curl indeed re-posts the data successfully. Recent Chrome fails too. IE11 behaves differently here: because the domain name is in the intranet zone, it performs preemptive authentication and avoids the problem.

This issue can also be reproduced by using Basic auth and deleting the auth cache. Firefox will fail too, instead of showing the authentication dialog.

How can we make progress here?
Having come across this issue in a different way recently, I have to admit that I think Firefox was correct in its behaviour which Chrome now seems to emulate. 

According to the spec, negotiation is not allowed on an OPTIONS request.

We've solved the problem in our own system by configuring all OPTIONS requests to be allowed anonymous access, so they can return the accepted headers, access origins, etc. to the caller, which can then issue the GET/POST with negotiation occurring.
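For reference, one way to express such a configuration on Apache 2.4 with mod_auth_gssapi (a sketch only; the location path and realm name are illustrative, and the original poster's setup may differ):

```apache
# Sketch: exempt CORS preflights from authentication so they can return
# a 2xx with the CORS headers; everything else requires Kerberos auth.
<Location "/InstantMessagingService">
    AuthType GSSAPI
    AuthName "Kerberos Login"
    Require valid-user
    # Preflight OPTIONS requests are sent without credentials per the
    # Fetch spec, so they must be allowed through anonymously.
    <If "%{REQUEST_METHOD} == 'OPTIONS'">
        Require all granted
    </If>
</Location>
```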
Hello Michael, as per Peter's comment, the issue is the same as on Chrome and seems to be resolved after changing some configuration. Can you please retest on the latest released version of Firefox (43.0.4) and let me know your findings? I will change the bug status accordingly. Thanks!
Component: Untriaged → XML
Flags: needinfo?(ffbugzilla)
Product: Firefox → Core
(In reply to Kanchan Kumari QA from comment #5)
> Hello Michael, As per Peter's comment, issue is same as on chrome and seems
> to be resolved after changing some configuration. Can you please retest on
> the latest released version of Firefox(Firefox 43.0.4) and let me know your
> finding. I will change the bug status accordingly. Thanks!

Though it is not quite clear what Peter has changed in Firefox: obviously nothing. I will retry with the newest version of Firefox on Windows 7. Please note that the HTML form does not perform any OPTIONS request, just a plain multipart POST request.
(In reply to Kanchan Kumari QA from comment #5)
> Hello Michael, As per Peter's comment, issue is same as on chrome and seems
> to be resolved after changing some configuration. Can you please retest on
> the latest released version of Firefox(Firefox 43.0.4) and let me know your
> finding. I will change the bug status accordingly. Thanks!

Just tried to perform the upload with "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0" through an HTML form: Error: connection aborted.
Component: XML → DOM
Per https://fetch.spec.whatwg.org/#cors-preflight-fetch Chrome is wrong. A preflight fetch needs to have a response code in the range 200 to 299. HTTP authentication, HTTP redirects, etc. are not handled for the preflight.

Michael, Peter did not change Firefox, he fixed his server setup. Do you have a test somewhere demonstrating the HTML <form> behavior? That seems like a separate bug though.

(Also, there is no prompting for cross-origin requests with XMLHttpRequest, by design. You need to implement the negotiation in your application.)
(In reply to Anne (:annevk) from comment #8)
> Michael, Peter did not change Firefox, he fixed his server setup. Do you
> have a test somewhere demonstrating the HTML <form> behavior? That seems
> like a separate bug though.

Hi, 

the form is generated by Tomcat HTMLManagerServlet: https://github.com/apache/tomcat60/blob/trunk/java/org/apache/catalina/manager/HTMLManagerServlet.java#L1156-L1185
The entire Manager app is protected by a SPNEGO authenticator. Communication with the manager is stateless, no sessions involved, i.e., every request has to be authenticated. I can of course provide a private Wireshark capture. It works with curl because it sends an "Expect: 100-continue" header.
I'm sorry but unless a Tomcat expert can weigh in here, it seems like there's nothing else we can do. If this were to be reproducible with a smaller test case we can revisit, of course.
(In reply to Andrew Overholt [:overholt] from comment #10)
> I'm sorry but unless a Tomcat expert can weigh in here, it seems like
> there's nothing else we can do. If this were to be reproducible with a
> smaller test case we can revisit, of course.

I do not understand your statement. What exactly is the problem?
I think I can prepare a small test case for you:
a simple PHP script with a form upload, protected by a SPNEGO module (available on GitHub). This will still require a working Kerberos setup. It will run on any Unix-like OS with Apache 2.2+.

An alternative would be the same script without Kerberos; luckily, SSPI will fall back to NTLM when Kerberos is not available. This would work for you.

What do you say?
I mean: please provide simple, non-Tomcat-based steps to reproduce and I'll test it for you. The problem is that it seems like this isn't a Firefox bug.
Flags: needinfo?(ffbugzilla)
(In reply to Andrew Overholt [:overholt] from comment #12)
> I mean: please provide a simple non-Tomcat-based steps to reproduce and I'll
> test it for you. The problem is that it seems like this isn't a Firefox bug.

Fair enough. I have a PHP script now and am comparing the behavior/packets with Wireshark. I will let you know how you can safely reproduce this issue.
(In reply to Andrew Overholt [:overholt] from comment #12)
> I mean: please provide a simple non-Tomcat-based steps to reproduce and I'll
> test it for you. The problem is that it seems like this isn't a Firefox bug.

Hi Andrew, it took me several hours to figure out what is happening and why Firefox is failing.
First of all, I wrote a PHP script which contains the very same HTML form as the Tomcat Manager to upload files, added the SPNEGO authenticator to .htaccess, and limited "Require valid-user" to POSTs only. That script fully worked with Firefox. I saw in Fiddler and Wireshark that Firefox sends off the entire file, Apache Web Server reads it and then produces a 401, and Firefox retries with the Authorization header and the request succeeds.
So it seemed that there must be some bug in Tomcat and not in Firefox per se. I have debugged the entire code path in Tomcat 6.0.44 which processes the request Firefox sends. Additionally, I searched the web for similar issues and found this: http://stackoverflow.com/q/26216335/696632. The request is prematurely aborted and Firefox stops. Tomcat tries to validate the request as soon as possible (e.g., from the auth headers) and aborts the request (with a proper response) even though the client is still writing. A predefined value for maxSwallowSize (2 MiB) is exceeded and an IOException is thrown. This exception is written out as "500 Internal Server Error", but Firefox keeps sending its body and ignores the response. Meanwhile Tomcat closes the connection while Firefox is still writing, which means Firefox does not read the response until it has fully sent the data off to the server. This behavior was introduced in 6.0.44 (see http://tomcat.apache.org/tomcat-6.0-doc/changelog.html): "When applying the maxSwallowSize limit to a connection read that many bytes first before closing the connection to give the client a chance to read the response. (markt)"
Digging a bit deeper I found a thread on the Tomcat Users ML (http://grokbase.com/t/tomcat/users/113ef8srx9/how-to-prevent-abort-the-processing-of-the-multipart-request-body) which depicts the same behavior. The upshot is that browsers have a problem with aborted connections. This was introduced due to CVE-2014-0230. Tomcat 7 introduced the attributes swallowAbortedUploads (http://tomcat.apache.org/tomcat-7.0-doc/config/context.html#Common_Attributes) and maxSwallowSize (http://tomcat.apache.org/tomcat-7.0-doc/config/http.html). Worth mentioning is this note: "Not reading the additional data will free the request processing thread more quickly. Unfortunately most HTTP clients will not read the response if they can not write the full request." Pity.
To verify that Tomcat's behavior is appropriate/correct, I have crafted the browser requests with curl 7.43.0: two requests, one below maxSwallowSize and one above.
below: curl sees that the server stopped reading and sent a 401; curl re-POSTs with the proper header. 200 OK.
above: curl reports that the connection has been closed by the server (which Tomcat does when the size is exceeded), creates a new one, and performs auth with that POST request. 200 OK.

It seems, after all, that Firefox has some shortcomings which are merely triggered by SPNEGO auth but are not related to it. I am quite certain that one could craft a test case where some simple header is missing and Firefox would fail too.

I can provide curl output and Wireshark pcaps privately.
I think I encountered and reproduced this bug with Firefox 52.0 (64-bit) on Arch Linux. I also found a workaround that I implemented in my web app.

In the attachment you'll find a test server and an HTML form that reproduce the problem 100% of the time.

For some reason, if the uploaded file is larger than ~4.5 MB, Firefox fails to re-post it with the appropriate Authorization header when the server responds with 401. And it doesn't add the header initially, even though credentials were provided with xhr.open().

Workaround: manually set Authorization header with xhr.setRequestHeader()
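A sketch of that workaround, assuming HTTP Basic auth (the URL, credentials, and the injectable `XhrCtor` parameter are illustrative, not from the attached test case):

```javascript
// Sketch only: send the Authorization header up front so the server
// never has to challenge the large request body with a 401.
// URL, credentials, and the XhrCtor parameter are illustrative.
function uploadWithExplicitAuth(file, url, user, pass, XhrCtor) {
  XhrCtor = XhrCtor || XMLHttpRequest; // injectable so the sketch is testable
  var xhr = new XhrCtor();
  xhr.open("POST", url, true, user, pass);
  // Preemptive Basic credentials; avoids relying on the 401 retry
  // that Firefox skips for large bodies.
  xhr.setRequestHeader("Authorization",
      "Basic " + btoa(user + ":" + pass));
  xhr.send(file);
  return xhr;
}
```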

By comparison, Chromium 56.0.2924.87 (64-bit) handles it as expected.

P.S. If the uploaded file is smaller, it works fine.
P.P.S. Test it in a new Private Window each time, so that Firefox doesn't memorise credentials.
Thanks Andrew, it sounds like we have a limit for cloning request bodies that is rather low nowadays. You should also be aware that Firefox clones the resource and transmits it twice, wasting a bit of the user's and your server's bandwidth, which you can avoid by authenticating straight away as you did. Nevertheless, it seems fair to waste more bandwidth, especially with images now easily being that size.

Other Andrew, this seems like it should be relatively simple to fix if we find someone who can write a patch.
Priority: -- → P3
Component: DOM → DOM: Core & HTML
Severity: normal → S3