Closed Bug 504016 Opened 16 years ago Closed 9 years ago

XMLHttpRequest requests with arbitrary HTTP methods through HTTP proxy may generate errors

Categories

(Core :: Networking, defect)

Hardware: x86
OS: All
Type: defect
Priority: Not set
Severity: normal

Tracking

RESOLVED WONTFIX

People

(Reporter: gfleischer+bugzilla, Unassigned)

Details

(Keywords: sec-low, Whiteboard: [sg:low] spec/consistency issue)

User-Agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.11) Gecko/2009060214 Firefox/3.0.11
Build Identifier: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.1pre) Gecko/20090713 Shiretoko/3.5.1pre

In Firefox, it is possible to make requests with nearly arbitrary HTTP methods using XMLHttpRequest objects. If an HTTP proxy has been configured, the request will be handled by the proxy. Depending on the proxy implementation, a number of errors may occur; for example, the proxy may not know how to handle the request and may generate an error condition.

Error messages from HTTP proxies often include sensitive network diagnostic information such as client IP addresses, internal hostnames, email addresses, and possibly a copy of the HTTP request. The XMLHttpRequest object can be used to read this information from the response. A remote site may be able to construct such requests in order to reduce a user's privacy. Additionally, if a copy of the HTTP request is included in the error response, it may be possible to read cookies marked as HttpOnly in XSS situations.

Reproducible: Always

Steps to Reproduce:
1. Configure an HTTP proxy.
2. Visit the listed URL.
3. Run the test.

Tested with a squid-cache proxy (squid/3.0.STABLE16) with the default configuration. This is the general case of bug 504013.
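A minimal sketch of the kind of probe described above, assuming a same-origin request through a configured proxy; the method name "BOGUS" is illustrative, and the behavior depends entirely on the proxy in the path:

    // Issue a request with an arbitrary, unrecognized HTTP method. If an
    // HTTP proxy rejects the method, responseText may end up holding the
    // proxy's diagnostic error page instead of the origin server's content.
    var xhr = new XMLHttpRequest();
    xhr.open("BOGUS", "/", true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // Script can read whatever body came back -- including proxy
        // diagnostics such as internal hostnames or an echoed request.
        console.log(xhr.status, xhr.responseText);
      }
    };
    xhr.send(null);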
Component: General → Networking
Product: Firefox → Core
QA Contact: general → networking
Do you have a proposed solution? Is there some way we can differentiate between a response from a proxy (which shouldn't be exposed to web script) and a response from the website itself (which should)?

Should all failure responses cause the content to be unreadable? That doesn't sound like a reasonable solution.

Alternately, we could say this is not a bug and that the proxies should be fixed not to expose sensitive information in responses.
Like with bug 504013, it would be helpful to have more specific details here on the attack mechanism(s) and the info potentially at risk.
(In reply to comment #1)
> Do you have a proposed solution? Is there some way we can differentiate
> between a response from a proxy (which shouldn't be exposed to web script)
> and a response from the website itself (which should)?

No, there probably isn't a good mechanism to differentiate proxy errors. An additional complication is that the proxy may be transparent; in that case, there is no way to distinguish website failures from proxy errors.

> Should all failure responses cause the content to be unreadable? That
> doesn't sound like a reasonable solution.

Denying reads of any non-200 response would be one solution, but it probably isn't reasonable. This was the approach taken with bug 479880, which is basically the inverse of this situation.

> Alternately, we could say this is not a bug and that the proxies should be
> fixed not to expose sensitive information in responses.

Yes, that is certainly one approach. The proxies presumably believe that these error messages provide helpful diagnostic information.
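For concreteness, the "deny reads of non-200 responses" idea would amount to something like the following browser-internal gate; this is invented pseudo-JS, not a real Gecko hook, and it illustrates why the approach is probably unreasonable:

    // Hypothetical policy: content never sees the body of a failed response.
    // This would also hide legitimate error pages from web apps that rely
    // on reading 4xx/5xx bodies, which is the objection above.
    function exposeResponseToContent(response) {
      if (response.status !== 200) {
        response.body = "";   // status stays visible, body is withheld
      }
      return response;
    }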
(In reply to comment #2)
> Like with bug 504013, it would be helpful to have more specific details
> here on the attack mechanism(s) and the info potentially at risk.

Consider an XSS vulnerability on a site where the cookies are protected with HttpOnly. An injected script cannot access such a cookie, because it is only available within the HTTP request. But if a proxy is configured that generates detailed diagnostics for malformed requests, the injected script could issue an intentionally invalid request. When the proxy generates the error message, the cookie would be echoed back in the error page and could be read from script. This is basically an unintentional reimplementation of the TRACE method; see bug 412945, which, although not formulated as an attack, demonstrates the same issue.

Or consider a remote website that wants to profile the user by exposing the local network information returned in the proxy error message. The site would include script that attempts to trigger a proxy error, either by generating an invalid request on the client side or by generating an invalid response from the server. In the case of an invalid response from the server, XMLHttpRequest isn't even required; simply reading content from an iframe would be sufficient.
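A sketch of the HttpOnly scenario, with heavy caveats: the method name, the regex, the exfiltrate() placeholder, and especially the assumption that the proxy echoes the raw request (Cookie header included) in its error page are all illustrative, not tested against any particular proxy:

    // Hypothetical XSS payload. "BOGUS" stands for any method the proxy
    // rejects; exfiltrate() stands in for whatever channel the attacker uses.
    var xhr = new XMLHttpRequest();
    xhr.open("BOGUS", "/", true);          // intentionally invalid request
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // If the proxy's error page echoes the request, the HttpOnly cookie
        // appears in the response body, where script *can* read it.
        var m = xhr.responseText.match(/Cookie:\s*([^\r\n]*)/i);
        if (m) exfiltrate(m[1]);
      }
    };
    xhr.send(null);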
This is essentially a spec-level problem: any browser implementing the specs will suffer from information disclosure if such proxies exist. What's the best approach here? Work with the W3C to change/limit the XHR specification? Contact CERT to help with a round-up of proxy vendors to get them to stop sending this information back?

Perhaps the XHR spec could specify an extra request header to mark content-generated requests; proxy vendors that produce diagnostic responses could then suppress those responses in such cases. Alternately, proxies could be required to add a response header indicating that a response is a proxy-generated error rather than an end-server response, and the browser would then not pass such responses back to the XHR caller.
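To make the second proposal concrete, the browser-side check might look roughly like this; the header name "X-Proxy-Error" and the channel hook are hypothetical, since no such convention exists:

    // Pseudo-Necko gate: a cooperating proxy stamps its own error pages,
    // and the browser withholds those bodies from XHR callers.
    function shouldExposeBodyToXHR(channel) {
      if (channel.getResponseHeader("X-Proxy-Error") === "1") {
        return false;   // surface a generic network error instead
      }
      return true;
    }

The first proposal is the mirror image: the browser would stamp content-initiated requests, and proxies that emit verbose diagnostics would suppress them for those requests.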
Status: UNCONFIRMED → NEW
Ever confirmed: true
Whiteboard: [sg:investigate]
Whiteboard: [sg:investigate] → [sg:low] spec/consistency issue
Group: core-security
It's pretty much the intermediary that needs to fix this.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX