Closed Bug 1754066 (CVE-2022-28283) Opened 3 years ago Closed 3 years ago

Missing security checks for fetching `sourceMapURL` for CSS/JS from devtools

Categories

(DevTools :: Framework, defect, P2)

Desktop
All

Tracking

(firefox-esr91 wontfix, firefox97 wontfix, firefox98 wontfix, firefox99 fixed)

RESOLVED FIXED
99 Branch

People

(Reporter: Gijs, Assigned: jdescottes)

References

Details

(Keywords: sec-moderate, Whiteboard: [keep hidden while bug 1756536 is][adv-main99+])

Attachments

(2 files)

In bug 1746089 we're looking into an auth issue with source maps. This prompted me to look into source map fetching a bit. I'm filing this bug because I can't find any security checks.

In particular, I think https://sitea.com/ shouldn't be able to say its source map is at file:///etc/passwd or chrome://browser/content/. I don't know what restrictions are specced for this, but if these are CORS requests they should probably adhere to CORS headers (I guess? This might need checking against a spec, if one exists - are source maps allowed to be on https://siteb.com/ in this case without any CORS setup in place?), they should send the right referrer headers (subject to the including page's referrer policy and the like), etc.

Julian suggests the actual fetch happens in https://searchfox.org/mozilla-central/rev/4615b544a0f7166233c409c619b426c4025467a7/devtools/client/shared/source-map/worker.js#154 from a chrome-privileged worker. Because it runs with the system principal, I think it can do whatever it likes, and none of the relevant CORS / referrer info will be included. That seems potentially bad (as in, combined with timing attacks it can probably at least determine some privacy-sensitive stuff, or worse).
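
For illustration, the problematic request boils down to something like this (a minimal sketch, not the actual worker code; the real implementation goes through the source-map library):

    // Sketch: inside the chrome-privileged worker the load amounts to a bare
    // fetch. Running with the system principal, it will happily load file:///
    // or chrome:// URLs, and attaches no CORS mode or referrer from the page.
    async function fetchSourceMap(sourceMapURL) {
      const response = await fetch(sourceMapURL);
      return response.text();
    }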

Now, hopefully there are security checks somewhere between there and where we determine the source map URL - but I'm not sure where that is, and some very quick searching isn't turning it up. The source map URL for CSS is tracked on the stylesheet, and originally comes from a header or a comment in the stylesheet, but the platform code that stores this info doesn't appear to do any checking of its own.

:ochameau, do you know if I'm just not finding the right place where we do do the security checks? If not, what would be involved in getting some checks in place, and passing the right principal to the code doing the fetching?

I checked that with a stylesheet such as the following, served by a sample website running on localhost:

div {
  color: #ff0066; }

span {
  background-color: #EEE; }

/*# sourceMappingURL=file:///etc/passwd */

the worker linked above will issue a fetch to file:///etc/passwd.

As noted in bug 1746089, the fact that we're not using the right principal also causes that bug: we'll not be using the same HTTP authentication cache as the main webpage, so we re-request auth.

Summary: Potentially missing security checks for fetching `sourceMapURL` for CSS/JS from devtools → Missing security checks for fetching `sourceMapURL` for CSS/JS from devtools

Small example at https://test-sourcemap-local.glitch.me/ - the red div uses sourcemapped styles.

This one points to file:///Applications/MAMP/htdocs/sourcemaps.css.map, which happens to be where I have my local copy of the sourcemap for this project. So on my machine, DevTools shows correct sourcemap info on this website, while you should not see anything unless you create the file yourself at the same path with the content from https://test-sourcemap-local.glitch.me/sourcemaps.css.map

FWIW it seems that Chrome also resolves my local stylesheet, and Safari doesn't.

The correct variant of this test page, using the sourcemap from the server and not from file:// is at https://test-sourcemap-regular.glitch.me/

Also, to my knowledge, we don't expose the result of this request to the content process in any way. I guess timing attacks would also require the attacker to know when we start/stop fetching the file? I hope crafting an attack from this vulnerability is actually pretty difficult, but I agree that in general we should avoid such requests (and I don't know much about that topic anyway).

Regarding sourcemap origins, I don't see much in the spec proposal at https://sourcemaps.info/spec.html
It seems there are no restrictions on which URLs can be used:

> Note: <url> is a URL as defined in RFC3986; in particular, characters outside the set permitted to appear in URIs must be percent-encoded.
>
> Note: <url> maybe a data URI. Using a data URI along with “sourcesContent” allow for a completely self-contained source-map.
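
For illustration, a completely self-contained map per that note could look like this made-up minimal example ("AAAA" is a valid but trivial mappings entry), base64-encoded into the stylesheet comment as /*# sourceMappingURL=data:application/json;base64,... */:

    {
      "version": 3,
      "file": "styles.css",
      "sources": ["styles.scss"],
      "sourcesContent": ["div {\n  color: #ff0066;\n}\n"],
      "names": [],
      "mappings": "AAAA"
    }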

I think that nowadays, Chromium fetches the source maps via this code:
https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/front_end/core/sdk/PageResourceLoader.ts;l=223-253?q=loadResource
I simplified the code to focus on the logic:

    const eligibleForLoadFromTarget = parsedURL && parsedURL.scheme !== 'file' && parsedURL.scheme !== 'data' && parsedURL.scheme !== 'devtools';
    if (eligibleForLoadFromTarget) {
      try {
        if (initiator.target) {
          const result = await this.loadFromTarget(initiator.target, initiator.frameId, url);
          return result;
        }
        const frame = FrameManager.instance().getFrame(initiator.frameId);
        if (frame) {
          const result = await this.loadFromTarget(frame.resourceTreeModel().target(), initiator.frameId, url);
          return result;
        }
      } catch (e) {}
      console.warn('Fallback triggered', url, initiator);
    }

    const result = await MultitargetNetworkManager.instance().loadResource(url);
    ...
    return result;

It tries to load the URL from the target/frame only if the URL's scheme isn't file/data/devtools.
I imagine that loading from the target/frame ensures the URL is loaded as if it were requested by the debugged page, with the same credentials/headers.
Otherwise, if loading the URL this way failed, or if it is one of those schemes, it falls back on a more privileged network API.
"Multitarget" suggests that it can issue requests for all targets, and so for any origin.

This MultitargetNetworkManager.loadResource only appends User-Agent and Cache-Control headers. Most importantly, it no longer tries to execute the request against a given context/frame/target.
https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/front_end/core/sdk/NetworkManager.ts;l=1436-1458?q=MultitargetNetworkManager

  async loadResource(url: string) {
    Host.ResourceLoader.load(url, headers, () => {...});

https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/front_end/core/host/ResourceLoader.ts;l=259?q=ResourceLoader

load = function(url, headers, callback) {
  ...
  loadAsStream(url, headers, stream, mycallback);
...
loadAsStream = function(url, headers, stream, callback) {
  ...
  InspectorFrontendHostInstance.loadNetworkResource(url, rawHeaders.join('\r\n'), streamId, finishedCallback);

https://source.chromium.org/chromium/chromium/src/+/main:out/Debug/gen/third_party/devtools-frontend/src/front_end/core/host/InspectorFrontendHost.js;l=138-159?q=loadNetworkResource

  loadNetworkResource(url, headers, streamId, callback) {
    fetch(url).then((result) => result.text()).then(function(text) {
      ...
    });

In the end, this fallback code is similar to what we are doing today. It uses the frontend's fetch method without trying to do anything special, except setting the two headers I mentioned.

But we don't have the loadFromTarget logic:
https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/front_end/core/sdk/PageResourceLoader.ts;l=288-306?q=loadFromTarget

  private async loadFromTarget(target, frameId, url) {
    ...
    const resource =
        await networkManager.loadNetworkResource(frameId, url, {disableCache: true, includeCredentials: true});

This issues a CDP request to load the resource as if it were loaded from the webpage.
The CDP method is implemented in C++ here:
https://source.chromium.org/chromium/chromium/src/+/main:content/browser/devtools/protocol/network_handler.cc;l=2948;drc=76c8d618a15f817eb3ee551220b22be71f789b4c;bpv=0;bpt=1

In order to get the credentials right, we would need to do something similar. To support remote debugging nicely, we would have to execute the request from the backend.
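
For reference, the underlying CDP message issued by loadNetworkResource looks roughly like this (sketched from the protocol documentation; the frameId value is a placeholder):

    {
      "id": 1,
      "method": "Network.loadNetworkResource",
      "params": {
        "frameId": "<frame id of the debugged page>",
        "url": "https://sitea.com/styles.css.map",
        "options": { "disableCache": true, "includeCredentials": true }
      }
    }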

As a side note regarding file:// URLs, Chromium used to reject them by mistake because of an unrelated refactoring.
The regression came from this patch:
https://codereview.chromium.org/16199002
And fixed by this other one:
https://chromiumcodereview.appspot.com/23102017
(Note that the code involved here no longer exists. Lots of code has been refactored.)

I can easily imagine some Node tooling emitting sourcemaps at local file:// URLs in order to avoid hosting them on the HTTP server.
That said, it doesn't remove the need for a proper review of the possible issues with allowing this.

I find the local file:// thing troubling, but I can see the use case you're making. We can argue about that. But by doing no checks at all we also allow chrome:// URLs or others that could cause serious unintended effects. Hopefully they don't, but we can't guarantee all such URLs are safe, either.

Keywords: sec-moderate

From Alex's comment, it sounds like we should try to move sourcemap fetching to the DevTools server and fetch the sourcemap as the page itself, with the correct principal, and potentially add a higher-privilege fallback in the parent process, but still on the server. That solution would be compatible with remote debugging, which is not really the case for the current one.

However, the issue we need to fix here is not just about fetching the initial sourcemap; it also applies to all the files listed in the sourcemap. For instance, if I modify the example mentioned above to list file:///etc/passwd in the sources property of the sourcemap, we will actually show the content of this file in the StyleEditor.
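
Concretely, a sourcemap as small as the following is enough to trigger this (a made-up minimal example; the mappings value is just a placeholder):

    {
      "version": 3,
      "file": "sourcemaps.css",
      "sources": ["file:///etc/passwd"],
      "names": [],
      "mappings": "AAAA"
    }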

So in theory any resource load linked to sourcemaps should be done as the page, on the DevTools server. That is the solution we had initially, but we moved it to a DevTools client worker years ago for performance reasons (Bug 1289570, Bug 1349354). To me it feels risky to change the architecture back to a server-side one, as we might end up with the same issues as the original server implementation. At least I don't feel confident about it.

For shorter-term fixes, we could try to implement:

  • Add manual checks on the URLs and disallow file/chrome/resource. Maybe except in certain conditions? E.g. if the original CSS/JS file itself was served from such a protocol? (See the sketch after this list.)
  • Use the correct principal. But this will not work with remote debugging, and I need to check whether we can pass a principal to a worker.
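
To make the first option concrete, such a check could look roughly like this (a hypothetical helper, not a landed patch; where exactly it would run, and the exception for matching protocols, are the open questions above):

    // Hypothetical check: reject sourcemap URLs on privileged/local schemes.
    const BANNED_SCHEMES = ["file:", "chrome:", "resource:"];

    function isAllowedSourceMapURL(sourceMapURL, sheetURL) {
      let scheme;
      try {
        // Resolve relative sourcemap URLs against the CSS/JS file's URL.
        scheme = new URL(sourceMapURL, sheetURL).protocol;
      } catch (e) {
        // Unparseable URL: reject.
        return false;
      }
      // Possible exception: allow a banned scheme when the original CSS/JS
      // file itself was served from that same scheme.
      if (new URL(sheetURL).protocol === scheme) {
        return true;
      }
      return !BANNED_SCHEMES.includes(scheme);
    }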

(In reply to Julian Descottes [:jdescottes] from comment #7)

> From Alex's comment, it sounds like we should try to move sourcemap fetching to the DevTools server and fetch the sourcemap as the page itself, with the correct principal, and potentially add a higher-privilege fallback in the parent process, but still on the server. That solution would be compatible with remote debugging, which is not really the case for the current one.

This is really just a detail, but I'm wondering whether it is worth keeping a fallback that reads the URL from the frontend.
I imagine that a file:// URL would only be accessible from the client and not the server.
This would match what Chromium is doing.

> However, the issue we need to fix here is not just about fetching the initial sourcemap; it also applies to all the files listed in the sourcemap. For instance, if I modify the example mentioned above to list file:///etc/passwd in the sources property of the sourcemap, we will actually show the content of this file in the StyleEditor.

I hope that both the sourcemap and the original sources are fetched via the worker, through a similar codepath.
That seems to be the case for the debugger.

> So in theory any resource load linked to sourcemaps should be done as the page, on the DevTools server. That is the solution we had initially, but we moved it to a DevTools client worker years ago for performance reasons (Bug 1289570, Bug 1349354). To me it feels risky to change the architecture back to a server-side one, as we might end up with the same issues as the original server implementation. At least I don't feel confident about it.

I have some other arguments, related to breakpoints, in favor of a server-side source map implementation.
It isn't clear what was wrong with the old server implementation; I can't find any actual argument beyond hearsay in these bugs.
I think there were issues around unsafeSynchronize usage that might relate to source maps.
But that was about having breakpoints set correctly before the page runs.

The issue with client-side source maps is that we update the breakpoint positions well after the page has reloaded.
So if the content changes, you have to reload twice to hit the right positions for breakpoints set on code that runs early.

Having said that, it would be a quarter-long project on its own, and it sounds unrelated to the security issue of loading special schemes.

> For shorter-term fixes, we could try to implement:
>   • Add manual checks on the URLs and disallow file/chrome/resource. Maybe except in certain conditions? E.g. if the original CSS/JS file itself was served from such a protocol?

If we restrict the checks to URLs related to source maps, I think we are safe banning chrome and resource, as we don't support source maps for browser resources. We might want to, but AFAIK no one tries to use source maps in mozilla-central.
That might be a first step to at least avoid issues around privileged URLs.
Otherwise, I suspect that some tooling might serve the original file over http:// while keeping the generated one on file://. So I'm not sure we would have an easy way to restrict file:// URLs.

>   • Use the correct principal. But this will not work with remote debugging, and I need to check whether we can pass a principal to a worker.

That sounds unlikely to be possible, and hard to do.
I suspect an nsIPrincipal object can't be passed between the main thread and the worker thread.
Also, the frontend doesn't typically have a reference to the debugged tab, nor to its principal object.
This would require some local-tab-only code to magically fetch it.

We can investigate a bit, but it might be easier to implement a server-side fetch request.
We could probably reuse or fork this existing method:
https://searchfox.org/mozilla-central/rev/38652b98c6dd3bf42403eeb8c5305902b9a6e938/devtools/server/actors/webconsole.js#1821

This would probably break the load of all privileged schemes.
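
To sketch what the server-side option could look like (illustration only: NetUtil and asyncFetch are existing Gecko helpers, but the choice of security flags and the stream handling here are assumptions that would need review):

    const { NetUtil } = ChromeUtils.import("resource://gre/modules/NetUtil.jsm");

    // Fetch `url` with the debugged page's principal instead of the system
    // principal, so credentials and security checks match the page's.
    function fetchAsPage(url, pagePrincipal) {
      return new Promise((resolve, reject) => {
        NetUtil.asyncFetch(
          {
            uri: url,
            loadingPrincipal: pagePrincipal,
            // Illustrative flag choice; the right flags would need review.
            securityFlags:
              Ci.nsILoadInfo.SEC_ALLOW_CROSS_ORIGIN_SEC_CONTEXT_IS_NULL,
            contentPolicyType: Ci.nsIContentPolicy.TYPE_OTHER,
          },
          (stream, status) => {
            if (!Components.isSuccessCode(status)) {
              reject(new Error("Failed to load " + url));
              return;
            }
            resolve(NetUtil.readInputStreamToString(stream, stream.available()));
          }
        );
      });
    }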

Thanks for the comment.

> I imagine that a file:// URL would only be accessible from the client and not the server.

Maybe? I was assuming the other way around; I would find it surprising for a server to return a source with references to file paths that only make sense on the machine that is remotely debugging the browser. I guess some tooling might be able to do that, but it feels odd.

> I hope that both the sourcemap and the original sources are fetched via the worker

They are. I was just stressing that it's not only about swapping "one call" for a server-side fetch; it's really the whole sourcemap solution that is impacted.

> It isn't clear what was wrong with the old server implementation; I can't find any actual argument beyond hearsay in these bugs.

I'm also not sure. I don't know whether the old server implementation was also using a worker. I feel like a "server parent-process worker" based solution should have performance identical to the one we have right now? Or was it related to passing the sources back from server to client?

> Having said that, it would be a quarter-long project on its own, and it sounds unrelated to the security issue of loading special schemes.

Right, that's my main worry. It would be a sizeable project, with uncertainty.

> Also, the frontend doesn't typically have a reference to the debugged tab, nor to its principal object.

But it should be possible to retrieve the browsing context the request comes from, and from there its principal? That wouldn't work for remote debugging, but at least that's what I had in mind. Of course, if passing a principal to a worker is a no-go, there's no point in doing this.
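
For the local-tab case, what I had in mind is roughly this (a sketch; `tab` is a hypothetical reference to the frontend's local tab, which is exactly the part that doesn't exist for remote debugging):

    // Parent process, local tab only: read the principal of the currently
    // loaded document in the debugged tab.
    const principal =
      tab.linkedBrowser.browsingContext.currentWindowGlobal.documentPrincipal;
    // Whether this principal can then be handed to the source-map worker is
    // the open question above.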

> If we restrict the checks to URLs related to source maps, I think we are safe banning chrome and resource, as we don't support source maps for browser resources. We might want to, but AFAIK no one tries to use source maps in mozilla-central.
> That might be a first step to at least avoid issues around privileged URLs.

Sounds good. Let's start rejecting chrome & resource.

Assignee: nobody → jdescottes
Severity: -- → S3
Status: NEW → ASSIGNED
Priority: -- → P2
Group: firefox-core-security → core-security-release
Status: ASSIGNED → RESOLVED
Closed: 3 years ago
Resolution: --- → FIXED
Target Milestone: --- → 99 Branch
Blocks: 1756536

Is this worth uplifting? Please nominate if so.

Flags: needinfo?(jdescottes)
Whiteboard: [keep hidden while bug 1756536 is]
Blocks: 1756763
QA Whiteboard: [post-critsmash-triage]
Flags: qe-verify-

Not going to do any uplift here, since the proper fix should be in Bug 1756763.

Flags: needinfo?(jdescottes)
Whiteboard: [keep hidden while bug 1756536 is] → [keep hidden while bug 1756536 is][adv-main99+]
Attached file advisory.txt
Alias: CVE-2022-28283
Group: core-security-release
