Open Bug 1293420 Opened 5 years ago Updated 7 months ago

Should we disable mix-blend-mode because it can lead to a history leakage attack?

Categories

(Core :: CSS Parsing and Computation, defect, P3)


Tracking Status
firefox57 --- wontfix

People

(Reporter: tanvi, Unassigned)

References

(Blocks 1 open bug)

Details

(Keywords: csectype-disclosure, privacy, Whiteboard: [userContextId])

See http://lcamtuf.blogspot.com/2016/08/css-mix-blend-mode-is-bad-for-keeping.html

It looks like the History leakage attack is back.
dbaron, do you have thoughts on this?
Flags: needinfo?(dbaron)
I don't think so; I think we should instead be looking into other approaches to avoiding history disclosure.
Flags: needinfo?(dbaron)
Summary: Should we disable mixed-blend-mode because it can lead to a History leakage attack? → Should we disable mix-blend-mode because it can lead to a history leakage attack?
e.g., having a link on a page with URL S to a page with URL T be considered visited iff:
 * S and T are same-domain, and the user has previously visited T, or
 * the user has previously visited T by following a link from a page S2 that is same-domain with S
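The heuristic above can be sketched in Python. This is an illustration only; the function names and the set-based history representation are assumptions, not Firefox internals, and `registrable_domain` is a simplified stand-in for a real eTLD+1 lookup:

```python
from urllib.parse import urlparse

def registrable_domain(url):
    # Simplified stand-in for a real registrable-domain (eTLD+1) lookup.
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def link_appears_visited(page_url, target_url, visited, followed_links):
    """followed_links: set of (source_url, target_url) pairs the user clicked."""
    same_domain = registrable_domain(page_url) == registrable_domain(target_url)
    if same_domain and target_url in visited:
        return True
    # The user previously visited T by following a link from a page S2
    # that is same-domain with S.
    return any(dst == target_url and
               registrable_domain(src) == registrable_domain(page_url)
               for src, dst in followed_links)
```

Under this rule, a cross-domain link only ever shows as visited if the user reached the target from that same domain before.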
(In reply to David Baron :dbaron: ⌚️UTC-7 (review requests must explain patch) from comment #3)
> e.g., having link on a page with URL S to a page with url T be considered
> visited iff:
>  * S and T are same-domain, and the user has previously visited T, or
>  * the user has previously visited T by following a link from a page S2 that
> is same domain with S

With the above suggestion, we are limiting the user experience.  I'd like the browser to be able to tell if I've visited a site, while keeping the server in the dark.

I may have visited this bug via a bugzilla query.  But then later I may be looking at an etherpad or google doc that has a list of security bugs.  I'd like to be able to visually see which of those security bugs I've never visited before, so I could click and read those first.
Could we consider making the visited-links pref (layout.css.visited_links_enabled) more prominent by placing a checkbox in about:preferences#privacy? In my testing, disabling this pref prevents the attack.

Another option could be to disable it when tracking protection is on. The pref also isn't needed in private browsing mode, which already has the same effect.

From my understanding there are other active attacks on history that still work using visited links, and there will likely be more in the future. Perhaps the default behavior could be expanded from dbaron's suggestion in comment 3 to be slightly more lax.

Having a link on a page with URL S to a page with URL T be considered visited if:
 * S and T are same-domain, and the user has previously visited T, or
 * the user has previously visited T and
   * the user has previously visited T2 which is same domain to T from a page S2 that is same domain with S

This should mean that once the user has clicked a link from Google Docs to Bugzilla, any link from Google Docs to a Bugzilla URL can be considered visited.

However I'm not sure how complex that implementation would have to be.
(In reply to Jonathan Kingston [:jkt] from comment #5)
> From my understanding there are other active attacks on history that still
> works using visited links and likely will be more in the future. Perhaps the
> default behavior could be expanded from dbarron's suggestion in #3 to be
> slightly more lax.
> 
I don't know of any other attacks. I asked Yan Zhu via Twitter and she said that the attack she had come up with was fixed by Firefox and Chrome some months ago.

Before implementing heuristics on when a link appears visited vs when it doesn't, I'd like to first understand why we can't prevent a website from learning the color of the link.
There are open bugs on other attacks in bugzilla somewhere...
> I'd like to first understand why we can't prevent a website from learning the color of the link

Did you read the linked article?

We can and do prevent a website from just learning the color of the link, I believe.  But the attack in question is an attack that relies on "cooperation" from the user.  The basic idea of the attack is that you take two links to the same page and make one of them visible to the user only if it's visited and the other visible only if it's unvisited: you can do that because you can control the color and background of the links.  So you can style them like so:

  #link1, #link2 { background: black; }
  #link1:visited, #link2:link { color: black; }
  #link1:link, #link2:visited { color: white; }

Now if the link is visited, #link2 will be visible and #link1 will not be.  Otherwise, #link1 will be visible and #link2 will not be.

You then ask the user to click on the thing they see, under the guise of a game or something (the linked article uses a whack-a-mole game, which is all about clicking things you see).  Each click tells you whether a URL is visited.

That's the basic outline of the attack.  The rest is details to allow testing more than one URL per click by having more than two clickable areas, so each click gives you more than one bit of information.
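The multi-bit escalation can be sketched abstractly: with N tracked URLs, the page renders 2**N click targets, styled so that only the target whose index equals the visited bitmask is visible, and a single click then reveals N bits at once. This Python sketch of the encoding is illustrative and is not the article's actual code:

```python
def visible_target_index(visited_bits):
    """Which of the 2**N click targets the victim can actually see,
    given one visited/unvisited bit per tracked URL."""
    index = 0
    for bit in visited_bits:
        index = (index << 1) | bit
    return index

def bits_from_click(index, n_urls):
    """Attacker side: recover every URL's visited bit from one click."""
    return [(index >> shift) & 1 for shift in range(n_urls - 1, -1, -1)]
```

The two functions are inverses, so the coordinates of one "cooperative" click decode back into the full visited/unvisited vector.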
Priority: -- → P3
Duplicate of this bug: 1644415
Duplicate of this bug: 1651270
Duplicate of this bug: 1651272

I cannot reproduce this on Nightly Linux, with WR enabled. As in, the attack doesn't reproduce / gives wrong results.

I would have expected bug 1632765 to render this attack useless. Is there a platform combination where the attack works?

See Also: → 1632765

(In reply to Emilio Cobos Álvarez (:emilio) from comment #12)

> I cannot reproduce this on Nightly Linux, with WR enabled. As in, the attack doesn't reproduce / gives wrong results.

> I would have expected bug 1632765 to render this attack useless. Is there a platform combination where the attack works?

The attack doesn't rely on timing; it just uses the visited link colors. You need to click on the mole you see. It's not very clear that this is what you need to do: clicking elsewhere on the page will return a different result depending on where you click, because the 256 images are in a 16x16 grid.

There are eight URLs used:

https://www.cnn.com/
https://news.ycombinator.com/
https://www.reddit.com/
https://www.amazon.com/
https://twitter.com/lcamtuf
https://www.donaldjtrump.com/
https://www.farmersonly.com/
https://www.diapers.com/

The number of possible outcomes is the same as tossing a coin eight times (eight URLs in the sample), which is 256 combinations. If you look at the source code, it has <img src='mole.png' class='mole'> 256 times, but only one of them will be visible.

I'm not going to test all 256 possible results, but it works correctly; I tested a couple of combos. It can't fail to work if the visited-color pref is enabled, i.e. there must always be exactly one possible outcome.
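The arithmetic checks out: eight URLs give 2**8 = 256 visited/unvisited combinations, one per mole in the 16x16 grid. A sketch of the mapping (the specific bitmask-to-cell assignment is an assumption for illustration; the testcase's actual layout may differ):

```python
N_URLS = 8
GRID_SIDE = 16

# 256 moles in the grid, exactly one per visited/unvisited combination.
assert 2 ** N_URLS == GRID_SIDE * GRID_SIDE

def mole_cell(visited_mask):
    """Map an 8-bit visited mask to a (row, col) cell in the 16x16 grid.
    Illustrative only; the testcase may order its cells differently."""
    return divmod(visited_mask, GRID_SIDE)
```

Since the mapping is a bijection, the position of the one visible mole fully determines the visited state of all eight URLs.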

It seems to me that the mix-blend-mode part isn't all that interesting; my understanding is that it's the part this testcase uses to escalate the simple attack in comment 8 into something that can extract more than one bit per interaction. I'd be somewhat surprised if it were the only way to do that; I suspect it's possible with interesting combinations of properties that have been part of the Web for much longer and are much more widely used.
