Screen reader detection heuristics: Privacy review

Status: RESOLVED FIXED (defect, major)
Reporter: MarcoZ; Assignee: MarcoZ
Blocks: 2 bugs
Version: Trunk
Background: There is a growing movement in the web developer community that requests that browsers expose to them if they currently deliver content to a screen reader. These web developers want to tailor specific content or functionality to that information.

Additionally, a recent screen reader user survey shows that almost 80% of respondents, when asked whether they were comfortable exposing to web sites that they're using a screen reader, answered that they were very or somewhat comfortable with that. But the question itself did not address the privacy concern at all, so many users gave an answer that was not really informed.

There have been several reactions to that survey result:

1. my own blog post: http://www.marcozehe.de/2014/02/27/why-screen-reader-detection-on-the-web-is-a-bad-thing/
This also contains a link to the survey results.

2. Another worried post by Léonie Watson, a member of the Paciello Group: http://tink.co.uk/2014/02/thoughts-on-screen-reader-detection/

3. And a pro screen reader detection post by Dylan Barrell of Deque Systems: http://unobfuscated.blogspot.ca/2014/02/assistive-technology-detection-it-can.html

In short, the main concerns are that web developers will offer feature-reduced or text-only versions of web sites to screen reader users, limiting the experience or even exposing out-of-date content because the text-only version is not maintained and updated like the rest of the page.
A further concern is that job employment sites could screen out visually impaired users and offer them only certain types of jobs, limiting equal access to the job market.

There is currently work being done in the IndieUI Working Group at the W3C to formalize such screen reader detection. It is primarily driven by Google and Apple, who want to make their Docs and iWork web offerings more accessible but cannot do everything they want with current APIs, and who see detection as a way to provide a better user experience: https://dvcs.w3.org/hg/IndieUI/raw-file/default/src/indie-ui-context.html#userScreenReaderSettings

However, some involved parties are already circulating heuristics that detect screen reader users. One such heuristic approach has been made available to us by the aforementioned Dylan Barrell:
http://dylanb.github.io/screenreaderdetection/

The heuristics used here immediately identify me as a screen reader user if I activate that button with NVDA in standard browse mode. It can be tricked into thinking I'm a regular keyboard user if I turn off browse mode and use standard keyboard commands. But most users won't do that.
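The core signal such a heuristic relies on can be sketched as follows. This is a simplified illustration, not the code from the demo above; all function and state names are invented. The idea: a click that arrives with no preceding key or mouse-movement events suggests the activation was synthesized on the user's behalf.

```javascript
// Simplified illustration of a click-source heuristic (invented names).
// In a page, onKeydown/onMouseMove would be wired to "keydown" and
// "mousemove" listeners, and classifyClick called from a "click" handler.
function createActivationClassifier() {
  let sawKeyEvent = false;
  let sawMouseMove = false;
  return {
    onKeydown() { sawKeyEvent = true; },
    onMouseMove() { sawMouseMove = true; },
    classifyClick() {
      if (sawKeyEvent) return "keyboard";
      if (sawMouseMove) return "mouse";
      // A click with no preceding input events suggests a simulated
      // activation, e.g. one dispatched on behalf of a screen reader.
      return "possible-screen-reader";
    },
  };
}
```

This also explains why turning off browse mode defeats the heuristic: real keydown events then reach the page, and the classifier sees an ordinary keyboard user.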

Question: Are these heuristics already enough for a security/privacy concern, and should we do something about obfuscating the event sequence somehow?

Making this a security sensitive bug on purpose for now.
I'm curious to hear from our privacy folks about the IndieUI work soon as it is very topical. I'm curious how we currently balance potentially improved experience with privacy (and fingerprinting).
I don't think we should keep this bug private; there is nothing actively detrimental to users here that I can discern, and limiting the discussion is not helpful to our mission. I'm leaving it hidden for the moment to hear dissenting viewpoints, but I'm removing the csec-disclosure flag, as there is no active threat that Firefox is allowing a disclosure issue here. If we get an active patch proposal, it would be useful to file both security review and privacy review bugs at that time for appropriate action.
Here's my attempt at summarizing what we are asking:

1. There are web content hacks for getting at a client's screen reader (and/or similar) info.
2. We could make it harder or impossible for such hacks to work.
3. We could legitimize/formalize getting at such info with the idea it would do more good than harm.

We're looking to our privacy experts for help with 2 vs 3.

Note a link to example working code to get at disclosure is provided in comment 0.
Group: core-security
Flags: needinfo?(sstamm)
Flags: needinfo?(afowler)

Blocks: 950328

Comment 4

One comment and then a question:

Statement: The existence of a screen reader could be masked to appear as a keyboard user if the keydown etc. events were sent instead of the mousedown etc. events when interacting with a button.

Question: I have always assumed that it was the screen reader that was triggering these events; is that not the case?
(In reply to Dylan Barrell from comment #4)
> Question: I have always assumed that it was the screen reader that was
> triggering these events; is that not the case?

No. In the normal case, the screen reader sends us a doAction command on the accessible, and we act accordingly. The Actions array lists the actions supported by that particular type of accessible. There can be more than one, and in the case of a button, the default action is "press". Because not all buttons react to keyboard commands, we simulate a mouse click.
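The flow just described can be sketched roughly as follows. This is an illustration with invented names, not the actual accessibility-engine code: the screen reader invokes the accessible's default action, and for a button ("press") the browser dispatches simulated mouse events rather than key events.

```javascript
// Illustrative sketch (invented names) of handling a doAction request.
function simulateMouseClick(accessible) {
  // In real code this would dispatch the events to the DOM node.
  return ["mousedown", "mouseup", "click"].map(
    (type) => ({ type, target: accessible.node })
  );
}

function doAction(accessible, index = 0) {
  const action = accessible.actions[index]; // e.g. ["press"] for a button
  if (action === "press") {
    return simulateMouseClick(accessible);
  }
  throw new Error("unsupported action: " + action);
}
```

The absence of any key events in that simulated sequence is precisely what the detection heuristics key on.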
(In reply to David Bolter [:davidb] from comment #3)
> Here's my attempt at summarizing what we are asking:
> 
> 1. There are web content hacks for getting at a client's screen reader
> (and/or similar) info.
> 2. We could make it harder or impossible for such hacks to work.
> 3. We could legitimize/formalize getting at such info with the idea it would
> do more good than harm.
> 
> We're looking to our privacy experts for help with 2 vs 3.

I'm all for making the "hacks" harder and the legitimate "asks" easier, especially in a way that users control. Imagine a site permission that, when set to true, allows the site to query screen reader info via a given API. A user who doesn't want this can set a global pref saying "don't expose screen reader info" and turn off the API.

This works well for "privacy" (user control), but only if sites begin to rely on the API.  If they can get at it anyway, why not design a proper API with better controls?
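The suggestion amounts to a double gate: a global pref and a per-site permission must both allow exposure before a site gets anything. A minimal sketch of that control flow (all names invented; no such API exists or is being proposed here in detail):

```javascript
// Hypothetical gate combining a global pref with a per-site permission.
// Only when both allow it does the site receive real screen reader info.
function queryScreenReaderInfo(globalPrefAllows, sitePermission, info) {
  if (!globalPrefAllows) return null;            // global "don't expose" pref wins
  if (sitePermission !== "granted") return null; // per-site opt-in required
  return info;
}
```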

(dbolter: hope this is what you were looking for, not sure what else to say)
Flags: needinfo?(sstamm)
(In reply to Sid Stamm [:geekboy or :sstamm] from comment #6)
> (In reply to David Bolter [:davidb] from comment #3)
 
> (dbolter: hope this is what you were looking for, not sure what else to say)

Perfect. Thanks. It is a great point.
Going to kick this over (assign) to Marco as he is on a mission (csunconference.org) to figure this out this week.
Assignee: nobody → marco.zehe
Flags: needinfo?(afowler)
Eitan weighs in: http://blog.monotonous.org/2014/03/17/am-i-vision-impaired-who-wants-to-know/

I think this gels with Sid's comment 6 ... and I've heard similar arguments in social web space.
After discussions at CSUN, more discussions in private channels, and some long thinking, I think it's best to
a) obfuscate the way we activate items so that heuristics cannot as easily detect that we're acting on behalf of a screen reader. For that, I created Bug 988896.
b) Work with the IndieUI working group to have a clearly defined way of feature detection of things like color, contrast, assistive technology detection etc., and thus work with the standards proactively to prevent things from just spreading into the wild uncontrollably.
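The obfuscation in (a) boils down to making a simulated activation produce the same event sequence a real keyboard activation would, so the two are indistinguishable to content. A rough sketch of the idea (illustrative only, not the patch in bug 988896):

```javascript
// Event names a page observes for each activation path (simplified).
function keyboardActivation() {
  return ["keydown", "keyup", "click"];
}

// Before: a screen reader "press" simulated only a mouse click.
function simulatedActivationOld() {
  return ["mousedown", "mouseup", "click"];
}

// After: pad the simulated activation with synthetic key events so its
// sequence matches a keyboard activation exactly.
function simulatedActivationObfuscated() {
  return keyboardActivation();
}
```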
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → FIXED