Closed Bug 1423784 Opened 7 years ago Closed 6 years ago

[Shield] Opt-out Study: JavaScript Errors

Categories

(Shield :: Shield Study, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(firefox59+ fixed)

RESOLVED FIXED

People

(Reporter: osmose, Assigned: osmose)

References

Details

Attachments

(2 files)

# Basic description of experiment:
This is an information-gathering experiment and has no user-visible features. Before building a prototype of collecting browser chrome JS errors, we’d like to estimate the amount and distribution of these errors that are currently occurring on the Nightly channel. 

We’re measuring two things:

1. The total number of JS errors being logged on the Nightly channel. This will directly inform load testing of the service we're planning to use for prototyping.
2. The distribution of unique errors across Nightly users. This will feed strategic planning for 2018 by determining whether any errors have enough reach that collecting and fixing them would have a meaningful impact.

This study will collect any chrome errors that are logged to the Browser console, create a hash based on the error to hide identifying details, and submit the hash via Telemetry. The hashes will be aggregated and counted during analysis to provide the info we want.

Specifically, the hashes are SHA384 hashes of the error type, message, and traceback joined together by colons.

# What are the branches of the study?
There is a single branch and no control, since this study is purely observational.

# What percentage of users do you want in each branch?
100%

# What Channels and locales do you intend to ship to?
Nightly, all locales

# What is your intended go live date and how long will the study run?
Thursday, December 7th, running for 2 weeks

The add-on code has a built-in expiration on January 1st, 2018.

# Are there specific criteria for participants?
None, unless there are legal/privacy concerns and we need to exclude users.

# What is the main effect you are looking for and what data will you use to make these decisions?
- Total count of individual errors per day and per hour
- Per-day counts of error hashes, to find the distribution of specific errors
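The per-day hash counting above could be sketched as follows. This is not the actual analysis code; the ping shape (`{date, hash}`) and function name are assumptions for illustration:

```javascript
// Hypothetical analysis sketch: given submitted pings of the assumed
// form {date, hash}, tally how many times each hash was seen per day.
function countHashesPerDay(pings) {
  const counts = new Map();
  for (const { date, hash } of pings) {
    const key = `${date}|${hash}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts;
}
```

Sorting these counts per day then yields the distribution of specific errors the study is after.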

# Who is the owner of the data analysis for this study?
Michael Kelly [:Osmose]

# Who will have access to the data?
Anyone with telemetry access

# Do you plan on surveying users at the end of the study?
No

# QA Status of your add-on:
Feature-complete: https://github.com/mozilla/shield-study-js-errors/

I'll be doing some last-minute manual QA tomorrow before attaching an XPI to this bug. mythmon has been reviewing the add-on, and I'll be providing developer QA since this is only hitting Nightly.

I've set up a CEP graph as a sanity check that pings are being sent: https://pipeline-cep.prod.mozaws.net/dashboard_output/graphs/analysis.mkelly_mozilla_com.shield_study_js_errors_pings_received.JS_error_study_pings_per_minute.html

# Link to any relevant google docs / Drive files that describe the project. Links to prior art if it exists:
PRD: https://docs.google.com/a/mozilla.com/document/d/1FAoRLP19hvVFniQniOC9N5jxSUpITCrUs1SiXdhrOEM/edit?usp=sharing
PHD: https://docs.google.com/document/d/1iHRNYY9kB8R9ecT4YSMhcDRaJsDzLSKwrJnlsoi_WNA/edit
Clarifying: This is meant to be an opt-out study.
Summary: [Shield] Opt-in/Opt-out Study: JavaScript Errors → [Shield] Opt-out Study: JavaScript Errors
Here's the built XPI. I tested this on Nightly 59.0a1 (2017-12-07) and it functions as expected.
I've added some documentation to the Github README describing the data being collected by the study: https://github.com/mozilla/shield-study-js-errors#data-collection
Whoops, forgot to NEEDINFO people
Flags: needinfo?(sguha)
Flags: needinfo?(mgrimes)
Ilana reviewed the PHD. I'm giving the thumbs up.
Flags: needinfo?(mgrimes)
looks good to me.
Flags: needinfo?(sguha)
That said, how will you go back from the SHA to the original error? How will you know what error the topmost SHA corresponds to?
(In reply to "Saptarshi Guha[:joy]" from comment #8)
> That said, how will you go back from the SHA to the original error? How
> will you know what error the topmost SHA corresponds to?

We won't. This experiment is only about determining the distribution and amount of errors that are occurring. The prototype that we're planning to build in January based on this study will be what actually collects errors.
Matt, just to confirm, you deployed this?
Flags: needinfo?(mgrimes)
Correct. I spoke to osmose about this last week. Even though we are in a pipeline freeze:

1. This is very low risk, has no user-facing changes, and is Nightly-only
2. The add-on self destructs on Jan 1st so coverage in that timeframe is not a concern
3. Osmose is doing his own analysis, so resources from my team are not a blocker
Flags: needinfo?(mgrimes)
We've disabled this study and are not planning to re-run it. The data we got during the short run was enough to inform the prototype. Thanks all!
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Blocks: 1426479
