[Experiment] Add-On: Preparatory Survey for NAAR (Needs Aware Add-on Recommendations) Fx 68.0 Release
Categories
(Shield :: Shield Study, task, P3)
Tracking
(firefox68+ fixed)
People
(Reporter: experimenter, Assigned: fwollsen)
References
()
Details
(Whiteboard: Ready for Sign-Off [no-nag])
User Story
Experiment Type: Add-on experiment
What are the branches of the experiment:
- Treatment Survey 100%:
All targeted users are prompted to participate in the survey using a Heartbeat-styled notification.
What version and channel do you intend to ship to?
0.5% of Release Firefox 68.0
Are there specific criteria for participants?
We are looking for 1000 responses* from respondents within the following filtering criteria:
NOTE: REMOVED THIS NORMANDY FILTER REQUIREMENT - the add-on itself will check for at least 3 self-installed Extensions installed before showing the survey.
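The in-add-on eligibility check described in the note can be sketched as follows. This is a minimal Python illustration of the logic only: the real check runs inside the study add-on in JavaScript against the browser's add-on manager, and the data shape and field names here are hypothetical.

```python
def is_eligible(installed_addons, min_self_installed=3):
    """Return True if the profile has at least `min_self_installed`
    extensions that the user installed themselves (i.e. not built-in
    or system add-ons). Field names are illustrative, not the real
    AddonManager schema."""
    self_installed = [a for a in installed_addons
                      if a["type"] == "extension" and a["self_installed"]]
    return len(self_installed) >= min_self_installed

# Hypothetical profile: three self-installed extensions plus one system add-on
addons = [
    {"id": "addon-a@example.com", "type": "extension", "self_installed": True},
    {"id": "system-addon@mozilla.org", "type": "extension", "self_installed": False},
    {"id": "addon-b@example.com", "type": "extension", "self_installed": True},
    {"id": "addon-c@example.com", "type": "extension", "self_installed": True},
]
print(is_eligible(addons))  # → True: three self-installed extensions
```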
1. Profile age > 30 days and <= 550 days
Targeting clients who have had about a month to install important add-ons and that joined us in early 2018 (excluding profiles that potentially have legacy add-ons from before the Quantum migration to WebExtensions)
This filtering criteria corresponds to 2% of the release population.
Targeting a 25% sample of the above, i.e. a 0.5% sample of the release population, should net 1000 responses in roughly 4-30 days, given a 2-5% click-through rate to survey opt-in and a 25-75% survey completion rate. See https://dbc-caf9527b-e073.cloud.databricks.com/#notebook/137650 for an enrollment simulation based on these filters and assumptions.
* Chosen on the assumption that 1000 responses is a sample large enough to collect as a baseline while still small enough to analyze.
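As a rough sanity check on those numbers, the implied range of users who must see the prompt follows from the assumed funnel rates. The rates below are the ones quoted above, not measured values:

```python
target_responses = 1000

# Assumed funnel rates from the experiment description
ctr_low, ctr_high = 0.02, 0.05                 # click-through to survey opt-in
completion_low, completion_high = 0.25, 0.75   # survey completion rate

# Responses per prompted user, worst and best case
per_user_low = ctr_low * completion_low        # 0.005
per_user_high = ctr_high * completion_high     # 0.0375

# Number of prompted users needed to reach 1000 responses
prompts_needed_min = target_responses / per_user_high
prompts_needed_max = target_responses / per_user_low
print(round(prompts_needed_min), round(prompts_needed_max))  # → 26667 200000
```

So somewhere between roughly 27k and 200k prompted users are needed, which is what drives the 4-30 day estimate for the 0.5% sample.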
Countries:
Locales: English (British) (en-GB) English (Canadian) (en-CA) English (US) (en-US)
What is your intended go live date and how long will the experiment run?
Jul 22, 2019 - Aug 19, 2019 (28 days)
What is the main effect you are looking for and what data will you use to
make these decisions?
(Do you plan on surveying users at the end of the study?)
Yes. This study is a survey that is required to be launched from a study add-on since it will need to query the list of the user's currently installed add-ons and send add-on guids to the survey URL at SurveyGizmo.
(What is the main effect you are looking for?)
We wish to see that the survey responses uncover a taxonomy of (A) important, (B) satisfied, and (C) self-reported user needs that map to specific add-ons or add-on categories.
(What data will you use to make these decisions?)
Survey data joined with standard telemetry data. All targeted users are prompted to participate in a survey using a Heartbeat-styled notification. The survey receives information about at most 10 randomly selected self-installed add-ons, of which the first 6 add-ons are listed in the survey.
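The sampling step above can be sketched as follows. This is a Python illustration of the selection logic only; the actual study add-on implements it in JavaScript, and the function and variable names here are hypothetical:

```python
import random

def select_addons_for_survey(self_installed_guids, send_limit=10, list_limit=6):
    """Randomly pick at most `send_limit` self-installed add-on GUIDs to
    send to the survey URL; the first `list_limit` of those are the ones
    the survey actually lists for the user."""
    k = min(send_limit, len(self_installed_guids))
    sampled = random.sample(self_installed_guids, k)
    return sampled, sampled[:list_limit]

# Hypothetical profile with 15 self-installed add-ons
guids = [f"addon-{i}@example.com" for i in range(15)]
sent, listed = select_addons_for_survey(guids)
print(len(sent), len(listed))  # → 10 6
```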
Heartbeat prompt:
Help others discover better Extensions! {Take me to the questionnaire} {X}
User facing description of the experiment and survey:
Thank you for taking this short survey. Your responses are important for making Firefox better. Please answer all questions to the best of your ability.
Below are a couple of extensions that you have installed.
Survey structure:
1. The first part of the survey lists the user's installed Extensions and asks what the user was trying to achieve by choosing each Extension. The user either fills in a free-text answer or ticks a box for “I don’t know/remember”, and then clicks continue.
2. The second part of the survey is generated from the answers given in part one, asking the user how important each self-reported motivation is, how satisfied they were before installing the Extension, and how satisfied they are now with it.
3. Finally, the user may supply optional comments before submitting the information and ending the study.
Survey URLs and screenshots can be found at https://github.com/mozilla/preparatory-survey-for-naar-study-addon/blob/master/docs/TESTPLAN.md#expected-user-experience--functionality
(How the study will be analyzed:)
Part 1 will be analyzed as follows:
- The answer to what the user was trying to achieve by choosing the Extension will be openly coded into a set of self-reported user needs. Depending on the resulting amount of open inputs, this coding effort may be performed after analysis of Part 2 has progressed enough to be able to provide a relevant filter against what user need responses are most relevant for refined exploration.
Part 2 will be analyzed as follows:
- The importance and satisfaction ratings for each coded user need and add-on combination will be translated into two opportunity scores [importance + max(importance - satisfaction, 0)]: one for satisfaction without the add-on installed and one for satisfaction with it installed.
- The difference between these opportunity scores will be used as an indicator of how well the add-on serves the user need. By using opportunity scores instead of the raw difference in satisfaction ratings, we get a metric that gives more weight to important user needs.*
- Correlations between the add-ons, add-on categories and how well they satisfy important user needs are explored
* A high opportunity score highlights the Extension's failure to fulfill an important need (the opportunity lies with Extension creators to create better Extensions that address the important need). For more information, see "Presentation - Introduction to Outcome-Driven Innovation and how it relates to Mozilla - 2018-02-12"
https://docs.google.com/presentation/d/1osmb8GcM7XqhZpT2v9-0x4nnKDeB2R2jHXpxit6ZtSI/edit?pli=1#slide=id.g2a4fad622c_0_41
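The opportunity-score computation described above can be illustrated with a small worked example. The ratings below are invented for illustration (assuming a 1-10 rating scale, which the source does not specify):

```python
def opportunity(importance, satisfaction):
    # Opportunity score per Outcome-Driven Innovation:
    # importance + max(importance - satisfaction, 0)
    return importance + max(importance - satisfaction, 0)

# Hypothetical ratings for one coded user need / add-on pair
importance = 9
satisfaction_without = 3   # before installing the add-on
satisfaction_with = 8      # with the add-on installed

opp_without = opportunity(importance, satisfaction_without)  # 9 + 6 = 15
opp_with = opportunity(importance, satisfaction_with)        # 9 + 1 = 10

# A large positive difference indicates the add-on serves the need well
print(opp_without - opp_with)  # → 5
```

Note that when satisfaction exceeds importance, the max(..., 0) term clamps to zero, so the score never rewards over-serving an unimportant need.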
Part 3 will be analyzed as follows:
- An open coding of the optional comments responses will be done and explored to uncover unforeseen feedback related to the survey experience.
(Which telemetry metrics will be considered)
Standard engagement metrics of the survey respondents will be compared with known current baseline metrics in order to see how representative they are of the general release population.
(How metrics will be analyzed to determine the outcome of the study)
The level of correlation between add-ons, add-on categories and how well add-ons manage to satisfy important user needs will be evaluated. If we can make statements like "Users that try to achieve X have been well served by add-ons [set of add-ons] or add-ons in category [category]", we will be one step closer to Needs-Aware Add-on Recommendations.
Who is the owner of the data analysis for this experiment?
ilana@mozilla.com
Will this experiment require uplift?
False
QA Status of your code:
https://github.com/mozilla/preparatory-survey-for-naar-study-addon/blob/master/docs/TESTPLAN.md
Link to more information about this experiment:
https://experimenter.services.mozilla.com/experiments/preparatory-survey-for-naar-needs-aware-add-on-recommendations/
Attachments
(3 files, 2 obsolete files)
Preparatory Survey for NAAR (Needs Aware Add-on Recommendations)
Mozilla Systems Research Group is exploring the role of a survey-based signal in the context of TAAR (Telemetry-Aware Add-on Recommender). With this preparatory survey, we ask users about their self-installed add-ons and attempt to uncover a taxonomy of (A) important, (B) satisfied, and (C) self-reported user needs that map to specific add-ons or add-on categories.
Experimenter is the source of truth for details and delivery. Changes to Bugzilla are not reflected in Experimenter and will not change delivery configuration.
More information: https://experimenter.services.mozilla.com/experiments/preparatory-survey-for-naar-needs-aware-add-on-recommendations/
Added v1.0.1 for signing
Comment 4•6 years ago
Comment 5•6 years ago
We have finished testing the Add-On: Preparatory Survey for NAAR experiment.
QA’s recommendation: GREEN - SHIP IT
Reasoning:
- All issues found during testing were fixed in the “1.0.1” version of the add-on and no new issues were encountered.
- Only one issue (#7) remains, where opt-out experiments are displayed as active on the about:studies page after they are removed, until the page is refreshed. Since it does not affect functionality or the end user in any way, we are not considering it a blocker.
This issue seems to affect only new opt-out experiments and will be addressed in the "Normandy Client" component in Bugzilla (bug 1564818); it was also cloned in the “shield-studies-addon-utils” repo (#296).
Testing Summary:
- Test Suite: Test Rail
Tested Platforms:
- Windows 10 x64
- macOS 10.14
- Ubuntu 16.04 x64
Tested Firefox versions:
- Unbranded Firefox Release 68.0 (en-US)
- Firefox Dev Edition 67.0b14 (en-CA)
- Firefox Dev Edition 67.0b14 (en-GB)
Regards,
Remus