Closed
Bug 1473580
Opened 7 years ago
Closed 7 years ago
[Shield] Opt-in/Opt-out Study: Simplified Onboarding Overlay + Extensions
Categories
(Shield :: Shield Study, enhancement, P1)
Tracking
(firefox62+ disabled)
RESOLVED
FIXED
People
(Reporter: ursula, Assigned: ursula)
References
(Blocks 1 open bug)
Details
Attachments
(1 file, 2 obsolete files)
Details Section for Bugs and Rel-Drivers Email
Basic description of experiment:
The existing OnBoarding Tour has 6 tabs, viewable one at a time. The simplified OnBoarding Tour shows only 3 cards, displayed together. This is a new mechanism for displaying the OnBoarding Tour: previously, users had to launch it themselves by clicking a small icon in the upper left; now, after a new install, the simplified OnBoarding Tour overlay is shown automatically on the first NewTab launch. Changes include a more direct call to action (CTA). Currently the Add-ons button goes to the set-up menu; the experiment would change it to a direct CTA to “Get [Extension name]”.
What is the preference we will be changing?
browser.newtabpage.activity-stream.asrouterOnboardingCohort
What independent variable(s) (IVs) are you manipulating to affect measurements of your DV(s)? What different levels (values) can each IV assume?
Control 1:
browser.newtabpage.activity-stream.asrouterOnboardingCohort = 0
Existing New User OnBoarding tour - will not get Simplified Overlay on NewTab
Variant 1:
browser.newtabpage.activity-stream.asrouterOnboardingCohort = 1
3 cards, similar to the existing New User OnBoarding Tour (Private Browsing, Screenshots, Add-ons)
Variant 2:
browser.newtabpage.activity-stream.asrouterOnboardingCohort = 2
3 cards, but targeting a specific extension (Private Browsing, Screenshots, Ghostery Extension)
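For illustration only, a minimal sketch of how the three pref values map to the experiences described above (Python as pseudocode; the real handling lives in Activity Stream's JavaScript, and this mapping simply restates the branch list):

COHORT_PREF = "browser.newtabpage.activity-stream.asrouterOnboardingCohort"

# Pref value -> onboarding experience shown on the first NewTab load,
# restated from the branch descriptions above.
COHORTS = {
    0: "Control: existing New User OnBoarding tour, no simplified overlay",
    1: "Variant 1: 3 cards - Private Browsing, Screenshots, Add-ons",
    2: "Variant 2: 3 cards - Private Browsing, Screenshots, Ghostery extension",
}

def describe_cohort(pref_value):
    """Return the experience a profile with this pref value should see."""
    # Unknown values fall back to the control experience.
    return COHORTS.get(pref_value, COHORTS[0])

for value in (0, 1, 2):
    print(COHORT_PREF + "=" + str(value) + ": " + describe_cohort(value))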
What percentage of users do you want in each branch?
Equal size branches
NOTE: pdol wanted to make sure the Test 3 branch (Ghostery) is kept to the minimum number of users needed to learn from. Fine to experiment, but there is a potential impact to search ads if the branch is too large.
What Channels and locales do you intend to ship to?
Channel: Beta/Release 62 Desktop
Locales: en-US
What is your intended go live date and how long will the study run?
As soon as possible. Enrollment runs for 6 weeks; after that, enrollment closes, but late joiners are not unenrolled until 6 weeks beyond that point, to collect post-install data.
Are there specific criteria for participants?
Only New Users (First Run)
Users must have snippets turned on:
browser.newtabpage.activity-stream.feeds.snippets = true
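As a rough sketch of these criteria (hypothetical Python; actual targeting is done by the Shield recipe, and the first-run flag here is only a stand-in for however first run is detected):

SNIPPETS_PREF = "browser.newtabpage.activity-stream.feeds.snippets"

def is_eligible(is_first_run, prefs):
    """A profile qualifies only if it is a new install (first run)
    and has snippets enabled, per the criteria above."""
    return bool(is_first_run) and prefs.get(SNIPPETS_PREF, False) is True

print(is_eligible(True, {SNIPPETS_PREF: True}))   # True: new profile, snippets on
print(is_eligible(False, {SNIPPETS_PREF: True}))  # False: not a first run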
What is the main effect you are looking for and what data will you use to make these decisions?
Page views, button clicks (CTRs), engagement, and retention over a 0-6 week period.
Who is the owner of the data analysis for this study?
Ben Miroglio
Will this experiment require uplift?
2 bugs are currently being uplifted:
https://bugzilla.mozilla.org/show_bug.cgi?id=1472297
https://bugzilla.mozilla.org/show_bug.cgi?id=1470170
QA Status of your code:
Green
Do you plan on surveying users at the end of the study?
Jennifer would like to run a survey; the survey itself and the details of when it is triggered are TBD.
Link to any relevant google docs / Drive files that describe the project. Links to prior art if it exists:
Prototype: https://mozilla.invisionapp.com/share/KZJPRWBSYTB#/screens/290397228_Explainer
Extensions and Themes in Onboarding User Research: https://docs.google.com/presentation/d/1LZNfO74vxUcGhRJwW1u-rsByNOMeFSKoA1Aroo1uDZs/edit#slide=id.g355de5112f_0_0
Research Study on recommendations content and trust: https://trello.com/c/ObmQhPBJ/36-2018-extensions-recommendation-comprehension-trust-user-research
Assignee
Updated•7 years ago
Comment 1•7 years ago
I have reviewed this code in https://github.com/mozilla/activity-stream and have given it an R+
Assignee
Comment 2•7 years ago
Hey Ciprian, would you be able to do the QA review for this shield experiment?
Flags: needinfo?(ciprian.muresan)
Comment 3•7 years ago
Hey Ursula,
Sure I can; however, you'd need to file a PI request, including all the required Shield details, in order to start the whole process.
Flags: needinfo?(ciprian.muresan)
Comment 4•7 years ago
UX Review: R+
Assignee
Comment 5•7 years ago
Hey Ilana, could you please provide the data science review for this experiment? Thanks!
Flags: needinfo?(isegall)
Assignee
Comment 6•7 years ago
Hey Francois, could you give the legal/data review on this experiment? We're using the same mechanism from: https://bugzilla.mozilla.org/show_bug.cgi?id=1459318 which has already been reviewed. In this experiment, we're collecting the same types of data, including one message that includes a specific URL to an add-on (Ghostery). Thanks!
Flags: needinfo?(francois)
Comment 7•7 years ago
No problem. Could you please fill out the data review request form (https://github.com/mozilla/data-review/blob/master/request.md) and attach it as a .txt file to this bug?
Once that's done, you can r? me on it.
Flags: needinfo?(francois)
Assignee
Comment 8•7 years ago
Attachment #8991331 -
Flags: review?(francois)
Assignee
Comment 9•7 years ago
Sorry, the last attachment had some funky formatting going on. This one should be more readable. Thanks!
Attachment #8991331 -
Attachment is obsolete: true
Attachment #8991331 -
Flags: review?(francois)
Attachment #8991333 -
Flags: review?(francois)
Comment 10•7 years ago
Comment on attachment 8991333 [details]
Request for data collection review form
> 1) What questions will you answer with this data?
> Trying to measure general engagement and interactions with Onboarding content.
Are there any specific questions you're trying to answer? e.g. What buttons are users clicking on? What UI components are users clicking on? Which tiles are users clicking on?
> 3) What alternative methods did you consider to answer these questions? Why were they not sufficient?
That answer was left blank. If the answer is "none", then please say so.
> 4) Can current instrumentation answer these questions?
> The instrumentation for Activity Stream Router is already in place, and this experiment would be hooking into that.
So are you adding anything new / collecting new data, or are you just looking at existing Event pings as part of a Shield study?
> 5) List all proposed measurements and indicate the category of data collection for each measurement, using the Firefox [data collection categories](https://wiki.mozilla.org/Firefox/Data_Collection) on the Mozilla wiki.
> * User interactions (Category 3, bugzilla #: 1459318), reports an event ping when a user interacts with a UI component, such as click a button or block a message. The same as in Bug1459318
Has anything changed from https://bugzilla.mozilla.org/show_bug.cgi?id=1459318#c2 or is this the same data?
Is it still true (https://bugzilla.mozilla.org/show_bug.cgi?id=1459318#c9) that there is no personalization in this experiment?
Attachment #8991333 -
Flags: review?(francois) → review-
Assignee
Comment 11•7 years ago
Attachment #8991333 -
Attachment is obsolete: true
Attachment #8991411 -
Flags: review?(francois)
Comment 12•7 years ago
We have finished testing the Simplified Onboarding Overlay + Extensions experiment. All issues found have been addressed.
QA’s recommendation: GREEN - SHIP IT
Reasoning:
- Two issues of low impact for the experiment were found during testing, and they have already been addressed by the Dev team.
Testing Summary:
- Full Functional test suite: https://testrail.stage.mozaws.net/index.php?/plans/view/10847;
- Verified that the Telemetry pings are correctly sent;
Tested Platforms:
- Windows 10 x64
- Mac 10.13.5
- Arch Linux x64
Tested Firefox versions:
- Firefox Beta 62.0b7
Comment 13•7 years ago
Comment on attachment 8991411 [details]
Request for data collection review form
Thanks for the clarifications Ursula.
Since there is no new data being collected, this can be approved on the basis of the data review that was done in bug 1459318.
Attachment #8991411 -
Flags: review?(francois) → review+
Comment 15•7 years ago
Tagging Mika or Michael for legal sign-off on the content of the Ghostery-specific add-ons card in a SHIELD experiment.
Scott cleared it with the caveat that it only go to a small group of users in an experiment. https://docs.google.com/spreadsheets/d/132NbUoIFfga9N_LCHrPl1ENLWMHDaBWZxEurh3yiLp4/edit#gid=1953340609
We are testing directly offering an add-on, and would engage bizdev before doing anything in production with Ghostery.
Selection criteria (Scott):
Given the weight of selecting just one extension to best exemplify the power and customization of the extensions ecosystem at large, we chose Ghostery. Reasoning:
- It’s an established, broadly popular extension (1M+ users, 4.5 star rating)
- Ad blocking is a universally popular utility
- Anti-tracking and privacy features provide strong alignment with Firefox branding pillars
- User experience. Ghostery recently redesigned their UX to make the extension much more intuitive to understand and set up.
Meridel cleared the original copy: https://docs.google.com/document/d/1kHx36Gi8_Ul_AC5iFytjkpIUH2N96iAWK9_7mPTexR0/edit#heading=h.pz5p5n2ysqy0
Updated•7 years ago
Flags: needinfo?(udevi)
Flags: needinfo?(mfeldman)
Comment 16•7 years ago
Hi from legal, I'm approving this.
In comment 13 Francois linked to another bug in which type 3 impression data is mentioned. In this particular experiment, it is the type 2 interaction data that is being measured. We are also controlling the content on the cards which people would be directed to, so there is no privacy risk that leads this to be type 3.
Thanks,
Mika
Flags: needinfo?(udevi)
Comment 17•7 years ago
Tracking this for 62 beta/release.
status-firefox62: --- → affected
tracking-firefox62: --- → +
Updated•7 years ago
Flags: needinfo?(mfeldman)
Comment 18•7 years ago
Stephanie, I believe Ursula said you'd approved this already re: security. If so, can you r+ here in the ticket? Thanks!
Flags: needinfo?(stephouillon)
Comment 20•7 years ago
I had a look at the changes and don't have any concern.
r+ from security.
Flags: needinfo?(stephouillon)
Assignee
Updated•7 years ago
Flags: needinfo?(jgaunt)
Assignee
Comment 22•7 years ago
Noting down what was discussed on Slack with Matt Grimes and Marnie:
We're going to run this study for the remainder of Beta 62 (even though that is not 6 weeks) and pause it when Beta 63 rolls over. In addition, we're running it for 6 weeks in Release 62. This means there will be 2 discrete studies: one for Beta users and one for Release users.
Comment 23•7 years ago
OK, sounds like everything is in place. And since we want to ship this in 62 release, it makes sense to me to be testing it out in beta. Please consider this sign-off from relman.
Since we plan to pause before the 63 merge, how about creating a new bug or some sort of date-related reminder to pause the study, and assigning it to someone so it won't get lost in the shuffle - maybe on the Friday before the merge? That would be Fri. Aug 31. What do you think?
Flags: needinfo?(lhenry) → needinfo?(usarracini)
Comment 24•7 years ago
Actually, since you want to pause before the 63 merge (Monday Aug 27), we'd need to do that a week earlier - on Friday Aug. 24.
Comment 25•7 years ago
Great suggestion. I've added that date to the Shield calendar as a reminder, but please follow up with us as well just to confirm as we near that date.
Comment 27•7 years ago
This study is now live in Beta 62 as requested.
Comment 28•7 years ago
Matt: can you confirm whether we will be able to measure new user cohort retention with the variations in this experiment vs the existing newtab 5-panel tour? There are questions about client_id and impression_id for measuring retention, but I am thinking that within Shield's infrastructure, we should be able to get retention for each of the treatment/variation arms.
Flags: needinfo?(mgrimes)
Comment 29•7 years ago
@cmore - absolutely. Ben is the Data Scientist assigned to this project. He can provide additional details on the analysis plan.
Flags: needinfo?(mgrimes) → needinfo?(bmiroglio)
Comment 30•7 years ago
(In reply to Matt Grimes [:Matt_G] from comment #29)
> @cmore - absolutely. Ben is the Data Scientist assigned to this project. He
> can provide additional details on the analysis plan.
Ok, just wanted to double check, because there was a concern over the ability to use client_id vs impression_id and being able to measure retention over time for each cohort. I believe that since Shield can cohort users and measure the retention of that cohort over time, then as long as Shield has an arm for each variation, we can associate the retention of an onboarding variation with the Shield retention without needing any more data.
Comment 31•7 years ago
(In reply to Chris More [:cmore] from comment #30)
> Ok, just wanted to double check, because there was a concern over the
> ability to use client_id vs impression_id and being able to measure
> retention over time for each cohort. I believe that since Shield can cohort
> users and measure the retention of that cohort over time, then as long as
> Shield has an arm for each variation, we can associate the retention of an
> onboarding variation with the Shield retention without needing any more data.
You are right, there is an `experiments` table that has branch, client_id pairs for all experiments. I was not aware of this table until today, and it indeed solves the issue for the Simplified Onboarding experiment re: client_id.
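To make that concrete, here is a sketch of the kind of per-branch retention query the `experiments` table enables. Everything beyond the branch/client_id pairs mentioned above (the activity table, its columns, the experiment_id column name, the retention definition) is an assumption for illustration, not the actual schema:

EXPERIMENT_SLUG = "prefflip-simplified-onboarding-overlay-v1-0-1473580"

# Hypothetical SQL, built as a Python string purely for illustration.
retention_by_branch = """
SELECT
    e.branch,
    -- illustrative retention definition: clients still submitting pings
    -- in week 6 after enrollment
    1.0 * COUNT(DISTINCT CASE WHEN a.weeks_since_enrollment = 6
                              THEN a.client_id END)
        / COUNT(DISTINCT e.client_id) AS week_6_retention
FROM experiments AS e
LEFT JOIN client_activity AS a   -- hypothetical per-client activity table
       ON a.client_id = e.client_id
WHERE e.experiment_id = '{slug}'
GROUP BY e.branch
""".format(slug=EXPERIMENT_SLUG)

print(retention_by_branch)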
Flags: needinfo?(bmiroglio)
Comment 32•7 years ago
*edit
We'd still like to look at retention/add-on install metrics for users who engaged with a CTA, which is not possible for this specific experiment, but will be possible if we add client_id in the next iteration.
Updated•7 years ago
Severity: normal → enhancement
Priority: -- → P1
Comment 33•7 years ago
Ursula, did you get the next bug filed for the second study for release 62?
Flags: needinfo?(usarracini)
Assignee
Comment 34•7 years ago
Doesn't this bug cover the experiment for Beta 62 and Release 62? That's what the PHD says, and it's already been confirmed as the plan of action with Matt Grimes.
Flags: needinfo?(usarracini)
Comment 35•7 years ago
OK, perfect. I misunderstood and thought there would be another, separate bug filed for the next version of the study in release 62.
Comment 36•7 years ago
:marnie, we need to rename the *slug* of this experiment from:
prefflip-simplified-onboarding-overlay-v1-0-1473580
to:
prefflip-activity-stream-simplified-onboarding-overlay-v1-0-1473580
This is required when it gets restarted so that the telemetry gets routed correctly.
Flags: needinfo?(mpasciutowood)
Comment 37•7 years ago
We're live on this. Diffs from the Beta test:
1) Now on release
2) User pref (e.g. sticky user tagging)
3) Slug name updated to the above
4) Auto unenrolls in 63 (instead of manual)
Flags: needinfo?(mpasciutowood)
Comment 38•7 years ago
This study has now ended in 62 release.
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Comment 39•7 years ago
Ben created a report on the engagement and retention metrics for this experiment: https://dbc-caf9527b-e073.cloud.databricks.com/#notebook/35637/dashboard/38144/present
Thanks all!