In the telemetry that's being collected on firstrun, we see how many people visit each panel. However, the telemetry extra only includes the panel index, and because we have 3 different sets of firstrun panels of different lengths, it's hard to tell how many people visit a specific panel when it sits at different indices in different sets. For example, the telemetry extra from the "Sign in" panel in Experiment B is "onboarding.3", but that is exactly the same as the telemetry extra for the panel describing Sync in Experiment C, and there's no way to tell them apart on the dashboard. We need the extra to reflect the panel itself, rather than just the index.
Created attachment 8717746 [details]
MozReview Request: Bug 1247146 - Make firstrun panel telemetry extras more specific. r=mfinkle

Review commit: https://reviewboard.mozilla.org/r/34285/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/34285/
This patch is good because we really do need something to identify the panels, especially when we start mixing them up based on locale or other experiments. Yes, "onboarding.1" might be two different panels, depending on some other variable.

The bad part of this patch isn't really in this patch; it's from the original telemetry landing: OnPageChangeListener.onPageSelected gets called as people move back and forth through the list. I don't think that's the metric we want to collect. I think we want to collect "how deep into the onboarding did the user go?" If you look at the current UI telemetry, we see "onboarding.0" has fewer events than "onboarding.1", which doesn't make sense. Of course, it does make sense if you know that the counts go up whenever people go "back".

Do we want to change the code a bit to only fire the telemetry ping if the index is greater than the largest index "seen" yet? We'd need to add a maxIndex member to keep track of the deepest panel displayed and only fire the event if the new index is bigger than the last index. That would stop extra events from being triggered.
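The maxIndex gating described above could be sketched roughly like this. This is a minimal, hypothetical sketch, not the actual patch: the class name, the `shouldSend` helper, and the `main` driver are all illustrative, and the real logic would live alongside the existing OnPageChangeListener.onPageSelected handler:

```java
// Hypothetical sketch of index-gated telemetry: track the deepest panel the
// user has reached, and only fire an event when a new maximum index is seen.
public class PanelTelemetryGate {
    private int maxIndex = -1; // no panel shown yet

    // Would be called from OnPageChangeListener.onPageSelected(index).
    // Returns true only when the user reaches a deeper panel than before.
    public boolean shouldSend(int index) {
        if (index > maxIndex) {
            maxIndex = index;
            return true;
        }
        return false; // swiping back (or revisiting) fires nothing
    }

    public static void main(String[] args) {
        PanelTelemetryGate gate = new PanelTelemetryGate();
        int[] visits = {0, 1, 2, 1, 2, 3}; // forward, back to 1, forward to 3
        for (int i : visits) {
            System.out.println("panel " + i + " -> send=" + gate.shouldSend(i));
        }
    }
}
```

With this gate, each panel index fires at most once per firstrun session, so revisits no longer inflate the counts for earlier panels.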
NI'ing Chenxia for thoughts on comment 2
Let me needinfo Alex - I talked with him about this topic, about whether we care which pages users visit (including duplicates) vs. just which pages they've seen. I'd say it probably doesn't matter which pages people "go back" to, so we could drop that entirely and do what you're suggesting. Let me throw together a patch.

Alex, what do you think? Is there value in seeing how people navigate (backwards) between panels in firstrun, or should we just send telemetry for unique pages that users see? Sending only unique visits makes things clearer on our Fennec UI telemetry dashboard because we aggregate the data, and I can't think of a reason we'd want to collect every swipe action.

mfinkle, regarding the onboarding.0 vs onboarding.1 problem: in the new version of onboarding, I added an onboarding.0 event when onboarding is shown, because Alex mentioned that we didn't have a baseline (which I incorrectly assumed in v1 we could get from installs), so the numbers now cascade correctly.
Comment on attachment 8717746 [details]
MozReview Request: Bug 1247146 - Make firstrun panel telemetry extras more specific. r=mfinkle

Review request updated; see interdiff: https://reviewboard.mozilla.org/r/34285/diff/1-2/
Created attachment 8719073 [details]
MozReview Request: Bug 1247146 - Send firstrun panel telemetry only for unique panel visits. r=mfinkle

Review commit: https://reviewboard.mozilla.org/r/34871/diff/#index_header
See other reviews: https://reviewboard.mozilla.org/r/34871/
Ultimately, it comes down to distinguishing "views" from "unique views". Although it was left out of the initial scope after discussing with Chenxia, Anthony wanted it in order to see if slides were seen more than once, which could indicate bad UI/UX, and it is standard practice to know both. So we put it in the scope before we all signed off on the test plan.

To put it in SQL terms, one is more like "SELECT COUNT(*) FROM events WHERE slide='smart_search';" and the other is more like "SELECT COUNT(DISTINCT userId) FROM events WHERE slide='smart_search';".

Do I think it's essential for this test? No. Can it be helpful? Certainly! Should we get in the habit of looking at both when tracking events or running A/B tests? I certainly think so. In my experience it has often helped me identify bugs or bad UI/UX. It also allows us to get a better sense of real user behavior.
Comment on attachment 8719073 [details]
MozReview Request: Bug 1247146 - Send firstrun panel telemetry only for unique panel visits. r=mfinkle

https://reviewboard.mozilla.org/r/34871/#review32397

Based on the SQL I am using to create drop-rates, we do not need to add this code.
Since I have been looking at the raw data and using some SQL to do some simple analyses, I'm not sure we need the first patch either. We always have the experiment ("onboarding2-a", "onboarding2-b", "onboarding2-c") that we can use to group the panels together. We could just avoid the first patch and not worry about the overhead of the extra strings and tags. So far, I have not needed anything more than we already have to do analysis.
Comment on attachment 8717746 [details]
MozReview Request: Bug 1247146 - Make firstrun panel telemetry extras more specific. r=mfinkle

https://reviewboard.mozilla.org/r/34285/#review32399

As mentioned above, I don't think we need this patch either.
Just FTR, here is the SQL I am using to get counts:

select extras, count(distinct clientid)::float as count
from events
where extras in ('onboarding.0', 'onboarding.1', 'onboarding.2', 'onboarding.3', 'onboarding.4')
  and experiments like '%onboarding2-c%'
group by 1