We need to add a derived column for whatever search volume metric we want to review in the experiments viewer. The first step is defining which metrics we want to look at. Dave, what do you think? My initial thoughts are to create two new metrics: sap_searches and attributed_searches. sap_searches will be all chrome-sap searches; attributed_searches will be the sum of in-content-sap and follow-on searches. See the search handbook (https://github.com/harterrt/search-adhoc-analysis/tree/master/docs) for definitions of these terms. Once we have the metrics defined, I will start a PR to get them into main_summary.
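To make the two proposed metrics concrete, here is a minimal sketch of how they might be derived, assuming per-ping search counts are already broken out by search type. The field names ("chrome-sap", "in-content-sap", "follow-on") mirror the handbook terms above but are illustrative, not actual main_summary column names.

```python
def derive_search_metrics(counts):
    """counts: dict of search-type -> total, e.g. {'chrome-sap': 5, ...}.

    Returns the two proposed derived metrics. Keys here are hypothetical.
    """
    sap_searches = counts.get("chrome-sap", 0)
    # attributed_searches sums in-content SAP searches and follow-on searches.
    attributed_searches = counts.get("in-content-sap", 0) + counts.get("follow-on", 0)
    return {"sap_searches": sap_searches, "attributed_searches": attributed_searches}
```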
Sorry for the delay! Those sound like good general summaries. We may want to add others in the future as we develop a better understanding of the range of questions the experiment viewer will be used to answer about search. Based on these aggregates, I recommend making comparisons in the experiment viewer in terms of total searches per profile and total searches per active hour (over some time period, e.g. daily).
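A quick sketch of the two suggested comparison rates, assuming per-profile daily aggregates with total searches and active hours already computed (the field names are hypothetical):

```python
def search_rates(profiles):
    """profiles: list of dicts with 'searches' and 'active_hours' per profile.

    Returns (searches per profile, searches per active hour) for the group.
    """
    n = len(profiles)
    total_searches = sum(p["searches"] for p in profiles)
    total_hours = sum(p["active_hours"] for p in profiles)
    # Guard against empty groups / zero usage to avoid division by zero.
    searches_per_profile = total_searches / n if n else 0.0
    searches_per_active_hour = total_searches / total_hours if total_hours else 0.0
    return searches_per_profile, searches_per_active_hour
```

In the viewer, these would be computed per experiment branch and compared across branches.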
Let's start with just one search metric: SAP searches. This metric can be calculated by taking the sum of the SEARCH_COUNTS histogram for all keys in this whitelist: https://github.com/mozilla/python_mozetl/blob/master/mozetl/constants.py#L5. Eventually, we'll want to be able to slice this metric by engine, country, and search access point. For now, let's just get a safety metric implemented.
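The calculation above can be sketched roughly as follows. This assumes SEARCH_COUNTS keys have the form "engine.source" and filters on the source component; the whitelist entries shown are illustrative placeholders, not the actual constants from the linked file.

```python
# Illustrative stand-in for the whitelist in mozetl/constants.py;
# see the linked file for the real values.
SAP_SOURCE_WHITELIST = {"urlbar", "searchbar", "abouthome", "newtab", "contextmenu", "system"}

def sap_searches(search_counts):
    """search_counts: dict mapping 'engine.source' keys to counts.

    Sums counts for keys whose source component is in the SAP whitelist.
    """
    total = 0
    for key, count in search_counts.items():
        # Split on the last '.' so engine names containing dots still parse.
        _engine, _, source = key.rpartition(".")
        if source in SAP_SOURCE_WHITELIST:
            total += count
    return total
```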
After discussing this more, I think this metric will probably end up in the experiment artifact generator before it is shown in the experiment viewer. Let's close this bug for now and track this in Bug 1426163.
Status: NEW → RESOLVED
Last Resolved: 5 months ago
Resolution: --- → FIXED
See Also: → bug 1426163