Find a meaningful presentation for the CHECKERBOARD probe family
Categories
(Data Science :: Investigation, task, P1)
Tracking
(data-science-status Evaluation & interpretation)
People
(Reporter: tdsmith, Assigned: tdsmith)
References
Details
Comment 1•6 years ago
Comment 2•6 years ago
Some more iteration in https://dbc-caf9527b-e073.cloud.databricks.com/#notebook/86542/command/86586.
I think it makes sense to essentially treat these like crash rates, since they are individually rare-ish (most user-days have zero events, and we only have one or two days of activity per user for each nightly build) and depend on active use of the browser (so active_ticks models exposure to the risk).
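As a minimal sketch of that crash-rate-style treatment: with hypothetical per-user-day records of (event count, active_ticks) — the field names and values below are illustrative, not the notebook's actual schema — the population rate is events per active tick, treating counts as Poisson with active_ticks as the exposure term.

```python
import math

# Hypothetical per-user-day records: (event_count, active_ticks).
# Values are made up for illustration.
user_days = [(0, 1200), (1, 3400), (0, 800), (2, 5000), (0, 2600)]

events = sum(n for n, _ in user_days)
exposure = sum(t for _, t in user_days)

# Poisson rate with active_ticks as exposure, plus a rough
# normal-approximation 95% confidence interval.
rate = events / exposure
se = math.sqrt(events) / exposure
ci = (rate - 1.96 * se, rate + 1.96 * se)
print(rate, ci)
```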
Discarding the least severe events (severity < 500) and plotting the population event rate per active tick (after truncating both to the 99th percentile per user-day) looks stable over time and comparable for WR vs. control.
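The truncation-and-ratio step could look roughly like this, assuming a pandas frame of per-user-day counts; branch labels and numbers are invented for the sketch.

```python
import pandas as pd

# Illustrative per-user-day frame, after dropping severity < 500 events
# upstream; column names are assumptions, not the real schema.
df = pd.DataFrame({
    "branch": ["WR", "WR", "WR", "control", "control", "control"],
    "events": [0, 2, 40, 0, 1, 3],
    "active_ticks": [3000, 8000, 100000, 2500, 7000, 90000],
})

# Truncate both columns to their 99th percentile to tame outlier user-days.
for col in ["events", "active_ticks"]:
    df[col] = df[col].clip(upper=df[col].quantile(0.99))

# Population event rate per active tick, by branch.
agg = df.groupby("branch")[["events", "active_ticks"]].sum()
agg["rate"] = agg["events"] / agg["active_ticks"]
print(agg)
```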
Plotting active_ticks-scaled "badness" (the sum of log10(severity)) over the population shows identical-looking trends; it would be nice to capture that some events are worse than others, but then we lose the ability to treat it as a Poisson process.
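The badness score above amounts to something like the following sketch; the severities are invented examples (all >= 500, matching the earlier filter).

```python
import math

# Hypothetical severities for one slice of the population.
severities = [500, 1200, 250000]
active_ticks = 40000

# Weighting events by log10(severity) captures that some events are worse
# than others, at the cost of the simple Poisson-count interpretation.
badness = sum(math.log10(s) for s in severities) / active_ticks
print(badness)
```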
Next step is to add this to the WebRender dashboard.
Comment 3•6 years ago
Added this to the 67 release monitoring dashboard. This should go on the continuous monitoring dashboard as well.
Comment 4•6 years ago
Done in https://github.com/tdsmith/webrender-dashboard/commit/24840237e9e0071c224940ea96e72c42415c761f.