Improve display of use counters

Component: Telemetry Dashboard

(Reporter: gfritzsche, Assigned: chutten, Mentored)



Firefox Tracking Flags

(Not tracked)


Per bug 1409865, use counters should probably always be interpreted relative to TOP_LEVEL_CONTENT_DOCUMENTS_DESTROYED & CONTENT_DOCUMENTS_DESTROYED:

I put a temporary hacky dashboard up here:

We should think about how to properly deal with use counters in the future.
Should they have specialized views in the TMO dashboard?
Or should they just have their own dashboard?
This will need a bit of understanding of the future of the TMO dashboards.


25 days ago
Duplicate of this bug: 1409865
So, digging a bit with others, it turns out that as a follow-up to bug 1204994, the aggregator does some processing of USE_COUNTERS2_*:

AFAICT, this does the following:
(1) we leave the total value in the "1" bucket
(2) we add a normalized value in the "0" bucket [1]: (total - 1-bucket-sum)
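As I understand it, that back-fill step amounts to the following sketch (illustrative names and numbers, not the aggregator's actual code):

```python
def normalize_use_counter(bucket_1_sum, total_documents):
    """Return a {bucket: count} histogram where bucket 1 keeps the raw
    client-side "feature was used" count, and bucket 0 is back-filled
    with the number of documents that did not use the feature."""
    return {
        0: total_documents - bucket_1_sum,  # documents that never hit the feature
        1: bucket_1_sum,                    # raw client value, left as-is
    }

hist = normalize_use_counter(bucket_1_sum=3, total_documents=10)
# hist == {0: 7, 1: 3}: 3 documents used the feature, 7 did not
```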

This means we might be showing correct values on the TMO dashboard.
It is confusing, though, as this context is hard to understand.

@frank,chutten: does that sound correct?

@chutten: with the information we have now, does my dashboard [2] look correct?
Should this match up with [3]?

Flags: needinfo?(fbertsch)
Flags: needinfo?(chutten)

Comment 4

24 days ago
The histogram as accumulated on the client only puts values into the 1 bucket. Then, for any client with at least one sample in the 1 bucket, the histogram is filled in by the aggregator with all of the 0s it should have due to pages it visited that didn't set a 1.

This means we don't aggregate pings which, during their subsessions, don't see a counted feature. We don't insert use counters full of 0s on the aggregator side (at least not that I have found).

This means TMO provides an accurate picture of how often a feature is used within sessions that have used it. This isn't useful in figuring out if a feature is used across the web, but it might be a better measure to guess how much users who _do_ see the feature actually care. This is a bit flimsy, but is similar to what we did for hangs for the longest time: disregard users who don't experience them.

This also means your dashboard provides an accurate picture of how often a feature is used across all sessions you've examined. This should provide a more useful value for how often a feature is "seen" throughout the portion of the Web experienced by the users reporting this data.

Your numbers should not line up with TMO's unless, for the matching use counter, every subsession saw at least one use of the feature.
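To illustrate why the two views diverge, here is a toy calculation (made-up subsession numbers, purely for demonstration):

```python
# Illustrative subsessions as (uses_of_feature, documents_destroyed).
subsessions = [(3, 10), (0, 5), (2, 4)]

# TMO-style view: only subsessions with at least one use are aggregated.
used = [(u, d) for u, d in subsessions if u > 0]
tmo_ratio = sum(u for u, _ in used) / sum(d for _, d in used)  # 5 / 14

# Dashboard-style view: normalize against every subsession examined.
overall_ratio = sum(u for u, _ in subsessions) / sum(d for _, d in subsessions)  # 5 / 19

# The (0, 5) subsession never saw the feature, so it is dropped from the
# TMO-style denominator; drop it and the two ratios become identical.
```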
Flags: needinfo?(chutten)

Comment 5

24 days ago
I expect casual users of use counters assume they are seeing a total % of page/document loads, not a % "of subsessions that use the feature at least once"; I did, and that's what the docs seem to say:
It seems like a bug to report anything else.
The current explanations look correct.

We could display % of page/document loads in TMO by adding some front-end-only work that takes the counter and TOP_LEVEL_CONTENT_DOCUMENTS_DESTROYED or CONTENT_DOCUMENTS_DESTROYED, and displays the ratio. In that case, it would just be "SUM(number of elements in the 1 bucket) / SUM(total)" for the selected dimensions.
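A sketch of that front-end computation, assuming we already have the aggregated {bucket: count} histograms for the selected dimensions (hypothetical helper, not existing TMO code):

```python
def use_counter_percentage(histograms):
    """Compute SUM(1-bucket) / SUM(total) over aggregated use-counter
    histograms, each a {0: count, 1: count} dict, as a percentage."""
    ones = sum(h.get(1, 0) for h in histograms)
    total = sum(h.get(0, 0) + h.get(1, 0) for h in histograms)
    return 100.0 * ones / total if total else 0.0

pct = use_counter_percentage([{0: 7, 1: 3}, {0: 15, 1: 5}])
# 8 uses over 30 documents, i.e. ~26.7%
```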
Flags: needinfo?(fbertsch)


3 days ago
Assignee: nobody → chutten
Priority: -- → P2

Comment 7

2 days ago
So the plan is: in TMO's front-end, use the true values from the USE_COUNTER2_* probes and normalize against the count of (TOP_LEVEL_)CONTENT_DOCUMENTS_DESTROYED in the pie charts, instead of using the false values.

No changes planned for the aggregator, though I suppose we'll be able to ditch the existing use-counter-related shenanigans if we want to.
We should probably also inform users of the fact that these values are proportional to top-level page / document loads.
Possibly in a follow-up?
Use counters don't have descriptions right now; would that be a place to put this context?