Allow custom_distribution metrics with more than 100 buckets
Categories
(Data Platform and Tools :: Glean: SDK, enhancement, P1)
Tracking
(firefox136 fixed)
People
(Reporter: chutten, Assigned: chutten)
References
(Blocks 1 open bug)
Details
(Whiteboard: [telemetry-parity])
Attachments
(3 files)
There are 27 histograms of kind enumerated
in Firefox Desktop with 100 buckets or more:
SYSTEM_FONT_FALLBACK_SCRIPT 110
FIREFOX_VIEW_CUMULATIVE_SEARCHES 100
HTTP_REQUEST_PER_PAGE_FROM_CACHE 101
SSL_HANDSHAKE_RESULT 672
SSL_HANDSHAKE_RESULT_FIRST_TRY 672
SSL_HANDSHAKE_RESULT_CONSERVATIVE 672
SSL_HANDSHAKE_RESULT_ECH 672
SSL_HANDSHAKE_RESULT_ECH_GREASE 672
HTTP3_CONNECTION_CLOSE_CODE_3 100
UPDATE_PREF_UPDATE_CANCELATIONS_EXTERNAL 100
UPDATE_PREF_UPDATE_CANCELATIONS_NOTIFY 100
UPDATE_PREF_UPDATE_CANCELATIONS_SUBSEQUENT 100
UPDATE_STATUS_ERROR_CODE_COMPLETE_STARTUP 100
UPDATE_STATUS_ERROR_CODE_PARTIAL_STARTUP 100
UPDATE_STATUS_ERROR_CODE_UNKNOWN_STARTUP 100
UPDATE_STATUS_ERROR_CODE_COMPLETE_STAGE 100
UPDATE_STATUS_ERROR_CODE_PARTIAL_STAGE 100
UPDATE_STATUS_ERROR_CODE_UNKNOWN_STAGE 100
SECURITY_UI 100
SSL_REASONS_FOR_NOT_FALSE_STARTING 512
SSL_CERT_VERIFICATION_ERRORS 100
SSL_CT_POLICY_NON_COMPLIANT_CONNECTIONS_BY_CA_2 256
CERT_VALIDATION_SUCCESS_BY_CA_2 256
CERT_PINNING_FAILURES_BY_CA_2 256
CERT_PINNING_MOZ_RESULTS_BY_HOST 512
CERT_PINNING_MOZ_TEST_RESULTS_BY_HOST 512
GFX_CRASH 100
To migrate these to custom_distribution metrics, we'll need a way to make a linear histogram with more than 100 buckets (because an enumerated histogram with n_values: n results in a histogram with n + 1 buckets).
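The n + 1 relationship above can be sketched in a few lines: an enumerated histogram with n_values: n gets one bucket per value 0 through n - 1, plus an overflow bucket. The helper below is illustrative only, not part of glean_parser.

```python
# Illustrative sketch: why an enumerated histogram with n_values: n
# needs n + 1 linear buckets when migrated to a custom_distribution.
# One bucket per value 0..n-1, plus one overflow bucket for values >= n.

def linear_buckets_for_enumerated(n_values):
    """Bucket lower bounds for a migrated enumerated histogram."""
    return list(range(n_values + 1))

# n_values: 100 already yields 101 buckets, over the current limit.
print(len(linear_buckets_for_enumerated(100)))  # 101
# SSL_HANDSHAKE_RESULT (n_values: 672) would need 673 buckets.
print(len(linear_buckets_for_enumerated(672)))  # 673
```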
Maybe we want the >100-bucket thing to be a lint, which we can no_lint on a case-by-case basis? Or maybe we want to remove the restriction entirely, since custom_distribution is friction-y enough already?
Whatever we choose, we need a way to support these wide and sparse histograms.
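For concreteness, a migration of one of the histograms above could look like this hypothetical metrics.yaml fragment. The category, metric name, and description are illustrative; the fields follow custom_distribution's schema, where bucket_count is currently capped at 100.

```yaml
# Hypothetical metrics.yaml fragment migrating SSL_HANDSHAKE_RESULT
# (n_values: 672) to a custom_distribution. Names are illustrative.
ssl:
  handshake_result:
    type: custom_distribution
    description: SSL handshake result codes.
    range_min: 0
    range_max: 672
    bucket_count: 673   # n_values: 672 -> 673 buckets; over the current 100 limit
    histogram_type: linear
    # ...plus the usual required metadata (bugs, data_reviews,
    # notification_emails, expires).
```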
Comment 1•12 days ago
I'm okay with removing the maximum: 100 restriction in the metrics schema for custom distributions, to allow glean_parser to generate these, or with raising it to an arbitrarily high number that accommodates the use cases above (how about 1000, or 1024 if you prefer powers of 2).
The lint seems more annoying than it's worth: I don't see the bucket count really being abused by anyone, nor do I know what harm such abuse would cause.
You might want to double-check probe-scraper, GLAM, and any other downstream tooling that might have adopted the 100-bucket limit as fixed. I don't think there are any hard requirements, but some assumptions may have been made around custom distributions for convenience.
proposal r+
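If the lint route were taken instead, it could look something like the sketch below: flag custom_distribution definitions whose bucket count exceeds the limit unless the metric opts out via no_lint. The function, field names, and the "MANY_BUCKETS" lint name are all illustrative, not glean_parser's actual API.

```python
# Sketch of the per-metric lint discussed above. All names here are
# hypothetical; glean_parser's real lint machinery differs.

MAX_BUCKETS = 100

def check_bucket_count(name, metric):
    """Return a lint warning string, or None if the metric passes."""
    if metric.get("type") != "custom_distribution":
        return None
    if "MANY_BUCKETS" in metric.get("no_lint", []):
        return None  # case-by-case opt-out, as proposed in the description
    buckets = metric.get("bucket_count", 0)
    if buckets > MAX_BUCKETS:
        return f"{name}: bucket_count {buckets} exceeds {MAX_BUCKETS}"
    return None

# A migrated enumerated histogram with n_values: 672 would trip the lint
# unless allow-listed.
print(check_bucket_count(
    "ssl.handshake_result",
    {"type": "custom_distribution", "bucket_count": 673},
))
```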
Assignee
Updated•12 days ago
Comment 2•12 days ago
Assignee
Comment 3•12 days ago
Comment 4•12 days ago
Assignee
Comment 5•12 days ago
Assignee
Comment 6•12 days ago
Comment 8•9 days ago
bugherder