Collect `UseCounters` using Glean
Categories: Core :: DOM: Core & HTML, enhancement, P2
People: Reporter: nalexander; Assigned: chutten
References: Regressed 1 open bug
Attachments (15 files):
4.48 KB, text/plain | royang: data-review+ | Details
48 bytes, text/x-phabricator-request | Details | Review (7 attachments)
48 bytes, text/x-phabricator-request | RyanVM: approval-mozilla-beta+ | Details | Review (7 attachments)
Per Winston Churchill, "Never let a good crisis go to waste." The crisis here is that our UseCounter data is unreliable for some window (hopefully) ending with Bug 1845779. The opportunity here is to get the new UseCounter data into Glean telemetry so that we can transition a decent chunk of instrumentation away from Legacy telemetry.
The main advantages that I am aware of are:
- UseCounter data is large, bloating Legacy main pings
- UseCounter data is "wide", i.e., produces many columns, bloating main ping tables
A new Glean UseCounter ping would let us eventually streamline Legacy main pings and reduce main ping table width.
Reporter
Comment 1 • 1 year ago
I investigated this a little and found that there's a delicate interplay between build-time generation steps and run-time consumption where particular Legacy probes are indexed by number. See https://searchfox.org/mozilla-central/rev/61aca7cda4b217245d9af5aac5bf49bf0d5ecfe3/dom/ipc/WindowGlobalParent.cpp#1106-1120.
It would be helpful for the experts to confirm that this approach is possible, or to suggest alternate approaches that can be GIFFTed. :chutten?
Assignee
Comment 2 • 1 year ago
I'm not 100% sure I grasp the significance here, but I think it's "Today, WindowGlobalParents all store a shadow structure of use counters, indexed by a numeric (as in supporting arithmetic) id. How can we do this in FOG which supplies identifiers instead?" which sorta-kinda points to "Isn't it convenient that this enum maintains the same order as this contiguous block" as a thing to either emulate or avoid... but I digress.
Assuming that it remains important that use counters only be reported as the document is destroyed, this implies that the Document remains in need of a structure to store its use counters plus some way to map from its structure to Glean metric instances (specifically mozilla::glean::impl::RateMetric instances, but that's just detail). This mapping necessarily must be provided at the latest by codegen time since it isn't guaranteed to be (nor should it be) stable between builds.
So maybe some sort of GleanUseCounterMapping.h that has:

    /* static */ void IncrementUseCounter(mozilla::UseCounter aUseCounter) {
      switch (aUseCounter) {
        case eUseCounter_custom_DocumentExecCommandContentReadOnly:
          mozilla::glean::use_counters_page::document_exec_command_content_read_only.addToNumerator(1);
          mozilla::glean::use_counters_doc::document_exec_command_content_read_only.addToNumerator(1);
          break;
        // ... one case per use counter ...
      }
    }
The method by which we create the metrics in the use_counters.{doc|page} categories is still up in the air, of course. For now we could presumably force the generation of a checked-in metrics.yaml whenever a use counter is added or removed plus a mach command to autogen it. But it'll have telemetry_mirror: USE_COUNTER2_<appropriate stuff goes here>_{DOCUMENT|PAGE} and then GIFFT'll take care of the mirroring.
Does that help? Or did I answer the wrong questions?
Reporter
Comment 3 • 1 year ago
(In reply to Chris H-C :chutten from comment #2)
> I'm not 100% sure I grasp the significance here, but I think it's "Today, WindowGlobalParents all store a shadow structure of use counters, indexed by a numeric (as in supporting arithmetic) id. How can we do this in FOG which supplies identifiers instead?" which sorta-kinda points to "Isn't it convenient that this enum maintains the same order as this contiguous block" as a thing to either emulate or avoid... but I digress.
Yes, that's exactly correct. I saw the general scheme (index -> histogram ID) and wanted to get the experts (you!) to look and suggest how we could either do something similar or propose an alternative.
> Assuming that it remains important that use counters only be reported as the document is destroyed, this implies that the Document remains in need of a structure to store its use counters plus some way to map from its structure to Glean metric instances (specifically mozilla::glean::impl::RateMetric instances, but that's just detail). This mapping necessarily must be provided at the latest by codegen time since it isn't guaranteed to be (nor should it be) stable between builds. So maybe some sort of GleanUseCounterMapping.h that has /* static */ void IncrementUseCounter(mozilla::UseCounter aUseCounter) { switch(aUseCounter) { case eUseCounter_custom_DocumentExecCommandContentReadOnly: mozilla::glean::use_counters_page::document_exec_command_content_read_only.addToNumerator(1); mozilla::glean::use_counters_doc::document_exec_command_content_read_only.addToNumerator(1); break; ... }}.
> The method by which we create the metrics in the use_counters.{doc|page} categories is still up in the air, of course. For now we could presumably force the generation of a checked-in metrics.yaml whenever a use counter is added or removed plus a mach command to autogen it. But it'll have telemetry_mirror: USE_COUNTER2_<appropriate stuff goes here>_{DOCUMENT|PAGE} and then GIFFT'll take care of the mirroring.

Do we not have an in-tree example where we generate content that would otherwise be hand-written in a metrics.yaml?

> Does that help? Or did I answer the wrong questions?
Nope, this is what I was after. Some generated code that maps (index -> Glean metric) as a giant case statement is a fine approach, and slightly more robust than requiring a contiguous block with stable ordering.
Reporter
Comment 4 • 1 year ago
Alessio: as we discussed, this seems low hanging fruit at a particularly valuable point in time. Can your team jump on it?
Comment 5 • 1 year ago
(In reply to Nick Alexander :nalexander [he/him] from comment #4)
> Alessio: as we discussed, this seems low hanging fruit at a particularly valuable point in time. Can your team jump on it?
Yes, our team can. Will leave the ni? on me to come back with more information after talking to the team.
Assignee
Comment 6 • 1 year ago
(In reply to Nick Alexander :nalexander [he/him] from comment #3)
> The method by which we create the metrics in the use_counters.{doc|page} categories is still up in the air, of course. For now we could presumably force the generation of a checked-in metrics.yaml whenever a use counter is added or removed plus a mach command to autogen it. But it'll have telemetry_mirror: USE_COUNTER2_<appropriate stuff goes here>_{DOCUMENT|PAGE} and then GIFFT'll take care of the mirroring.
> Do we not have an in-tree example where we generate content that would otherwise be hand-written in a metrics.yaml?
We do not. The closest thing we have is the codegen regression suite (because it's easier to tell what a template change means when you see what it does to a whole file (see for example when I split the codegen into header and impl)). It runs codegen against the tree and compares it against the most recent known "good" version. If there's a mismatch, that's a testfail. Running the test with UPDATE_EXPECT=1 updates the in-tree version to match whatever's being generated.
A similar model could work for UseCounters where we write a test that generates the metrics.yaml output(s) for UseCounters and compares it (them) against the in-tree version(s), failing on mismatch. That'll keep us from landing/removing a UseCounter without updating the metrics definition.
That still leaves "how we generate metrics.yaml
(s) for Use Counters" as an exercise for the reader, mind. I presume it's at least possible since we did it for Histograms. We have some build scripts stuff in stuff in toolkit/components/glean/build_scripts/
that might be useful, but use-counter-specific stuff should probably live near dom/base/UseCounters.conf
since that's where the metrics.yaml
(s) will go.
Of course, a lot of this goes away if/when we get probe-scraper push-mode working. Glean metrics for Use Counters will lose some tool integration niceties like codesearch and definition file line references in Glean Dictionary (unless we find some way to advertise the patterns to search and the lines of e.g. UseCounters.conf to link to) since those are powered by metrics files being stored in the repo. But we ought to be able to live without them.
Comment 7 • 1 year ago
Talked to Chris, who will take this within H2 after the current webidl/PingCentre engineering work.
Assignee
Comment 8 • 1 year ago
I've been doing some reading (of the docs and the code (no, I didn't look at all of that, I perused dom/ and t/c/t and traced back some codegen python scripts)) and have (re-)learned things.
- We have two kinds of Use Counter: Normal and Worker. Each of those has sub-kinds for which we require separate counts:
  - Normal use counters have sub-kinds for Document (any frame anywhere) and Page (top-level content only).
  - Worker use counters have sub-kinds for Dedicated, Shared, and Service workers.
- The Normal kind of Use Counter has a few different families of Use Counters that all come together in an enum. There are families of use counters for:
  - [UseCounter]-annotated and UseCounters.conf-configured DOM attributes and methods (which get codegen'd by dom/base/usecounters.py to UseCounterList.h)
  - deprecated operations manually defined in nsDeprecatedOperationList.h
  - each and every CSS property, lists of which are sourced from servo/components/style/properties via GenerateServoCSSPropList.py
  - and some CSS properties which we consider "unknown" to us but we nevertheless count (counted_unknown_properties.py)
- The Worker kind of Use Counter doesn't have a document, so none of the CSS use counters and few of the API ones make sense. But some [UseCounter]-annotated Web IDL methods/attributes are configured in UseCountersWorker.conf (at time of writing it's mostly console.*) and get use counters for each of the three sub-kinds in its own enum.
- These two enums are the indices into storage that documents and workers keep about themselves so that, on dtor, they can report the list of these uses that need to be counted.
- To make these counts useful it is important to have a count of the total number of opportunities there were for these features to be used. (This is because use counters are, despite the name, more useful not as counts but as rates.) For legacy reasons these denominators are Histograms with kind: count (instead of the more efficient Scalar with kind: uint):
  - For Normal-kind use counters, the denominators are TOP_LEVEL_CONTENT_DOCUMENTS_DESTROYED for Page and CONTENT_DOCUMENTS_DESTROYED for Document.
  - For Worker-kind use counters, the denominators are {DEDICATED|SHARED|SERVICE}_WORKER_DESTROYED as appropriate.
- The mapping from enum value to Use Counter Histogram Id is conducted via arithmetic: the order of the use counters in their enums matches the order of the use counters' histogram ids in its enum.
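To illustrate that last point, here is a rough, self-contained sketch of the index arithmetic (names and values are made up for illustration; the real lookup lives in the Telemetry histogram machinery referenced in comment 1):

    // Illustrative only: simplified stand-ins for the real enums. The point is
    // that the use counter histogram ids form one contiguous block in the same
    // order as the UseCounter enum, so the mapping is pure arithmetic.
    #include <cstddef>

    enum class UseCounter : size_t {
      eFirstCounter,
      // ... one entry per use counter, in generated order ...
      eUseCounter_Count,
    };

    enum class HistogramID : size_t {
      // ... unrelated histograms ...
      HistogramFirstUseCounter = 2000,  // made-up value
      // ... the use counter histograms, in the same generated order ...
    };

    HistogramID UseCounterHistogram(UseCounter aCounter) {
      return static_cast<HistogramID>(
          static_cast<size_t>(HistogramID::HistogramFirstUseCounter) +
          static_cast<size_t>(aCounter));
    }

Glean metrics are addressed by name rather than by index, which is why the plan below replaces this arithmetic with a generated switch like the one sketched in comment 2.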
The plan that's taking shape in my brain at the moment is:
- Leverage the same codegen machinery that DOM uses to generate enum class UseCounter and UseCounterWorker to generate metrics definitions for each use counter as a rate metric and each denominator as a counter metric into one or more metrics.yaml files in dom/base/. By generating them into the source dir, that'll keep us from having to build any magic machinery to replicate this in the pipeline. I expect these to play like Cargo.lock - if you update something relevant the codegen will update the file fully for you and devs would just need to include it in their patchsets.
- Augment UseCounter.h with functions that can take a UseCounter or UseCounterWorker and accumulate to the correctly-mapped metrics. Call them and increment appropriate denominators during page, document, and worker destruction.
  - See whether changes to metrics in other parts of the code cause large portions of DOM to be rebuilt. If so, consider splitting GleanMetrics.h like StaticPrefList_*.h either as a part of this or as a follow-up.
- Get Data Review for use counters as a concept. New use counters presently do not require data review because they all are being collected from the same products for the same reasons and are treated in the same way. We shouldn't change that for any additions in the future. Glean being opinionated about ensuring there's Data Reviews for everything means we'll need something to link to.
- Don't send these rate metrics or the denominator counter metrics in the "metrics" ping. Instead construct a "use-counters" ping.
- Schedule the "use-counters" ping to be submitted every (uh, I dunno, say) 48h using update-timer. I don't know which component to integrate this against and will reach out to DOM Peers to ask.
  - I also don't know if update-timer is available everywhere we ship Gecko. If it's only available in Desktop then my backup plan for non-update-timer-having Gecko-using projects is to submit "use-counters" pings on every application startup until and unless a cross-session long-interval timer solution presents itself for that project.
- Move the Use Counters documentation to dom/docs and update it for the new situation. (At the very least mention something about workers in there and link to the Glean Dictionary.)
- Conduct system validation (do all of the different kinds of use counters show up? Does the data smell correct compared to Telemetry-sent or Google-reported data? What does the data from Fenix look like?)
- Develop an ETL for turning this into a useful public dataset. I'm thinking a table with columns received_day, product_name, geo.country, geo.subdivision, use_counter_name, rate or something like that. I'll be following the process.
- Throw together a hideous but functional use of the public dataset and link to it on https://telemetry.mozilla.org (perhaps not dissimilar to https://mozilla.github.io/usecounters/index.html ). The hideousness is important to encourage someone else (who knows how to make things pretty) to make a proper one for, say, data.firefox.com. Or MDN.
- Remove the Telemetry impl of Use Counters from Firefox Desktop. File a ticket for removing it from the pipeline, too (eventually).
- Send an email to fx-data-dev@, firefox-dev@, dev-platform@ about all this. Post a message to #dom:mozilla.org, #fx-desktop-dev:mozilla.org, and Slack#firefox.
- Rest.
I intend to conduct most discussions in Phabricator over patchsets of proposed code changes, but this is a good opportunity for early feedback. So let's give it a go:
Emilio: what say you? Is my understanding correct? Is my plan flawed? (I know it's definitely incomplete). Would the proposed changes fit well into how DOM does use counters? Who else should I ask (as well or instead)?
Comment 9 • 1 year ago
The parts of the plan that touch stuff I'm familiar with seem reasonable at a glance. Another interesting bit that I don't see mentioned is that page use counters aren't reported on document destruction, but in the parent process. But that should not change the setup significantly afaict, since AIUI I think you still want to keep something similar to the current code that keeps the use counters bitfield.
Assignee
Comment 10 • 1 year ago
(In reply to Emilio Cobos Álvarez (:emilio) from comment #9)
> ...AIUI I think you still want to keep something similar to the current code that keeps the use counters bitfield.
That is correct. If we wanted to count how often features are used within documents then we wouldn't need it (we could instead tell the Glean SDK about each and every use of each countable feature), but if we want to count the number of docs that use things at least once, then storage external to Glean, like the bitfield, is needed until the doc has finished doing stuff that might trigger the use of a feature.
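For illustration, a minimal, self-contained sketch of that shape (the type and function names here are hypothetical stand-ins, not the actual Document code):

    #include <bitset>
    #include <cstddef>

    // Hypothetical stand-ins for the real Gecko types discussed above.
    enum class UseCounter : size_t {
      eUseCounter_custom_DocumentExecCommandContentReadOnly,
      // ... one entry per countable feature ...
      eUseCounter_Count,
    };

    // Stand-in for the generated enum-to-Glean-metric mapping from comment 2.
    void IncrementUseCounter(UseCounter aCounter);

    // Per-document storage: flip bits as features are used, flush exactly once
    // when the document is done doing things that could trigger a use counter.
    class DocumentUseCounters {
     public:
      void NoteUse(UseCounter aCounter) {
        mCounters.set(static_cast<size_t>(aCounter));
      }

      void FlushToGlean() {
        for (size_t i = 0; i < mCounters.size(); ++i) {
          if (mCounters.test(i)) {
            IncrementUseCounter(static_cast<UseCounter>(i));
          }
        }
      }

     private:
      std::bitset<static_cast<size_t>(UseCounter::eUseCounter_Count)> mCounters;
    };

The real code additionally forwards page-level counts to the parent process (comment 9), but the shape is the same: accumulate locally, flush once.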
Assignee
Comment 11 • 1 year ago
In a conversation on #build:mozilla.org, :glandium tells me that the plan I had for a generated-during-build, srcdir-resident, checked-in metrics.yaml is not only not currently done but is expressly forbidden. Which kinda sucks, since that was going to solve several overlapping problems of usability, scrutability, and visibility... but if it's forbidden it'll be for good reasons.
I'll see about rustling up an alternative method for accomplishing this. Maybe push-mode probe-scraper for mozilla-central is more possible these days. Or maybe a manual step is acceptable while we're working our way through the problem.
Assignee
Comment 12 • 1 year ago
Further conversations suggest push-mode probe-scraper for mozilla-central is out of scope.
Instead I'll aim for a mach command (say mach gen-usecounter-metrics) to generate the use counter metrics.yaml(s). We could rely on the build process to error out if a dev adds a use counter without running mach gen-usecounter-metrics (because the map from UseCounter to metric will presume the presence of a specifically-named metric that won't exist). If upon testing those ergonomics don't suit (they might, but then again "C++ error message" doesn't have the best reputation for comprehensibility), we could run gen-usecounter-metrics as part of the build and compare it to the one in srcdir and fail the build with an ergonomic error if they don't match.
Comment 13 • 1 year ago
Is uploading this file as an artifact to taskcluster instead feasible? That'd be much better. We used to get backed out all the time for forgetting to regenerate the property database...
Assignee
Comment 14 • 1 year ago
It could be in the future. Right now, though, probe-scraper looks for these files inside the repo, so it'd require some feature work to get the pipeline to look for things in something that isn't version control.
I think "the build doesn't work" with or without an ergonomic message about what to do about it ought to keep this from getting far enough that backouts are needed. Feedback will be provided on the developer's machine, for sure... unless the developer neglects to build before submitting a patch. In which case maybe we'll need to look into a commit hook.
Assignee
Comment 15 • 1 year ago
DECISION: I'm going to record use counters as counter metrics, not rate metrics, as an optimization. rate metrics will report even if the numerator is 0, which we expect for the vast majority of all the counters. Normally, this would be an unnecessary optimization because 1) The actual impact of using rate vs counter is small per metric, 2) Analysis ergonomics are more important than bytes on the wire or in the db. However, in the case of Use Counters (1) is magnified significantly by the volume of use counters, and (2) is less impactful because we expect ETL to munge the data into derived datasets for common analyses anyway (ie, analysis ergonomics will be addressed by ETL).
Of course, this decision's only based on what I know, so if there's something I don't know or dismissed about how this data is used or ideally represented, I'm happy to review and reverse it.
Assignee
Comment 16 • 1 year ago
Let's get this conversation started.
Comment 17 • 1 year ago
(In reply to Chris H-C :chutten from comment #15)
> DECISION: I'm going to record use counters as counter metrics, not rate metrics as an optimization. rate metrics will report even if the numerator is 0, which we expect for the vast majority of all the counters. Normally, this would be an unnecessary optimization because 1) The actual impact of using rate vs counter is small per metric, 2) Analysis ergonomics are more important than bytes on the wire or in the db. However, in the case of Use Counters (1) is magnified significantly by the volume of use counters, and (2) is less impactful because we expect ETL to munge the data into derived datasets for common analyses anyway (ie, analysis ergonomics will be addressed by ETL).
> Of course, this decision's only based on what I know, so if there's something I don't know or dismissed about how this data is used or ideally represented, I'm happy to review and reverse it.
I think I'm fine with this, as long as this can be used to power the use-cases we need to support: the current https://mozilla.github.io/usecounters/index.html and things such as this https://chromestatus.com/metrics/feature/timeline/popularity/3581
Note that in this case we'll need to document that "missing from the ping" implies "0".
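A small sketch of what that means for downstream consumers (shapes and names are illustrative, not an actual ETL implementation): counters that were never incremented are simply absent from the ping, so analysis treats "missing" as zero before dividing by the shared denominator.

    #include <cstdint>
    #include <optional>

    // Illustrative only: reconstructing a use counter "rate" from counter
    // metrics, where a counter missing from the ping means it was never
    // incremented (i.e. 0).
    double UseCounterRate(std::optional<int64_t> aCount, int64_t aDenominator) {
      const int64_t count = aCount.value_or(0);  // missing implies 0
      return aDenominator > 0 ? static_cast<double>(count) / aDenominator : 0.0;
    }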
Assignee
Comment 18 • 1 year ago
Comment on attachment 9362621 [details]
data collection review request
(load balancing)
Comment 19 • 1 year ago
Comment on attachment 9362621 [details]
data collection review request
DATA COLLECTION REVIEW RESPONSE:
- Is there or will there be documentation that describes the schema for the ultimate data set in a public, complete, and accurate way?
Yes, through the Glean Dictionary.
- Is there a control mechanism that allows the user to turn the data collection on and off?
Yes. Data upload can be controlled in the product's preferences. For Firefox Desktop, that's in about:preferences#privacy.
- If the request is for permanent data collection, is there someone who will monitor the data over time?
Yes, Emilio and the DOM Core team want to permanently monitor this data.
- Using the category system of data types on the Mozilla wiki, what collection type of data do the requested measurements fall under?
Category 1, Technical data
- Is the data collection request for default-on or default-off?
Default on for all channels.
- Does the instrumentation include the addition of any new identifiers?
No
- Is the data collection covered by the existing Firefox privacy notice?
Yes
- Does the data collection use a third-party collection tool?
No
Result: data-review+
Assignee
Comment 20 • 1 year ago
DECISION: Going to start with a default ping scheduling of once per app session, aiming for AppShutdownConfirmed (so we still have network). It's more often than I'd like but still less often than the "main" ping. The most important things are that it'll be easy to understand, easy to implement, easy to maintain, and portable across products (fully contained inside dom/base. Emilio suggests nsContentUtils.).
Assignee
Comment 21 • 1 year ago
Assignee
Comment 22 • 1 year ago
Takes use counter definitions and generates a use_counter_metrics.yaml with counter metric definitions for each use counter and denominator.
Total use counters at time of writing: 2271 (excludes denominators)
Depends on D193245
Assignee
Comment 23 • 1 year ago
Depends on D193246
Assignee
Comment 24 • 1 year ago
Depends on D193247
Assignee
Comment 25 • 1 year ago
Depends on D193248
Assignee
Comment 26 • 1 year ago
Depends on D193249
Assignee
Comment 27 • 1 year ago
Also, remove vestigial canRecordExtended push and pop since we're in the neighbourhood.
Depends on D193250
Comment 28 • 1 year ago
Comment 29 • 1 year ago
Backed out for causing build bustages in UseCounterMetrics.cpp.
- Backout link
- Push with failures
- Failure Log
- Failure line: /builds/worker/workspace/obj-build/dom/base/UseCounterMetrics.cpp:98:34: error: no member named 'js_late_weekday' in namespace 'mozilla::glean::use_counter_page'
Assignee
Comment 30 • 1 year ago
Blast. This is because https://bugzilla.mozilla.org/show_bug.cgi?id=1862922#c18 hit autoland before I did. I get to either wait for it to hit central or find some other way of generating the necessary metrics...
Comment 31 • 1 year ago
Comment 32 • 1 year ago
Backed out for bustages on UseCounterMetrics.h
Backout link: https://hg.mozilla.org/integration/autoland/rev/2faf4549970af89872e47b208d1a6b71f7497065
Log link: https://treeherder.mozilla.org/logviewer?job_id=436404620&repo=autoland&lineNumber=32021
Assignee
Comment 33 • 1 year ago
Ah, non-unified builds need to pass, too. This required some header rejiggering, but now we're clear. However, we're now in soft freeze so this'll have to wait.
Comment 34 • 1 year ago
Original Revision: https://phabricator.services.mozilla.com/D193245
Comment 35 • 1 year ago
Takes use counter definitions and generates a use_counter_metrics.yaml with counter metric definitions for each use counter and denominator.
Total use counters at time of writing: 2271 (excludes denominators)
Original Revision: https://phabricator.services.mozilla.com/D193246
Depends on D193245
Depends on D194225
Comment 36 • 1 year ago
Original Revision: https://phabricator.services.mozilla.com/D193247
Depends on D193246
Depends on D194227
Comment 37 • 1 year ago
Original Revision: https://phabricator.services.mozilla.com/D193248
Depends on D193247
Depends on D194228
Comment 38 • 1 year ago
Original Revision: https://phabricator.services.mozilla.com/D193249
Depends on D193248
Depends on D194229
Comment 39 • 1 year ago
Original Revision: https://phabricator.services.mozilla.com/D193250
Depends on D193249
Depends on D194230
Comment 40 • 1 year ago
Also, remove vestigial canRecordExtended push and pop since we're in the neighbourhood.
Original Revision: https://phabricator.services.mozilla.com/D193491
Depends on D193250
Depends on D194231
Comment 41 • 1 year ago
Uplift Approval Request
- Fix verified in Nightly: yes
- User impact if declined: Current Use Counter telemetry is unreliable, so failure to uplift would remove some capability to measure browser issues.
- Code covered by automated testing: yes
- Explanation of risk level: Most of the code is new and not user facing
- Needs manual QE test: no
- Is Android affected?: yes
- Steps to reproduce for manual QE testing: n/a
- String changes made/needed: no
- Risk associated with taking this patch: Low
Comment 42 • 1 year ago
Comment 43 • 1 year ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/d9904c352643
https://hg.mozilla.org/mozilla-central/rev/d0a97916fda2
https://hg.mozilla.org/mozilla-central/rev/71e48679c251
https://hg.mozilla.org/mozilla-central/rev/fdbcb0c8a71e
https://hg.mozilla.org/mozilla-central/rev/67fb8e7deeed
https://hg.mozilla.org/mozilla-central/rev/d1d1111b95d5
https://hg.mozilla.org/mozilla-central/rev/238d07f2df95
Comment 44 • 1 year ago
Comment on attachment 9364726 [details]
Bug 1852098 - Appease python linters
Approved for 121.0b4.
Comment 45 • 1 year ago
uplift
Comment 46 • 1 year ago
Given the regressions should we back out from beta so we have more time to address them?
Assignee
Comment 47 • 1 year ago
I don't have a sense for the severity of the regressions, but my feelings make me think that a few dozen kb on an empty process doesn't warrant a backout, especially since we'll be getting it back when we remove the Telemetry-based ones. My set-point for what's a big deal might need changing, though. What's your experience telling you?
Comment 48 • 1 year ago
(In reply to Emilio Cobos Álvarez (:emilio) from comment #46)
> Given the regressions should we back out from beta so we have more time to address them?
FWIW, I would recommend taking the temporary regression right now, as the short-term plan is to deprecate the legacy stuff. Stand by for details in bug 1866834.
Comment 49 • 1 year ago
4% of an empty process seems pretty big, but Andrew has a better sense for that kind of stuff.
In any case, taking the regression temporarily seems reasonable if we're confident that removing the old ones would recover the regression. Can we at least push a try run with that removed to confirm that's the case?
Assignee
Comment 50 • 1 year ago
Well, it's 4% of only the unclassified heap on an empty process. It's 0.4% of the sum of unclassified, JS, and explicit. I've pushed a try run for the related installer size regression (bug 1866479), but neglected to run the awsy suites (until just now. Thank you for the reminder).
Oh, I should mention that I did notice us reclaiming a similar absolute amount of memory from explicit heap in local testing, but my local n is small and my local machine might not be as stable a test bed as try, so the range was wide.