(In reply to Alessio Placitelli [:Dexter] from comment #23)
> (In reply to Greg Mierzwinski [:sparky] from comment #22)
> > Can you elaborate on why? Telemetry adds noise to our performance tests, and this is very well known (we disable it on FF). Having it enabled increases test variance, which diminishes our ability to detect smaller regressions and raises our chances of missing them, making slow regressions more likely. Could we run a bi-weekly test with telemetry enabled instead?
> 
> Because you would be testing the performance of a product that's different from the one you'll be releasing. Telemetry is a fundamental subsystem and might affect performance as well. The product might be lightning fast in tests because of that and somewhat slower (I'm not saying it is!) in the wild.
> 
> Moreover, we'd be testing and measuring metrics that are different from the ones telemetry collects in the wild, while having telemetry on would allow us to capture telemetry from performance tests as well, giving us a baseline for comparing CI builds against real builds in the wild. Some teams are actually pursuing this.
> 
> With that said, of course, while I believe that disabling telemetry would not be the ideal solution here, I don't own performance telemetry :-)

I fully agree that without telemetry we aren't testing the same thing we're releasing, but we also catch fewer regressions when it's on. The biggest source of variance with telemetry enabled is the extra external network requests. Would it be possible to at least disable those in the tests used for regression detection? A rough sketch of what I mean is below.
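For what it's worth, here's a minimal sketch of the kind of thing I have in mind: keep the telemetry code paths running, but make sure no pings leave the test machine. The prefs below are the desktop ones (`datareporting.healthreport.uploadEnabled`, `datareporting.policy.dataSubmissionEnabled`, `toolkit.telemetry.server`); I'm assuming Fenix/Glean would need an equivalent knob, so treat the localhost endpoint and the profile path as placeholders.

```python
# Sketch: write no-upload telemetry prefs into a test profile's user.js
# so perf runs don't make external telemetry requests. Pref names are
# the desktop Firefox ones; the endpoint URL and path are placeholders.
import json
from pathlib import Path

TELEMETRY_PREFS = {
    # Keep telemetry running internally, but never upload pings.
    "datareporting.healthreport.uploadEnabled": False,
    "datareporting.policy.dataSubmissionEnabled": False,
    # Redirect anything that does get scheduled to a local sink.
    "toolkit.telemetry.server": "http://localhost/telemetry-dummy",
}

def write_user_js(profile_dir: str) -> None:
    """Append the no-upload telemetry prefs to the profile's user.js."""
    profile = Path(profile_dir)
    profile.mkdir(parents=True, exist_ok=True)
    lines = [
        f"user_pref({json.dumps(name)}, {json.dumps(value)});"
        for name, value in TELEMETRY_PREFS.items()
    ]
    with (profile / "user.js").open("a", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    write_user_js("/tmp/perf-test-profile")  # placeholder path
```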

Regarding comparing telemetry between CI and the wild, that sounds like a good way to help us nail down our lab-testing gaps. But I wonder if we could have a telemetry-enabled variant running weekly/bi-weekly for this instead? We are quite limited in the amount of mobile testing we can do, so we need to balance realism with our ability to catch regressions.

EDIT: Let me know if you think I should direct these questions at someone else.
