Closed Bug 1214283 Opened 4 years ago Closed 4 years ago
Standardize, document, and automate performance regression workflow
Right now the process surrounding performance regressions is chaotic. To streamline it:
1. Define and standardize what happens when a regression is encountered
2. Document the process from step 1 and distribute it among teams and lists
3. Introduce tooling into Raptor for handling the individual tasks from step 1
4. Automate as much of this process as possible
The outcome should be something that is easily followed by owners/peers, QA, sheriffs, and any other relevant stakeholders.
1. Outlined regression outcomes at https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS/Automated_testing/Raptor#Sheriffing_Regressions
2. Shared with dev-fxos on December 28, 2015
3. Created new commands to handle the regression detection and resolution flow, available for perusal in the organization: https://github.com/mozilla-raptor
4. Automated the flow using the mozilla-raptor/prey repo, which is deployed on Heroku. Heroku runs prey against each of the apps daily, covering the previous 14 days: https://github.com/mozilla-raptor/prey/blob/master/prey.sh
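The daily 14-day sweep in item 4 can be sketched roughly as follows. This is a hedged illustration only, not the actual prey.sh: `run_regression_check` is a hypothetical placeholder for whatever command prey invokes per app per day, and the app list is invented. The date arithmetic uses GNU `date -d`.

```shell
#!/bin/sh
# Hypothetical app list for illustration; the real set lives in the prey repo.
APPS="clock calendar settings"

# Walk back over the previous 14 days, running the (placeholder)
# regression check for each app on each day.
for days_ago in $(seq 1 14); do
  day=$(date -d "-${days_ago} days" +%Y-%m-%d)
  for app in $APPS; do
    # Placeholder: substitute the real prey invocation here.
    echo "run_regression_check --app ${app} --date ${day}"
  done
done
```

Run daily (e.g. via the Heroku scheduler), this keeps a rolling two-week window under regression analysis, so a regression missed on the day it landed is still caught on a later pass.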
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → 2.6 S4 - 1/1