Open Bug 1667866 Opened 2 years ago Updated 4 months ago

Add mach try command to rerun performance tests from a specific alert


(Testing :: Performance, enhancement, P5)



(Not tracked)


(Reporter: kimberlythegeek, Unassigned)


(Blocks 2 open bugs)


Add a ./mach try command that will re-run the failing tests from a performance alert, something like "mach try perfalert 88348" or just "mach try 88348", where 88348 is the bug number for that alert; alternatively, the alert number from Perfherder could be used.

I guess the difference here is that with the alert number, I would assume the tests could re-run as soon as an alert is generated, whereas with a bug number the tests would re-run once a bug is filed for that alert.

This came up in a brainstorming session; further input is welcome.
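Either spelling of the command could share the same argument surface. A minimal sketch of that surface, using a standalone argparse parser rather than the real mach @Command machinery (names and flags here are illustrative only):

```python
import argparse

# Hypothetical sketch of the proposed CLI surface. A real implementation
# would register a mach subcommand; this standalone parser only illustrates
# the arguments the bug proposes.
def build_parser():
    parser = argparse.ArgumentParser(prog="mach try perfalert")
    parser.add_argument(
        "alert_id",
        type=int,
        help="Perfherder alert summary ID (or the bug number filed for it)",
    )
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args(["88348"])
    print(args.alert_id)
```

Whether "88348" is interpreted as a bug number or an alert summary ID is exactly the open question in this comment; the parser itself would look the same either way.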

Flags: needinfo?(dave.hunt)
Flags: needinfo?(jmaher)
Severity: -- → S3
Priority: -- → P5
Duplicate of this bug: 1667885

I think there are good ideas here. I would like clarification.

  1. ./mach try perfalert - this would push to the try server, run the builds, and then run the tests. The comment mentions automatically re-running tests as soon as the alert is generated; I would like clarification on whether this is designed as a developer tool for reproducing failures, or for automation.

  2. ./mach try perfalert, if this is a bug number, it would be nice to run only the necessary tests, but I am not sure how to get the exact set of tests to run. Would the intention here be to query the Perfherder database for all alert summaries associated with the referenced bug and run those tests?

In order to make this work, we will need a clear mapping of alert name -> scheduled test name. This would be straightforward for AWSY and Raptor; Talos would need a mapping file.
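The mapping described above could start as a simple lookup table. A hedged sketch, assuming (as the comment suggests) that Raptor and AWSY alert names line up with their task names directly while Talos needs explicit entries; the table contents and label formats below are hypothetical, not real task labels:

```python
# Hypothetical Talos suite -> try task label mapping. Entries here are
# illustrative placeholders; a real mapping file would enumerate every
# Talos suite that can generate an alert.
TALOS_TASKS = {
    "tp5o": "talos-tp5o",
    "sessionrestore": "talos-sessionrestore",
}


def task_for_alert(framework, suite):
    """Return the try task label to schedule for an alerting suite."""
    if framework in ("raptor", "awsy"):
        # Direct mapping: assume the suite name appears in the task label.
        return "{}-{}".format(framework, suite)
    if framework == "talos":
        # Talos needs the explicit mapping file.
        return TALOS_TASKS.get(suite)
    return None
```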

I guess the question is: what specific problem/use case are you trying to solve with this? We should ensure the final technical solution meets that.

Overall, I assume this would take a few days to hack together. I would want to make sure it could work easily for:

  • bisecting a failure
  • testing a new fix
  • running on a specific platform (if 200 tests would otherwise run but you can reproduce the issue on Windows, just run the tests there)
Flags: needinfo?(jmaher)

Good questions, Joel. I think #1 would be the better implementation, with the use case of a developer manually re-running tests.

I see this being valuable for confirming/bisecting a regression and for testing a potential fix. By referencing the alert summary and automatically selecting the jobs containing the affected tests, the developer doesn't need to understand the different harnesses or tests. I feel that this could significantly improve the user experience for handling regressions.

Joel brings up a good point on filtering, which could be a valuable advanced feature. If we're concerned about running a large number of tests, we could build in a limit, or default to only running jobs for alerts that have been marked as 'important'.
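Selecting the affected jobs could be a pure function over the alert summary data. A sketch, assuming a payload shaped roughly like Perfherder's alertsummary API response (the field names below are from memory and should be verified against the real API):

```python
def suites_to_rerun(summary, important_only=False):
    """Collect (suite, platform) pairs to re-run from an alert summary.

    `summary` is assumed to be a dict with an "alerts" list whose items
    carry a "series_signature" dict holding "suite" and "machine_platform"
    keys, and a "starred" flag marking alerts considered important.
    These field names are assumptions, not a verified API contract.
    """
    picked = set()
    for alert in summary.get("alerts", []):
        if important_only and not alert.get("starred"):
            continue
        sig = alert["series_signature"]
        picked.add((sig["suite"], sig["machine_platform"]))
    return picked
```

With important_only=True this implements the "only run jobs for alerts marked as important" default suggested above.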

Flags: needinfo?(dave.hunt)

I am concerned about the volume of tests, as many people use --rebuild 20; this tool should rebuild the correct amount.

Duplicate of this bug: 1729913

a redash query to generate this given a summary id:

  -- select list and join conditions reconstructed from the Treeherder
  -- schema; verify against the live database before relying on this
  select distinct jt.name
  from
    performance_signature ps,
    performance_datum pd,
    performance_alert_summary pas,
    performance_alert pa,
    job j,
    job_type jt
  where
    pa.summary_id = pas.id and
    pa.series_signature_id = ps.id and
    pd.signature_id = ps.id and
    ps.repository_id = pd.repository_id and
    pd.push_id = pas.push_id and
    j.id = pd.job_id and
    jt.id = j.job_type_id and
    pas.id = <summary id>

theoretically we just need to hook this up to an API, maybe could be extended to

or instead we could do:
