From Felix: Can we print out something at the very start of the runtests.py run that says "available metrics: ..." and lists the full set of metrics that we will encounter in the run? This is probably not trivial for now, I suppose, since the metric indicator is produced by dynamically running the benchmark itself, rather than being some sort of separate metadata. And it's not so bad to make the user parse the generated output by eye to determine what the set of available metrics is... But still, if you think of something real quick, it'd be nice to be alerted up front that some benchmarks are going to generate metric data that I did not think to include in the run.
Assignee: nobody → cpeyer
Status: NEW → ASSIGNED
Flags: flashplayer-qrb? → flashplayer-qrb+
Target Milestone: --- → flash10.1.x-Salt
We would have to store the benchmarks associated with each test somewhere. At the moment I don't see an easy fix for this, so I'm going to defer to future.
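One possible shape for the future fix, sketched very loosely: if each test directory carried a small manifest (the filename "metrics.txt" and the one-metric-per-line format here are purely hypothetical, not anything runtests.py supports today), the runner could union the declared metric names before executing any benchmark and print them up front.

```python
# Hypothetical sketch only: assumes a per-directory "metrics.txt" manifest
# listing the metric names that directory's benchmarks emit. runtests.py has
# no such manifest today; the metric names currently only appear in the
# output of the benchmarks themselves.
import os

def collect_metrics(test_root):
    """Walk the test tree and return the sorted union of all metric
    names declared in 'metrics.txt' manifests (hypothetical format)."""
    metrics = set()
    for dirpath, _dirnames, filenames in os.walk(test_root):
        if "metrics.txt" in filenames:
            with open(os.path.join(dirpath, "metrics.txt")) as f:
                # One metric name per line; ignore blank lines.
                metrics.update(line.strip() for line in f if line.strip())
    return sorted(metrics)

if __name__ == "__main__":
    print("available metrics: %s" % ", ".join(collect_metrics(".")))
```

The cost is keeping the manifests in sync with what the benchmarks actually emit, which is exactly the "store the benchmarks associated with each test somewhere" problem noted above.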
Assignee: cpeyer → nobody
Target Milestone: flash10.x - Serrano → Future