Closed Bug 1200217 Opened 9 years ago Closed 4 years ago

UX improvement for Compare - provide valid, default landing values for Submit

Categories

(Tree Management :: Perfherder, enhancement, P3)


Tracking

(Not tracked)

RESOLVED INCOMPLETE

People

(Reporter: jfrench, Unassigned)

Details

Possible UX improvement:

[09:57]	jfrench	mikeling: by default, on landing on the Compare page, the dropdowns should be in a state such that if I clicked the blue 'Compare revisions' button without any other action, it would produce a valid compare result between two comparable sets of revisions.

If there is also a way to prevent invalid choices in the dropdowns that produce no results, I think that would be a further improvement.


Steps to reproduce:

Select the first two revisions in each dropdown and click the blue 'Compare revisions' button; more often than not, the result is "tests with no results:".
Thanks, Jonathan, for filing this :)

On second thought, I don't think we need to auto-populate the revision fields when the Compare button is clicked or on any other action. But we do need to offer 'workable' revisions (ones that have test data) rather than every revision we get in the tips dropdown. When Joel (jmaher) requested this 'tips' feature, his original idea was exactly that: give the user a list of comparable revisions with test data. However, we hadn't figured out a way to validate revisions at that time. I also think the Perfherder DB refactor that Will (wlach) is working on could give us a way to do that validation. Actually, I remember there are already some Perfherder issues requesting an API like that :)

Since I'm not an expert in UI design and not as familiar with the trees and perf as Jonathan and Will, I'd sincerely like feedback from Joel and Will on this, not only for this bug but also for related issues :)
Thank you!
Flags: needinfo?(wlachance)
Flags: needinfo?(jmaher)
Great question here.  I think this needs to be answered with some common use cases.

1) Pushing to try (bisection, verifying a backout, testing a fix): there is a good chance the try push will have a limited set of jobs run (e.g. linux tp5) compared to a push on inbound/fx-team with all jobs run many times. This means the try push(es) will have missing data.
2) Backfilling/retriggering data (possibly while waiting on a try push): here we want a comparison of two revisions that have a known or suspected regression. We fire up the compare and put the link in a bug or a local doc to keep track of; then, in the future, we will have all the data.
3) Investigating data filed in a bug: here you just click a link with the proper branch/revisions.

There might be other use cases.  I would originally have thought that only showing revisions with enough data is useful, but considering how many people want to use Compare, there are enough cases to show that supporting any revision is worthwhile.  Maybe one key here would be to suggest recent revisions that have full sets of data (highlight ones with data, grey out ones without).

I suspect wlach has other thoughts here.
Flags: needinfo?(jmaher)
(In reply to Joel Maher (:jmaher) from comment #2)
> There might be other use cases.  I would have originally thought that only
> showing revisions that have enough data is useful, but thinking about how
> many people want to use compare there are enough cases that show supporting
> any revision is good.  Maybe one key here would be to suggest recent
> revisions that have full sets of data (highlight ones with data, grey ones
> without).
> 
> I suspect wlach has other thoughts here.

Yes, this is a real issue. :BenWa mentioned to me on Friday that he sometimes hits this when he just wants to compare his try job against an arbitrary m-c build (and the build he selects has no perf jobs).

I like the idea of indicating in the dropdown whether a push has performance jobs associated with it. I'm not 100% sure what the UI should look like; I might want to discuss it with :bwinton first.

This will become a lot easier when bug 1192976 lands and we can request performance data by result set. Let's circle back to this once that lands (hopefully this week).
Flags: needinfo?(wlachance)
(In reply to William Lachance (:wlach) from comment #3)
> This will become a lot easier when bug 1192976 lands and we can request
> performance data by result set. Let's circle back to this once that lands
> (hopefully this week).

Hi Will, since bug 1192976 has landed, I think we can start improving and fixing issues on the Compare view. I also notice we have another bug about improvements, bug 1210427. So I think the main idea here is to grey out revisions with no results, and we can move the rest of the issues to that bug :)
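To make the grey-out idea concrete, here is a minimal sketch of how dropdown entries could be built once we can query which pushes have performance data (as the "performance data by result set" work in bug 1192976 enables). All names here (`build_dropdown_entries`, the push dict shape, the set of IDs) are hypothetical for illustration, not real Treeherder/Perfherder APIs.

```python
def build_dropdown_entries(pushes, push_ids_with_perf_data):
    """Return (label, revision, enabled) tuples for the Compare dropdown.

    pushes: list of dicts like {"id": int, "revision": str}, newest first.
    push_ids_with_perf_data: set of push IDs known to have perf results
    (e.g. from a Perfherder query for performance data by result set).
    Pushes without data stay listed but are marked disabled/greyed out.
    """
    entries = []
    for push in pushes:
        has_data = push["id"] in push_ids_with_perf_data
        label = push["revision"][:12]  # short revision hash for display
        if not has_data:
            label += " (no perf results)"
        entries.append((label, push["revision"], has_data))
    return entries
```

This keeps every revision selectable in principle (covering jmaher's try-push and backfill use cases, where data arrives later) while steering users toward pushes that will produce a meaningful comparison right now.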
Priority: P5 → P3

This ticket is more than 18 months old, so I am closing it as INCOMPLETE.

Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → INCOMPLETE