Visual metrics subtests appear as tests
Categories: Tree Management :: Perfherder, defect
Tracking: Not tracked
People: Reporter: davehunt, Unassigned
Attachments: 1 file (31.14 KB, image/png)
The visual metrics results are subtests without a suite-level result. This means they are displayed in the add test modal and the performance signatures endpoint even when subtests are not requested.
It may be that we should present one of these subtests (or some combination of them) as the suite-level result, but it does seem odd to me that subtests would be hidden by default unless they happen to be missing a suite-level result.
An example endpoint that shows these subtests is https://treeherder.mozilla.org/api/project/mozilla-central/performance/signatures/?framework=13&platform=android-hw-p2-8-0-android-aarch64-shippable&subtests=0 (note the subtests=0). This likely impacts the performance of our dashboards.
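For reference, the report can be reproduced with a few lines of Python. This is a sketch only: it assumes the response is a JSON object keyed by signature hash, and that the entries carry "suite" and "test" fields; the query parameters are copied from the URL above.

```python
# Sketch: query the signatures endpoint above with subtests=0 and
# list what comes back. Vismet entries such as SpeedIndex still
# appear even though subtests were excluded.
import requests

URL = (
    "https://treeherder.mozilla.org/api/project/mozilla-central/"
    "performance/signatures/"
)
params = {
    "framework": 13,
    "platform": "android-hw-p2-8-0-android-aarch64-shippable",
    "subtests": 0,
}
signatures = requests.get(URL, params=params, timeout=30).json()

for sig_hash, meta in signatures.items():
    # "suite" and "test" are assumed field names in the response
    print(sig_hash, meta.get("suite"), meta.get("test"))
```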
Comment 1 • Reporter • 4 years ago
igoldan: do you know if this is expected behaviour?
Comment 2 • 4 years ago
(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #0)
> It may be that we should present one of these subtests (or some combination of them) as the suite-level result, but it does seem odd to me that subtests would be hidden by default unless they happen to be missing a suite-level result.
Vismet results don't have a suite-level result; they only contain subtests: https://firefoxci.taskcluster-artifacts.net/X_6C7udhTmqwWEg5e5cHpQ/0/public//perfherder-data.json
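For illustration, the payload linked above has roughly this shape (the names and numbers here are made up, not taken from the artifact); the key point is the missing "value" field at the suite level.

```python
# Illustrative perfherder-data.json structure for vismet results:
# the suite carries subtests but no suite-level "value".
vismet_suite = {
    "framework": {"name": "browsertime"},
    "suites": [
        {
            "name": "amazon",
            # no "value" key here: nothing summarizes the subtests
            "subtests": [
                {"name": "SpeedIndex", "value": 1234},
                {"name": "ContentfulSpeedIndex", "value": 1500},
            ],
        }
    ],
}
```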
Comment 3 • 4 years ago
(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #1)
> igoldan: do you know if this is expected behaviour?
Yes, this is old, expected behavior.
Comment 4 • 4 years ago
(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #0)
Created attachment 9161925 [details]
Screenshot 2020-07-07 at 17.45.46.png
> The visual metrics results are subtests without a suite-level result. This means they are displayed in the add test modal and the performance signatures endpoint even when subtests are not requested.
> It may be that we should present one of these subtests (or some combination of them) as the suite-level result, but it does seem odd to me that subtests would be hidden by default unless they happen to be missing a suite-level result.
Looking through the ingestion code (the suite's value rule & the subtest's parent rule), Perfherder has an "undocumented" ability to distinguish subtests from tests.
Basically, subtests were conceived as component entities that together constitute the value of a suite. Optionally, you can inspect their values individually by clicking on Include subtests.
If "subtests" don't together constitute a value for a suite (i.e. the suite is missing a value), then they aren't considered component entities. They are actually independent tests, not subtests.
Subtests cannot live without a parent suite, while tests can.
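A minimal sketch of that rule (hypothetical code, not the actual Treeherder ingestion logic; it follows the perfherder-data.json shape shown in comment 2):

```python
# Hypothetical restatement of the suite-value rule described above.
def classify_children(suite: dict) -> str:
    """Children of a suite that has its own value are subtests;
    children of a suite without a value are independent tests."""
    return "subtests" if "value" in suite else "independent tests"

# A Talos-style suite summarizes its children; a vismet-style suite
# does not (both examples are made up).
talos_like = {"name": "tp5", "value": 320.5,
              "subtests": [{"name": "page1", "value": 300.0}]}
vismet_like = {"name": "amazon",
               "subtests": [{"name": "SpeedIndex", "value": 1234}]}

print(classify_children(talos_like))   # subtests
print(classify_children(vismet_like))  # independent tests
```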
> An example endpoint that shows these subtests is https://treeherder.mozilla.org/api/project/mozilla-central/performance/signatures/?framework=13&platform=android-hw-p2-8-0-android-aarch64-shippable&subtests=0 (note the subtests=0). This likely impacts the performance of our dashboards.
This is expected. Perfherder is returning these because this is its business logic.
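In other words, the subtests=0 filter keys on the parent signature rather than on naming. A hedged sketch of that logic (not the actual Treeherder code; the "parent_signature" field name is an assumption):

```python
# Hypothetical sketch of the endpoint's filter: with subtests=0, only
# signatures lacking a parent signature are kept. Vismet entries have
# no parent suite signature, so they pass the filter despite looking
# like subtests.
def visible(signatures: dict, subtests: bool) -> dict:
    if subtests:
        return signatures
    return {
        sig_hash: meta
        for sig_hash, meta in signatures.items()
        if meta.get("parent_signature") is None
    }
```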
Comment 5 • 4 years ago
In other words, amazon SpeedIndex opt and the like are actually tests, not subtests. That's why they are displayed.
Comment 6 • 4 years ago
(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #0)
> [...]
> It may be that we should present one of these subtests (or some combination of them) as the suite-level result, but it does seem odd to me that subtests would be hidden by default unless they happen to be missing a suite-level result.
> [...]
Discussion of this aspect is somewhat subjective.
I understand the mindset behind this old implementation & am ok with it. If multiple subtests together constitute a suite's value, then they're just a low-interest implementation detail. Their individual values aren't that important.
But if they don't constitute a suite's value, then they are individual, independent tests. Their individual values are actually important. These tests are mature enough to display themselves, without the need of a parent signature/test.
Either way, if a mindset change is required, it'd likely imply big full-stack changes, backed by considerable effort.
Comment 7 • 4 years ago
I understand the need for filtering out tests like these. The lists are cluttered with up to 3 types of tests:
- composed tests (AKA suites with a value)
- individual tests (not constituting a suite's value)
- subtests (constituting a suite's value)
For our particular use case of mitigating the clutter, we could employ toggles to make these types explicit in the UI & allow hiding/displaying each of them.
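A hypothetical helper showing how a signature's metadata could be bucketed into these three types; the "has_subtests" and "parent_signature" field names are assumptions about the API, not confirmed here.

```python
# Hypothetical: bucket one entry from the signatures endpoint into
# the three types listed above.
def bucket(meta: dict) -> str:
    if meta.get("has_subtests"):
        return "composed test (suite with a value)"
    if meta.get("parent_signature"):
        return "subtest (contributes to a suite's value)"
    return "individual test (no suite value to contribute to)"

print(bucket({"suite": "amazon", "test": "SpeedIndex"}))
# -> individual test (no suite value to contribute to)
```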
Comment 8 • Reporter • 4 years ago
Thanks Ionut, this certainly helps to clarify. As you suggest, we should only use "subtest" to refer to a result that is a component of a suite value.
:sparky do you think there is a suitable suite value that we should use for visual metrics, or should these stand alone as they currently do?
:igoldan perhaps adding a "subtest" tag in the UI in the various views where they appear could improve the user experience? On hovering, it could display a tooltip that shows a definition of subtests. I think another confusing aspect is that the suite value does not have a name. Perhaps even showing "geomean" could improve the UX. What do you think?
With the above answered, and any appropriate followup bugs opened, we should close this as INVALID as there are no "visual metrics subtests".
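For context on the "geomean" suggestion: where a suite value exists, it is typically an aggregate such as the geometric mean of the subtest values. A minimal, self-contained illustration (the numbers are made up):

```python
import math

def geometric_mean(values: list[float]) -> float:
    """The aggregation the "geomean" label above would refer to."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# e.g. a suite value summarizing three subtest values
print(geometric_mean([1200.0, 800.0, 950.0]))  # ~969.8
```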
Comment 9 • Reporter • 4 years ago
It occurs to me that our codebase refers to individual tests that share the same suite as "subtests". If we're defining a subtest as a result that is a component of a suite value, then we're not currently applying this definition consistently.
Comment 10 • 4 years ago
(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #8)
> [...]
> :igoldan perhaps adding a "subtest" tag in the UI in the various views where they appear could improve the user experience? On hovering, it could display a tooltip that shows a definition of subtests.
I like this idea. Better explicit than implicit.
> I think another confusing aspect is that the suite value does not have a name. Perhaps even showing "geomean" could improve the UX. What do you think?
All suites have names, as those are mandatory. What's missing are the measurement unit & the algorithm chosen for computing the value.
I totally agree we should make these explicit as well. We could even go further & enforce this at the application code level.
Comment 11 • 4 years ago
(In reply to Dave Hunt [:davehunt] [he/him] ⌚BST from comment #8)
> :sparky do you think there is a suitable suite value that we should use for visual metrics, or should these stand alone as they currently do?
No, they are all very different metrics, and any combination of them into an overall metric would be confusing, so they should stand alone.
Comment 12 • Reporter • 4 years ago
I've opened bug 1655019 to explore how we could improve the display of summary results in Perfherder.
Updated • Reporter • 4 years ago
Description updated