Closed Bug 776545 Opened 12 years ago Closed 7 years ago

Investigate RestAPI requirements to submit Mozmill test results to MozTrap

Categories

(Mozilla QA Graveyard :: MozTrap, enhancement)

enhancement
Not set
normal

Tracking

(Not tracked)

RESOLVED WONTFIX
Future

People

(Reporter: whimboo, Unassigned)


Details

Anthony is mostly using the _detail fragment in the URL to get the test details: https://moztrap.mozilla.org/manage/cases/_detail/987/

As discussed with Cameron last week, we should use the following URL instead: https://moztrap.mozilla.org/manage/case/987/

But sadly that one gives me a 404. It looks like not all tests are accessible via that URL, only via _detail.
Those URLs aren't compatible.  Forgive me if this is long-winded, but there are "case" ids and there are "case version" ids.  A case version is, as it sounds, a version of a case.  A case can have several versions: one for each version of the product it is associated with.

So, in this case there is no "case" with id 987.  That number points to a "version" of the case.  The case itself is actually at this URL: https://moztrap.mozilla.org/manage/case/327/  So its id is 327.  (at least this is the best I could determine)

The only way to be sure you're getting the right number is to expand the details of a case on the manage / cases screen, then right-click the id in the upper-left corner and copy the URL link.

Does this explanation make sense?  Thanks.
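The distinction between the two URL shapes described above can be sketched as a small helper. This is an illustrative function, not part of MozTrap: /manage/cases/_detail/<id>/ carries a caseversion id, while /manage/case/<id>/ carries the case id itself.

```python
import re

# Hypothetical helper for telling the two MozTrap manage-URL shapes apart.
# The function name and return format are illustrative assumptions.

def classify_manage_url(url):
    """Return ("caseversion", id) or ("case", id), or None if unrecognized."""
    # _detail URLs point at a *version* of a case, not the case itself.
    match = re.search(r"/manage/cases/_detail/(\d+)", url)
    if match:
        return ("caseversion", int(match.group(1)))
    # Plain case URLs point at the case, which is what the API wants.
    match = re.search(r"/manage/case/(\d+)", url)
    if match:
        return ("case", int(match.group(1)))
    return None
```

For example, the URL from comment 0 would classify as a caseversion (987), while Cameron's corrected URL classifies as a case (327).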

This is an issue that would be remedied by bookmarkable URLs.  This feature is the top of our list for Q3, if we can get resource time to work on it.
Ok, that raises another question for me. When we automate testcases in Mozmill, I would like to be able to reference a specific version of the test in MozTrap; in that case we could use 987 here. But which ID would we have to use when actually submitting results to MozTrap? Will it be 987 or 327? If it can be 987, will the client be notified when it is using an outdated version of the test? That would be great information, so we are aware that a test in MozTrap has been updated and we have to update our test too.
So, I think this is something I should have a brown-bag about. There are probably several parts of MozTrap I should cover. But here's how things map:

A case applies to a product.  Products have one or more productversions.  Cases have one or more caseversions.  There is a 1-to-1 relationship between caseversions and productversions.

Case            Product
Caseversion1--->Productversion1
Caseversion2--->Productversion2
Caseversion3--->Productversion3

When you submit new results via the API, you will supply a productversion and results for each case.  MozTrap will look up the caseversions for that productversion and apply the results to them.  So you only ever need to know the case id (327) plus the product and productversion.

So, the version of the test case doesn't really become "outdated" as it is just tied to a specific version of the product.  When you create a new version of the product, you get a new set of caseversions for it.
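The submission model Cameron describes above can be sketched as a payload builder. All field names here ("productversion", "results", "case", "status") are assumptions for illustration; the actual MozTrap REST API schema may differ.

```python
# Hypothetical sketch of assembling a Mozmill result submission, following
# the description above: the client supplies only case ids plus the
# productversion, and MozTrap resolves the matching caseversions server-side.

def build_result_payload(productversion_id, case_results):
    """Build a submission payload.

    case_results maps a case id (e.g. 327) to a status string.
    """
    return {
        "productversion": productversion_id,
        "results": [
            {"case": case_id, "status": status}
            for case_id, status in sorted(case_results.items())
        ],
    }
```

Note that caseversion ids (like 987) never appear in the payload, which is the key point of the comment above.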
Thanks Cameron. Given that this works as expected, let's transform this bug a bit so that we can gather all the information required to send the right data to MozTrap. I will also CC Dave, Vlad, and Alex.

(In reply to Cameron Dawson [:camd] from comment #3)
> A case applies to a product.  Products have one or more productversions. 
> Cases have one or more caseversions.  There is a 1 to 1 relationship between
> caseversions and productversions.
> 
> Case            Product
> Caseversion1--->Productversion1
> Caseversion2--->Productversion2
> Caseversion3--->Productversion3

So what happens when Caseversion3 for Productversion3 gets an update and the steps or expected results change? Will it get a new caseversion, or is there another version flag which indicates the new version? The thing is that in the past we have never been notified when testcases in Litmus were updated, so our tests diverged from the manual tests. I kinda don't want to have this situation again with MozTrap. I know that Anthony wants to move automated tests into a special 'mozmill' bucket, but even then it's not guaranteed that we will be informed.

So what can we do on our side to check if the test data we have is still up-to-date? Is there a special RestAPI call?

> When you submit new results via the API, you will supply a productversion
> and results for each case.  MozTrap will lookup and find the caseversions
> for that productversion and apply them.  So you only ever need to know about
> the case id (327) and the product and productversion.

That's nice. Thanks for the clarification. That means whenever we want to automate a MozTrap test, we should never hand out the _detail URL of the test.

> So, the version of the test case doesn't really become "outdated" as it is
> just tied to a specific version of the product.  When you create a new
> version of the product, you get a new set of caseversions for it.

Sure, but see above for what I meant by outdated in the case of steps/expected results changes.
Summary: /manage/case/XYZ doesn't always find the test → Investigate RestAPI requirements to submit Mozmill test results to MozTrap
Hey Henrik-- That's an interesting point about the test being outdated.  I didn't program anything into the API for that, but we could.  There is a date field on each table called ``modified_on`` that is updated any time there is a change.

So somehow you would need to persist the date that you wrote the test.  Then we could handle this in one of a few ways:

1. When you run a test, you pass in that date, MozTrap compares that with the modified_on value for the caseversion and throws a warning if it's out of date.

2. I add the modified_on date to the caseversion api, and you can fetch all the caseversions and do the compare yourself (simplest on my end).  You could either do this when you run, or as a separate status check on the tests.

I could get #2 to you in a week or less (I'm kinda swamped right this minute).
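Option #2 above, the client-side comparison, can be sketched as follows. The ISO-style date format is an assumption; the actual format of the ``modified_on`` field in the API may differ.

```python
from datetime import datetime

# Sketch of the client-side staleness check: compare the date we recorded
# when the automated test was written against the caseversion's
# ``modified_on`` value fetched from the API. The date format here is an
# assumption, not the documented MozTrap API format.

ISO_FMT = "%Y-%m-%dT%H:%M:%S"

def is_outdated(stored_on, modified_on, fmt=ISO_FMT):
    """Return True if the caseversion changed after our test was written."""
    return datetime.strptime(modified_on, fmt) > datetime.strptime(stored_on, fmt)
```

This could run either at test execution time or as a separate status check over all caseversions, as suggested above.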
(In reply to Cameron Dawson [:camd] from comment #5)
> So somehow you would need to persist the date that you wrote the test.  Then
> we could handle this in one of a few ways:

Sounds good, but handling dates would be awkward for us. Would there be a way to have a revision number? That would make comparisons much easier.

> 2. I add the modified_on date to the caseversion api, and you can fetch all
> the caseversions and do the compare yourself (simplest on my end).  You
> could either do this when you run, or as a separate status check on the
> tests.

Is there any other tool which would also make use of the RestAPI? Just wondering whether this should live server- or client-side. If multiple tools will exist, a server-side implementation would make more sense IMHO.

> I could get #2 to you in a week or less (I'm kinda swamped right this
> minute).

No hurry. I will be away the next week so we have a bit of time to thoughtfully plan this API.
Cam, one more question. Does the MozTrap test have a reference to the bug that was used to implement a new feature or fix a bug? It doesn't directly relate to result submission, but it would be important for us to keep track of the automation process.
Henrik-- There really isn't a revision number for a caseversion.  It gets updated in place.  It's just the modified_on field we have to work with in that way.  It is possible we could add a revision number.  But it doesn't seem like that would be any easier than a date.  It's still an additional value you'd have to store.

We don't yet have anything that is using the REST API.  You're our first.  :)

There isn't a specific DB field for a bug related to a case.  If you mark a test failed, then the optional bug URL you enter would be associated.  What I have encouraged people to do is put that in the test description.  If we agree on a format for that, it'd be easy to find.  But I can see your point that it might be more straightforward in a real field.  If you think it would justify the effort to add a bug field, please enter a bug for the feature.  (hmm, kind of ironic...)
(In reply to Cameron Dawson [:camd] from comment #8)
> Henrik-- There really isn't a revision number for a caseversion.  It gets
> updated in place.  It's just the modified_on field we have to work with in
> that way.  It is possible we could add a revision number.  But it doesn't
> seem like that would be any easier than a date.  It's still an additional
> value you'd have to store.
 
A revision would be a plain number, whereas a date can be formatted in many different ways. How is it formatted? What does it contain? Seconds up to the year, or even milliseconds? I don't know of any VCS/DMS that uses a date alone for storing revisions; there is probably a reason for that.

> We don't yet have anything that is using the REST API.  You're our first.  :)

Good! So we have to empower ourselves to make this a good one!

> There isn't a specific DB field for bug related to a case.  If you mark a
> test failed, then the optional bug URL you enter would be associated.  What

Most testers will not know about those bugs and will not be able to enter anything, so it would fall to the QA team to find the appropriate bug. A dedicated field would also reduce the number of possible duplicates.

> I have encouraged people to use is to put that in the test description.  If
> we agree on a format for that, it'd be easy to find.  But I can see your
> point that it might be more straightforward in a real field.  If you think
> it would justify the effort to add a bug field, please enter a bug for the
> feature.  (hmm, kind of ironic...)

I have filed bug 778046, which contains everything I have to say at the moment.
Cam, can you give us an update? Thanks.
Something really important which came up yesterday during the MozTrap meeting is that we still have no unique test ids. Cases will get new ids for these reasons:

* If another test is necessary for a platform and/or version
* If tests are getting duplicated for a new Firefox release

All of those cases will cause the case ids in our automated tests to become outdated. I really thought we would get unique ids we could use to identify a test, but that doesn't seem to work, and we now have the same situation as with Litmus. We really have to find a solution for marking our automated tests.
Depends on: 783112
Depends on: 783115
Depends on: 783119
Henrik and I talked about all this stuff.  He's going to enter a few bugs against moztrap for some enhancements.
Depends on: 775266
No longer depends on: 783119
As long as we do not have a number of test cases automated and in MozTrap, creating this connector will not be a high priority. Given that, we have pushed this work out to the future.

Also keep in mind that whenever a test is automated, we propose NOT to add it to MozTrap. That would be unnecessary duplication.
Severity: normal → enhancement
Target Milestone: --- → Future
One more URL we will need: the GitHub repository of the Python RestAPI connector: https://github.com/camd/moztrap-connect
A Pivotal Tracker story has been created for this Bug: http://www.pivotaltracker.com/story/show/41554045
Mass-closing remaining MozTrap bugs as WONTFIX, due to 1) the Mozilla-hosted instance being decommissioned (see https://wiki.mozilla.org/TestEngineering/Testrail), and, for now, 2) the still-up code archived at its GitHub page: https://github.com/mozilla/moztrap (we'll decide what's next for that, in the near future).

See also the history and more-detailed discussion which led us here, at https://groups.google.com/forum/#!topic/mozilla.dev.quality/Sa75hV8Ywvk

(If you'd like, you should be able to filter these notification emails using at least the unique string "Sa75hV8Ywvk" in the message body.)

Thanks!
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → WONTFIX
Product: Mozilla QA → Mozilla QA Graveyard