Closed Bug 838993 Opened 11 years ago Closed 11 years ago

Disable failing Gaia unit tests temporarily

Categories

(Firefox OS Graveyard :: Gaia, defect)

Priority: Not set
Severity: normal

Tracking

(b2g18 fixed, b2g18-v1.0.0 wontfix, b2g18-v1.0.1 fixed)

RESOLVED FIXED

People

(Reporter: timdream, Assigned: jugglinmike)

References

Details

Attachments

(1 file, 7 obsolete files)

Now that bug 837581 has landed, and travis-ci.org is helping us test every branch and every pull request, we need to make the tests worthwhile.

Currently, 200 or so tests fail on every run, presumably because they have not been updated to match the current code.

I propose that we disable these tests and file a bug against each app so people will fix and re-enable them.

After that, we could go ahead and require each commit/pull request to keep the tests green before merging, and enforce a hard rule that any commit that turns the tree red will be backed out.

Obviously this would need module owner's blessing... Vivien :)

BTW, does anyone think removing rather than disabling is a better idea?
Flags: needinfo?(21)
I feel like this is the best thing to do. Do you have a pull request handy? Otherwise, I bet that Julien and Anthony will be really happy to help here, since they are working on tests these days.
Flags: needinfo?(21) → needinfo?(anthony)
Disable is fine. Of course, I'd rather have all these fixed now but I suspect this is just a dream.

Maybe we can still try to fix some in the process, at least in parts of the code we know.
Flags: needinfo?(felash)
feel free to take it since I'm working on a customization issue :)
Assignee: yurenju → nobody
Assignee: nobody → felash
BTW, it's an exciting time to see the build status turn green :D
Julien: I should be able to get this done today, if it'd help.
Please, go ahead. If you can fix some of them too, that would be awesome :)

be careful, as update_manager's tests sometimes fail because of asynchronous issues; I have a plan to make them work reliably one day, but in the meantime...
Thanks for the heads up--I'll be careful around those tests :)

As for getting tests to pass, that is my ultimate goal. In the spirit of this bug, though, I'm going to prioritize just getting everything green as a stopgap.
A big snow storm hit Boston before I could finish this. I plan to get back to work on Monday--sorry for the delay.
It looks like the bulk of the failing tests are related to gesture/touch changes, at least for the Gallery and Keyboard.
Attached patch Disable failing unit tests (obsolete) — Splinter Review
This patch has also been submitted as a pull request on GitHub: https://github.com/mozilla-b2g/gaia/pull/8050
I've completed this work. I'm not sure about the preferred workflow here, so I've attached a patch to this ticket and created a pull request on GitHub:

https://github.com/mozilla-b2g/gaia/pull/8050
I usually do just like you :) However, we prefer 8 lines of context for diffs (git format-patch -U8)

Then you just need to ask for a review on your patch (in the "details" view, or directly when you create the attachment). You can ask me (search for :julien) or :etienne for example.
Thanks, Julien! I'll re-submit the patch as you've described
Attachment #712582 - Attachment is obsolete: true
Attachment #712607 - Flags: review?(felash)
Comment on attachment 712607 [details] [diff] [review]
Disable failing unit tests (with 8-line context)

Review of attachment 712607 [details] [diff] [review]:
-----------------------------------------------------------------

makes me sad, especially the system tests that I did ;)

before checking this in, I'd like you to create one bug per app and reference it in the comments you added.
Each bug should block this one.

thanks!
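
For readers following along, the requested pattern would look something like the sketch below: each disabled suite is commented out with a pointer at its tracking bug. The suite name and bug number are placeholders, not taken from the actual patch:

  // Bug NNNNNN - Gallery gesture tests fail against current code.
  // Disabled until that bug is fixed; see this bug for context.
  /*
  suite('Gallery > gestures', function() {
    test('recognizes a pinch', function() {
      // ...
    });
  });
  */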
Attachment #712607 - Flags: review?(felash) → feedback+
Depends on: 840489
Depends on: 840493
Depends on: 840495
Depends on: 840497
Depends on: 840500
Attachment #712607 - Attachment is obsolete: true
Attachment #712905 - Flags: review?(felash)
Comment on attachment 712905 [details] [diff] [review]
Disable failing unit tests (with references to new application bugs)

I've also updated the relevant pull request on GitHub:

https://github.com/mozilla-b2g/gaia/pull/8050
Comment on attachment 712905 [details] [diff] [review]
Disable failing unit tests (with references to new application bugs)

I still see test errors in the calendar app with your patch. Do you see them too?
Attachment #712905 - Flags: review?(felash)
I haven't seen any other failures in the Calendar app's test suite. I've just applied it to the latest master to be sure. Here are my thoughts:

- Do they fail consistently on your end? I know you said that "update_manager" may have some non-deterministic failures, and I'm now finding that the SMS tests fail unpredictably as well.

- Should it matter if we run the tests from B2G Desktop or Firefox Nightly? The documentation implies that it doesn't [1], but I'm not convinced.

[1] https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS/Platform/Testing/Gaia_unit_tests?redirectlocale=en-US&redirectslug=Mozilla%2FFirefox_OS%2FGaia_Unit_Tests
(In reply to mike from comment #19)
> I haven't seen any other failures in the Calendar app's test suite. I've
> just applied it to the latest master to be sure. Here are my thoughts:
> 
> - Do they fail consistently on your end? I know you said that
> "update_manager" may have some non-deterministic failures, and I'm now
> finding that the SMS tests fail unpredictably as well.

yep, this is consistent, but it may be because I'm using a French locale and these tests are not mocking the l10n library.

Two of them do not seem to be related to the date/time issue, though:
  4) [calendar] views/modify_account #save on success:
     Error: redirects to complete url: expected null to equal '/settings'
  7) [calendar] app #go result:
     Error: expected undefined to equal '/settings'
  8) [calendar] db "before all" hook:
     Error: Error: should not have an error when deleting db: expected false to be truthy (http://test-agent.gaiamobile.org:8080/common/test/helper.js?time=1360693789278:30)


> 
> - Should it matter if we run the tests from B2G Desktop or Firefox Nightly?
> The documentation implies that it doesn't [1], but I'm not convinced.

There are differences between B2G Desktop and FF Nightly; in particular, we don't have the same Gecko at all. So we might see differences.

I'm using FF Nightly because it's easier/faster but the target is to use B2G Desktop.
I'm also using FF Nightly for the same reason.

I'll look into changing my locale to try to reproduce the l10n-related failures.

As for the 2 you've shared specifically, I'm not sure how to proceed. I could just disable them without experiencing the failures, but this doesn't seem like a great strategy for finding bugs.

I've now experienced the unpredictable "update_manager" failures myself. I will disable those as I encounter them while investigating the above failures.
When we see things like that, it sometimes means that there is an asynchronous issue. For example, my computer might be faster, and so the test checks too early.

I'd be more comfortable disabling those too, but with a comment that says that it's unpredictable.
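
To illustrate the kind of race being described, here is a made-up sketch; the names (UpdateManager, the element id, the assert flavor) and the timeout are stand-ins, not the real app code:

  test('shows the downloading banner', function(done) {
    UpdateManager.startDownload();
    setTimeout(function() {
      var banner = document.getElementById('update-banner');
      // Asserting after a fixed delay is a race: on a slower or busier
      // machine the asynchronous work may not have finished yet, so the
      // same test passes or fails depending on timing.
      assert.isTrue(banner.classList.contains('displayed'));
      done();
    }, 50);
  });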
Some tests have been found to fail inconsistently. This patch aggressively disables the suites that contain these tests, as it is difficult to verify precisely which tests are faulty (that work will be left to the process of re-enabling the tests and tracked with the relevant application-specific bug).

I've updated the pull request on GitHub:

https://github.com/mozilla-b2g/gaia/pull/8050
Attachment #712905 - Attachment is obsolete: true
Attachment #713413 - Flags: review?(felash)
As noted in the latest patch, I have aggressively disabled the suites that contain the non-deterministic tests. It's difficult to say how long it will take to correct the flaws in these tests, so I believe the "quick and easy" approach is preferable here (especially considering that yet another test was unknowingly broken yesterday!).

Julien: I installed the French l10n version of Firefox Aurora, but I was unable to reproduce the failures you have alluded to. If provided with the test output, I would be happy to disable those tests on your behalf.

I've updated the pull request on GitHub:

https://github.com/mozilla-b2g/gaia/pull/8050
What's failing here:

  1) [calendar] calendar/calc #formatHour 7 hours:
     Error: expected '7 Wed Feb 13 2013 07:39:53 GMT+0100 (CET)' to equal '7 AM'
  2) [calendar] calendar/calc #formatHour 23 hours:
     Error: expected '11 Wed Feb 13 2013 23:39:53 GMT+0100 (CET)' to equal '11 PM'
  3) [calendar] db "before all" hook:
     Error: Error: should not have an error when deleting db: expected false to be truthy (http://test-agent.gaiamobile.org:8080/common/test/helper.js?time=1360773592912:30)
Thanks, Julien! I'll update the patch with these disabled.

There will be a bit of a delay, though, as conflicting changes just landed in master.
Comment on attachment 713413 [details] [diff] [review]
Disable failing unit tests (with non-deterministic tests disabled)

ok!
just put r? back when you're ready :)
Attachment #713413 - Flags: review?(felash)
Depends on: 841045
Attached patch Disable failing unit tests (obsolete) — Splinter Review
This patch disables the failures reported by Julien (which seem to be related to assumptions about localization details). It also disables more tests that have recently started failing on the "master" branch.
Attachment #713413 - Attachment is obsolete: true
Attachment #713463 - Flags: review?(felash)
Hi Julien: I had to create a new patch, so unless I'm misunderstanding, it seemed incorrect to put the "r?" back on the now-obsolete patch. Please let me know if I've handled this incorrectly.

Also note that recent changes to master have affected the patch contents:

- I've been able to avoid disabling one of the SMS tests
- I've had to disable tests in the "Contacts" application (I've created a new bug to track these tests and referenced it in-line, just as with the others)

I've also updated the associated pull request on GitHub:

https://github.com/mozilla-b2g/gaia/pull/8050
I get tests that fail when I run all the tests at once, but not when I run only the tests for the app:
  2) [sms] Threads-list Tests Threads-list rendering Check HTML structure:
     Error: expected 1 to equal 3

  3) [sms] Threads-list Tests Threads-list rendering Render unread style properly:
     Error: expected 0 to equal 1

  4) [sms] Threads-list Tests Threads-list edit mode Check edit mode form:
     Error: expected 1 to equal 4

Some tests, on the contrary, fail only when launching the calendar tests (they don't fail when launching all the tests):
  1) [calendar] views/months_day #_updateHeader:
     Error: expected 'Friday 11 May 2012' to contain 'vendredi 11 mai 2012'
  2) [calendar] views/months_day #render:
     Error: expected '<div data-l10n-date-format="agenda-date-format" data-date="Wed Feb 13 2013 00:00:00 GMT+0100 (CET)" class="day-title">Wednesday 13 February 2013</div>' to contain 'mercredi'
  3) [calendar] views/months_day #handleEvent selectedDayChange:
     Error: expected '<div data-l10n-date-format="agenda-date-format" data-date="Wed Feb 01 2012 00:00:00 GMT+0100 (CET)" class="day-title">Wednesday 1 February 2012</div>' to contain 'mercredi'
  4) [calendar] app #go result:
     Error: expected undefined to equal '/settings'

The first 3 seem to be due to a locale problem. I'd say they pass when launching all the tests because one of the previous tests probably mocks l10n and forgets to "unmock" it in the teardown.
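
The usual shape of that bug, sketched below; MockL10n and the bookkeeping names are illustrative, not necessarily the actual Gaia helpers:

  suite('views/months_day', function() {
    var realL10n;

    suiteSetup(function() {
      // Swap in a mock so date formatting doesn't depend on the system locale.
      realL10n = navigator.mozL10n;
      navigator.mozL10n = MockL10n;
    });

    suiteTeardown(function() {
      // Forgetting this restore leaks the mock into every suite that runs
      // later, which produces exactly the "passes in one mode, fails in
      // the other" symptom described above.
      navigator.mozL10n = realL10n;
    });
  });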

The last one... well I don't know at all.


This test fails in both modes (calendar-only and all tests):
  5) [calendar] db "before all" hook:
     Error: Error: should not have an error when deleting db: expected false to be truthy (http://test-agent.gaiamobile.org:8080/common/test/helper.js?time=1360779082480:30)

(current master)
(In reply to mike from comment #29)
> Hi Julien: I had to create a new patch, so unless I'm misunderstanding, it
> seemed incorrect to put the "r?" back on the now-obsolete patch. Please let
> me know if I've handled this incorrectly.

nope that's perfect.

I just hope we'll arrive at a stable point someday; this looks like a moving target :/
Thanks again for the info, Julien. I'm working on disabling these tests, but while I do, I thought I'd raise a concern here:

There is a large amount of uncertainty across Gaia's unit tests. Even with the relatively low tolerance I'm using to classify "bad" tests, it is possible that we could miss some of them. Any single such test, no matter how insignificant, will defeat the process of Continuous Integration.

Furthermore, it's difficult to estimate the effort necessary to re-enable "flaky" tests. I have never experienced some of the failures cited here. While disabling them is a straightforward task, re-enabling them will involve isolating the conditions that trigger them to fail, and then fixing the bug (in the production code or in the test logic itself). But it's possible that the underlying problem may be fixed by the time we begin debugging (or that it has *already* been fixed). This means the criteria for completion will be more nuanced than "is the test passing?"--it will be, "has the underlying bug that caused the test to fail some of the time on someone else's computer already been fixed?"

All this makes me think that a re-write of some applications' unit tests may be appropriate. I know this is a tall order from someone just getting involved with the project, but I would feel remiss if I didn't at least make the suggestion. To those involved: does this seem like a realistic response to the issues I've described here? Are there other steps we can take to address them?
(In reply to mike from comment #32)
> Thanks again for the info, Julien. I'm working on disabling these tests, but
> while I do, I thought I'd raise a concern here:
> 
> There is a large amount of uncertainty across Gaia's unit tests. Even with
> the relatively low tolerance I'm using to classify "bad" tests, it is
> possible that we could miss some of them. Any single such test, no
> matter how insignificant, will defeat the process of Continuous Integration.

yep, so we must disable them if we find more in the future.


> Furthermore, it's difficult to estimate the effort necessary to re-enable
> "flaky" tests. I have never experienced some of the failures cited here.
> While disabling them is a straightforward task, re-enabling them will
> involve isolating the conditions that trigger them to fail, and then fixing
> the bug (in the production code or in the test logic itself). But it's
> possible that the underlying problem may be fixed by the time we begin
> debugging (or that it has *already* been fixed). This means the criteria for
> completion will be more nuanced than "is the test passing?"--it will be,
> "has the underlying bug that caused the test to fail some of the time on
> someone else's computer already been fixed?"

Let's be pragmatic: after this bug disables the failing tests, let's keep as the rationale that "a failing test is a test failing on the CI server".

If one specific test fails on another computer (e.g. mine ;) ), we can file a specific bug about that specific test with more information. Either (1) the infrastructure to run the tests on the CI is incorrect (i.e. it's too forgiving), or (2) the one to run the tests on the dev's computer is incorrect (i.e., say, a wrong locale... even if I believe a test should not depend on a specific locale), or (3) the test makes a wrong assumption and must be fixed. My preferred solution would be both 1 and 3 :)

> 
> All this makes me think that a re-write of some applications' unit tests may
> be appropriate. I know this is a tall order from someone just getting
> involved with the project, but I would feel remiss if I didn't at least make
> the suggestion. To those involved: does this seem like a realistic response
> to the issues I've described here? Are there other steps we can take to address
> them?

My rationale here is: even if they're not perfect, these tests exist, so it's better than not having any. Of course they can be improved in some applications, and it's maybe our role as "expert test writers" (well, I'm still a beginner in JS tests ;) ) to write one good test in each application, to make it easier for the other devs to write good tests.

For example, I've recently seen that Mocha does not handle exceptions in asynchronous tests very well. I found a way to make it much better, but I think in the end Mocha should be fixed (I'll file a bug there for that).
Attached patch Disable failing unit tests (obsolete) — Splinter Review
Attachment #713463 - Attachment is obsolete: true
Attachment #713463 - Flags: review?(felash)
Attachment #713994 - Flags: review?(felash)
I've submitted a new patch (and updated the pull request on GitHub). Julien: I share your hope for a stable baseline, and I appreciate the help in getting there.

(In reply to Julien Wajsberg [:julienw] from comment #33)
> (In reply to mike from comment #32)
> > Thanks again for the info, Julien. I'm working on disabling these tests, but
> > while I do, I thought I'd raise a concern here:
> > 
> > There is a large amount of uncertainty across Gaia's unit tests. Even with
> > the relatively low tolerance I'm using to classify "bad" tests, it is
> > possible that we could miss some of them. Any single such test, no
> > matter how insignificant, will defeat the process of Continuous Integration.
> 
> yep, so we must disable them if we find more in the future.

My worry is that this won't be acceptable when passing CI is a technically-enforced requirement, and any contributor may be blocked by a non-deterministic condition. If the authority of the CI server is questionable, then it loses its value.

> > Furthermore, it's difficult to estimate the effort necessary to re-enable
> > "flaky" tests. I have never experienced some of the failures cited here.
> > While disabling them is a straightforward task, re-enabling them will
> > involve isolating the conditions that trigger them to fail, and then fixing
> > the bug (in the production code or in the test logic itself). But it's
> > possible that the underlying problem may be fixed by the time we begin
> > debugging (or that it has *already* been fixed). This means the criteria for
> > completion will be more nuanced than "is the test passing?"--it will be,
> > "has the underlying bug that caused the test to fail some of the time on
> > someone else's computer already been fixed?"
> 
> Let's be pragmatic: after this bug disables the failing tests, let's keep
> as the rationale that "a failing test is a test failing on the CI server".
> 
> If one specific test fails on another computer (e.g. mine ;) ), we can file a
> specific bug about that specific test with more information. Either (1) the
> infrastructure to run the tests on the CI is incorrect (i.e. it's too
> forgiving), or (2) the one to run the tests on the dev's computer is
> incorrect (i.e., say, a wrong locale... even if I believe a test should not
> depend on a specific locale), or (3) the test makes a wrong assumption and
> must be fixed. My preferred solution would be both 1 and 3 :)

This seems reasonable to me

> > 
> > All this makes me think that a re-write of some applications' unit tests may
> > be appropriate. I know this is a tall order from someone just getting
> > involved with the project, but I would feel remiss if I didn't at least make
> > the suggestion. To those involved: does this seem like a realistic response
> > to the issues I've described here? Are there other steps we can take to address
> > them?
> 
> My rationale here is: even if they're not perfect, these tests exist, so
> it's better than not having any. Of course they can be improved in some
> applications, and it's maybe our role as "expert test writers" (well, I'm
> still a beginner in JS tests ;) ) to write one good test in each
> application, to make it easier for the other devs to write good tests.

While I have some thoughts on how to improve the existing tests, I consider my criticism here to be of a more fundamental nature than, for instance, best practices. As I noted above, tests that fail sporadically undermine the CI process.

> For example, I've recently seen that Mocha does not handle exceptions in
> asynchronous tests very well. I found a way to make it much better, but I
> think in the end Mocha should be fixed (I'll file a bug there for that).

This sounds great! I really like Mocha, and I'd love to help improve it. Please post a link to the issue once you've opened it. Personally, I'd like to see the ability to disable tests/suites (with functions like "xtest"--see the Jasmine test framework for an example: http://pivotal.github.com/jasmine/jsdoc/symbols/jasmine.Env.html#xdescribe ). It would be useful if the test runner could be aware of disabled tests (and report on them).
(In reply to mike from comment #35)

> My worry is that this won't be acceptable when passing CI is a
> technically-enforced requirement, and any contributor may be blocked by a
> non-deterministic condition. If the authority of the CI server is
> questionable, then it loses its value.

Yep, all tests must be deterministic in the end. The path to this state is first to land this patch, then to disable or fix any test that is still failing. (I'd prefer to fix it of course :) ).

Do you have a better idea?

> > > All this makes me think that a re-write of some applications' unit tests may
> > > be appropriate. I know this is a tall order from someone just getting
> > > involved with the project, but I would feel remiss if I didn't at least make
> > > the suggestion. To those involved: does this seem like a realistic response
> > > to the issues I've described here? Are there other steps we can take to address
> > > them?
> > 
> > My rationale here is: even if they're not perfect, these tests exist, so
> > it's better than not having any. Of course they can be improved in some
> > applications, and it's maybe our role as "expert test writers" (well, I'm
> > still a beginner in JS tests ;) ) to write one good test in each
> > application, to make it easier for the other devs to write good tests.
> 
> While I have some thoughts on how to improve the existing tests, I consider
> my criticism here to be of a more fundamental nature than, for instance,
> best practices. As I noted above, tests that fail sporadically undermine the
> CI process.

And they must be fixed :)

> > For example, I've recently seen that Mocha does not handle exceptions in
> > asynchronous tests very well. I found a way to make it much better, but I
> > think in the end Mocha should be fixed (I'll file a bug there for that).
> 
> This sounds great! I really like Mocha, and I'd love to help improve it.
> Please post a link to the issue once you've opened it. Personally, I'd like
> to see the ability to disable tests/suites (with functions like "xtest"--see
> the Jasmine test framework for an example:
> http://pivotal.github.com/jasmine/jsdoc/symbols/jasmine.Env.html#xdescribe
> ). It would be useful if the test runner could be aware of disabled tests
> (and report on them).

You can do that in Mocha :) It should support suite.only, test.only, suite.skip, test.skip. The most useful for us is using .skip because .only works only in one file (this means that if you use .only in one file, the tests in other files will still run).

I see what you mean here: instead of commenting out the tests, we should use this. And I think you're completely right.
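
For reference, a minimal sketch of that API with Mocha's TDD interface (the suite and test names are invented):

  // test.skip marks a single test as "pending" in the report instead of
  // silently removing it from the output.
  suite('Gallery', function() {
    test.skip('recognizes a pinch gesture', function() {
      // not executed
    });
  });

  // suite.skip does the same for a whole suite.
  suite.skip('Keyboard layouts', function() {
    test('renders the French layout', function() {
      // reported as pending, never run
    });
  });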

One more test is failing today because of Bug 840322, but I commented there and I think it will be fixed in that bug.

I also see other spurious tests that fail only sometimes...

I'll see if I have time to produce a patch today, and I'll set you as the reviewer (I asked Vivien (:vingtetun) and he's OK with that), to try to move faster than the test breakers...
I tried using test.skip, but for some reason it is not very reliable; for example, it (sometimes?) disables other tests too. I'll rely on comments for now.
(In reply to mike from comment #35)
> This sounds great! I really like Mocha, and I'd love to help improve it.
> Please post a link to the issue once you've opened it.

Indeed, it was actually a Firefox bug: Bug 355430.

But I've just seen that our very own test framework supports having a function as the |done| argument, which allows the framework to try/catch this function! I'll ask James Lal to try to upstream this.
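
If I follow, the pattern is something like the sketch below (fetchThreads is a made-up async helper, and the exact try/catch behavior is an assumption based on the description above):

  test('renders the thread list', function(done) {
    fetchThreads(function(threads) {
      // Passing a function to done() lets the harness run the assertions
      // inside a try/catch, so a throwing assertion fails this test
      // cleanly instead of becoming an uncaught exception in the callback.
      done(function() {
        assert.equal(threads.length, 3);
      });
    });
  });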
(In reply to Julien Wajsberg [:julienw] from comment #36)
> > My worry is that this won't be acceptable when passing CI is a
> > technically-enforced requirement, and any contributor may be blocked by a
> > non-deterministic condition. If the authority of the CI server is
> > questionable, then it loses its value.
> 
> Yep, all tests must be deterministic in the end. The path to this state is
> first to land this patch, then to disable or fix any test that is still
> failing. (I'd prefer to fix it of course :) ).
> 
> Do you have a better idea?

My idea was to disable and re-write the tests for entire applications wherever a single test is found to exhibit non-determinism. This approach may be too idealistic when weighed against the larger goals of the project. It really depends on how disruptive one believes bugs in the CI process would be.
(In reply to Julien Wajsberg [:julienw] from comment #37)
> I tried using test.skip, but for some reason it is not very reliable; for
> example, it (sometimes?) disables other tests too. I'll rely on comments
> for now.

Thanks for pointing out the ".skip" API! I was playing with it and experienced the same behavior.
Depends on: 841755
Julien: It looks like Bug 838622 may have been resolved, so it may no longer be necessary to disable those tests.
Attached patch patch v7 (obsolete) — Splinter Review
Disabled the latest tests that fail. Some were failing because of my system's French locale, but they should not.

One test is still failing but I'll provide a patch for that in Bug 840322.

You can apply the patch with "wget -O - <patch url> | git am" :)
Attachment #713994 - Attachment is obsolete: true
Attachment #713994 - Flags: review?(felash)
Attachment #714374 - Flags: review?(mike)
Attached patch patch v8Splinter Review
Re-enabling dialer tests now that Bug 838622 is fixed.

If you r+ while I'm still here I can push it myself, otherwise just add the keyword "checkin-needed".
Attachment #714374 - Attachment is obsolete: true
Attachment #714374 - Flags: review?(mike)
Attachment #714385 - Flags: review?(mike)
I've pulled from master and applied the patch. This should come as no surprise, but I'm seeing another failure :/ Here's the report:

  1) [settings] settings > "before each" hook:
     TypeError: navigator.mozSetMessageHandler is not a function
      at settings_init (http://settings.gaiamobile.org:8080/js/settings.js:99)
      at (anonymous) (http://settings.gaiamobile.org:8080/test/unit/settings_test.js:48)
      at wrapper (http://test-agent.gaiamobile.org:8080/common/test/mocha_generators.js:62)
      at run (http://test-agent.gaiamobile.org:8080/common/vendor/mocha/mocha.js:3709)
      at next (http://test-agent.gaiamobile.org:8080/common/vendor/mocha/mocha.js:3973)
      at (anonymous) (http://test-agent.gaiamobile.org:8080/common/vendor/mocha/mocha.js:3984)
      at (anonymous) (http://test-agent.gaiamobile.org:8080/common/vendor/mocha/mocha.js:4932)
  
This is in the setup for the entire file, so all the associated tests will have to be disabled. Would you like me to do this?
I've added a patch to Bug 840322 so that you can apply it to check everything is passing.
(In reply to mike from comment #44)
> I've pulled from master and applied the patch. This should come as no
> surprise, but I'm seeing another failure :/ Here's the report:

Yep, that's what I was saying in "One test is still failing but I'll provide a patch for that in Bug 840322. :)"

Just grab the latest patch in Bug 840322, I just uploaded it.
Comment on attachment 714385 [details] [diff] [review]
patch v8

All green on my end! (1313 total tests)
Attachment #714385 - Flags: review?(mike) → review+
master: https://github.com/mozilla-b2g/gaia/commit/1c853f124b869218c88a1452377fd9c1dedb23bd
Assignee: felash → mike
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Blocks: 841815
Comment on attachment 714385 [details] [diff] [review]
patch v8

NOTE: If blocking-basecamp+ is set, just land it for now.

[Approval Request Comment]

No risk at all because all changes are in test files. We need to uplift this to prevent future uplifting conflicts.
Attachment #714385 - Flags: approval-gaia-v1?(21)
Attachment #714385 - Flags: approval-gaia-v1?(21) → approval-gaia-v1+
Comment on attachment 714385 [details] [diff] [review]
patch v8

Vivien, the uplifter needs to know if the approval is for only v1-train or for both v1-train and v1.0.1. Sorry about that.
Attachment #714385 - Flags: approval-gaia-v1+ → approval-gaia-v1?(21)
(In reply to Julien Wajsberg [:julienw] from comment #50)
> Comment on attachment 714385 [details] [diff] [review]
> patch v8
> 
> Vivien, the uplifter needs to know if the approval is for only v1-train or
> for both v1-train and v1.0.1. Sorry about that.

This feels safe to land on v1.0.1 and v1-train.
Attachment #714385 - Flags: approval-gaia-v1?(21) → approval-gaia-v1+
v1-train@96dffe8cdb68205e8475a8bd10f455c5a7ccfaff
v1.0.1@d497a67c8ece5beec2954b7fb986aebda1f83d37