Closed Bug 1184670 Opened 9 years ago Closed 6 years ago

Automatically run Intern test suite against stage & prod servers after deployments

Categories

(developer.mozilla.org Graveyard :: Code Cleanup, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED INCOMPLETE

People

(Reporter: groovecoder, Unassigned)

References

Details

When the Intern test suite runs reliably (bug 1184662), we should run it automatically after every stage & production deployment.

Previously, we attempted to run it on Travis. [1] But Travis triggers builds on commits and/or pull requests, not deployments.

So, we should run it either:

* At the end of chief_deploy.py; possibly using Jenkins REST API
* Via Jenkins polling the stage & prod revision.txt URLs [2][3]
* (Probably not) watching IRC bots' messages

Bonus points for a nifty way to shame folks on IRC when we break the tests.

[1] https://github.com/mozilla/kuma/pull/2947/files
[2] https://developer.allizom.org/media/revision.txt
[3] https://developer.mozilla.org/media/revision.txt
What browsers do you need these tests to run on? We have a Selenium Grid instance running where we primarily support running tests on Firefox. My preference would be to run these tests on Sauce Labs or BrowserStack on any supported environment, with a fallback to running them against Firefox on our infrastructure.

Following the documentation at https://kuma.readthedocs.org/en/latest/tests-ui.html I was able to run successfully against Firefox locally, but I haven't been able to run using Sauce Labs. If we're going to use our Selenium Grid instance, we will need a way to point the tests at a remote server, namely selenium-hub1.qa.scl3.mozilla.com:4444.
Flags: needinfo?(lcrouch)
Based on GA sessions [1], the top 10 browsers are (with cumulative %):

1. Chrome on Windows (37.7%)
2. Firefox on Windows (55.6%)
3. Chrome on Mac (73.1%)
4. Chrome on Linux (78.7%)
5. Firefox on Linux (82.6%)
6. IE on Windows (86.4%)
7. Safari on Mac (89.5%)
8. Firefox on Mac (92.4%)
9. Chrome on Android (94.3%)
10. Safari on iOS (95.9%)

Which selenium grid gives us the best coverage?

[1] http://screencast.com/t/FjRBdnDHZ3jN
Flags: needinfo?(lcrouch) → needinfo?(dave.hunt)
To be honest, if you want a range of environments, I'm going to recommend Sauce Labs. Do we need a blocking bug for getting the Sauce Labs integration working? Perhaps it's already working but needs some documentation?
Flags: needinfo?(dave.hunt) → needinfo?(lcrouch)
Sauce Labs sounds like a good fit.  In the very short term, I'd like to make Firefox our primary target.  Our previous tests were only run on Firefox and I've found Firefox's selenium driver is much, much more reliable.
Depends on: 1186899
Filed bug 1186899 for the Sauce Labs integration.
Flags: needinfo?(lcrouch)
Depends on: 1188558
Now that we have intern on the mdn admin node, I've added code to our chief_deploy.py that should run it at the end of the deployment. It needs 2 environment variables exported:

BROWSERSTACK_USERNAME=
and
BROWSERSTACK_ACCESS_KEY=

:jakem - can we update the admin node to have these environment variables available for chief_deploy.py when it runs ctx.local?

:mbrandt - can WebQA get a Browserstack account and credentials to WebOps?
Flags: needinfo?(nmaul)
Flags: needinfo?(mbrandt)
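The deploy-time hook described above amounts to something like the following (a minimal sketch using subprocess rather than commander's ctx.local; the intern-runner arguments and config path are assumptions, not kuma's actual invocation):

```python
import os
import subprocess

def run_intern_suite(config="tests/ui/intern"):  # hypothetical Intern config module id
    """Run the Intern suite at the end of a deploy, failing loudly if creds are missing."""
    env = dict(os.environ)
    required = ("BROWSERSTACK_USERNAME", "BROWSERSTACK_ACCESS_KEY")
    missing = [k for k in required if not env.get(k)]
    if missing:
        raise RuntimeError("missing environment variables: %s" % ", ".join(missing))
    # Intern's BrowserStack tunnel reads the credentials from the environment,
    # so they only need to be exported, not passed on the command line.
    return subprocess.call(["intern-runner", "config=%s" % config], env=env)
```

Failing fast on missing credentials keeps a misconfigured admin node from silently skipping the suite at the end of a deploy.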
(In reply to Luke Crouch [:groovecoder] from comment #6)
> Now that we have intern on the mdn admin node, I've added code to our
> chief_deploy.py that should run it at the end of the deployment. It needs 2
> environment variables exported:
> 
> BROWSERSTACK_USERNAME=
> and
> BROWSERSTACK_ACCESS_KEY=
> 
> :jakem - can we update the admin node to have these environment variables
> available for chief_deploy.py when it runs ctx.local?
> 
> :mbrandt - can WebQA get a Browserstack account and credentials to WebOps?

I'll answer for Web QA here: as I mentioned previously in https://bugzilla.mozilla.org/show_bug.cgi?id=1186899#c5, we don't have a standing agreement in place with BrowserStack. If you need to provision a more robust, long-lasting agreement with them, we (MDN dev + Travis + myself + BrowserStack) can potentially work something out. Until then, I'd encourage you to contact them via https://www.browserstack.com/pricing, which mentions "Free for Open Source."
Flags: needinfo?(mbrandt)
:jakem or :cyliang - the code in https://github.com/mozilla/kuma/pull/3535 also needs a couple more environment variables. So the full list is:

BROWSERSTACK_USERNAME
BROWSERSTACK_ACCESS_KEY
INTERN_PERSONA_USERNAME
INTERN_PERSONA_PASSWORD

I can provide values for all of these in a private message or bug.
Flags: needinfo?(cliang)
Update: https://github.com/mozilla/kuma/pull/3535 should work, but we still get flaky results when we set maxConcurrency higher than 1. :( Which means the Intern command takes a long time to run through the 6 browser variations we're testing.

:mbrandt - I'm considering changing this to be a cron job instead, if that works for you? That way we can let it run as long as we want, and the 10-20m BrowserStack test suite isn't happening inside our deployment script.
Flags: needinfo?(mbrandt)
(In reply to Luke Crouch [:groovecoder] from comment #9)
> the Intern command takes a long time to run thru the 6 browser variations
> we're testing.

Can we reduce this to fewer browsers and keep requiring that tests pass before a deploy to stage? Perhaps just one environment.

A full set of tests across all environments could then run as a cron job and act as a blocker for production pushes.



> :mbrandt - I'm considering changing this to be a cron job instead, if that
> works for you? That way we can let it run as long as we want, and the 10-20m
> BrowserStack test suite isn't happening inside our deployment script.
Flags: needinfo?(lcrouch)
Flags: needinfo?(mbrandt)
This week I'll keep trying some more browserstack configs to see if I can get a reliable passing config under 10m of run-time.
Flags: needinfo?(lcrouch)
:jakem or :cyliang - I updated https://github.com/mozilla/kuma/pull/3549/files so instead of environment variables, we can use commander_settings like we do for other things. We need these settings:

BROWSERSTACK_ACCESS_KEY
INTERN_DOMAIN
INTERN_USERNAME
INTERN_PASSWORD

Note: we might be able to replace INTERN_DOMAIN with the extant REMOTE_HOSTNAME value, if that value is an FQDN like developer.mozilla.org or developer.allizom.org. Or is it a local hostname?
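One way to settle the REMOTE_HOSTNAME question above is a quick sanity check at deploy time: only treat it as a substitute for INTERN_DOMAIN when it looks like a public FQDN (a heuristic sketch; the setting names come from this bug, the helper itself is made up):

```python
def looks_like_public_fqdn(hostname):
    """Heuristic: a dotted name whose last label is alphabetic (a TLD),
    so bare local hostnames like 'developeradm' are rejected."""
    if not hostname or "." not in hostname:
        return False
    labels = hostname.rstrip(".").split(".")
    return len(labels) >= 2 and all(labels) and labels[-1].isalpha()

def intern_domain(settings):
    """Prefer an explicit INTERN_DOMAIN; fall back to REMOTE_HOSTNAME only if it is an FQDN."""
    domain = settings.get("INTERN_DOMAIN")
    if domain:
        return domain
    remote = settings.get("REMOTE_HOSTNAME", "")
    if looks_like_public_fqdn(remote):
        return remote
    raise ValueError("no usable domain for Intern tests")
```

With a guard like this, dropping INTERN_DOMAIN is safe either way: if REMOTE_HOSTNAME turns out to be a local hostname, the deploy fails with a clear error instead of pointing the suite at an unreachable host.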
:mbrandt - does WebQA have a test Persona account we could use for our INTERN_USERNAME and INTERN_PASSWORD? We can pre-register it on both stage & prod for the purpose of the test suite.
Flags: needinfo?(mbrandt)
(In reply to Luke Crouch [:groovecoder] from comment #13)
> :mbrandt - does WebQA have a test Persona account we could use for our
> INTERN_USERNAME and INTERN_PASSWORD? We can pre-register it on both stage &
> prod for the purpose of the test suite.

afaik, no, we don't have a specific account set aside for this. Typically we haven't allowed test accounts to do the 'CUD' of 'CRUD' on prod. That said, for some sites we have manually crafted and hard-coded Persona credentials to verify persistence of test accounts on stage and prod.

If you're unfamiliar, http://personatestuser.org/ is a good resource for disposable accounts on dev and staging instances.
Flags: needinfo?(mbrandt)
We use http://personatestuser.org/ for one test (to check that a new Persona user gets redirected to the "create account" page), but we don't want to be _creating_ a fake MDN user in our DB every time we run these tests. We need a reliable username/password for a user configured with the proper permissions, to ensure our tests have access to features like wiki editing and page creation.
(In reply to David Walsh :davidwalsh from comment #15)
> We use http://personatestuser.org/ for one test (to check if a new Persona
> users gets redirected to the "create account" page), but we don't want to be
> _creating_ a fake MDN user in our DB every time we run these tests.  

It's not a bad test pattern to create fake users in a dev/staging environment, in particular if the test verifies that the user persists, usually via a workflow that follows: create account, log out, log in. It can also be coupled with an account-deletion test.

> We need a reliable user/pass and user configured (with proper permissions) to ensure
> our tests have access to stuff like wiki edit / page creation features.

In the past we've hand crafted accounts with proper permissions and kept the credentials to accounts that need to persist in a separate config file. Can this be done during a deploy to dev/stage, allowing a script to create known users with specific permissions that tests can then use?
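The deploy-time provisioning suggested in the last paragraph could look roughly like this (a framework-agnostic sketch; in kuma this would go through the Django user model, and the account names and permission strings here are made up):

```python
def provision_test_accounts(existing_users, wanted):
    """Idempotently ensure the known test accounts exist with the right permissions.

    `existing_users` maps username -> set of permission strings and stands in
    for the real user store; `wanted` has the same shape. Returns the usernames
    that were created or had permissions added.
    """
    changed = []
    for username, perms in wanted.items():
        current = existing_users.get(username)
        if current is None or not perms <= current:
            # Create the account, or add any missing permissions to it.
            existing_users[username] = (current or set()) | perms
            changed.append(username)
    return changed
```

Because the function is idempotent, it can run on every deploy to dev/stage: the first run creates the accounts, and later runs are no-ops unless permissions drift.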
I'd like to do both. I.e., test the full flow of create account, sign in, edit wiki, sign out; AND use a test account with the proper admin permissions to test the template and admin features.

So, most immediately, we should create a test admin account for both stage & prod and store its persona credentials in commander_settings.INTERN_USERNAME and commander_settings.INTERN_PASSWORD.

:davidwalsh - you said you wanted to make a new account besides mdn-services@mozilla.com? What account should it be?
Flags: needinfo?(dwalsh)
groovecoder: PM the creds and/or leave them in a text file on one of the MDN servers.  I'll add them to the commander configs.
Flags: needinfo?(nmaul)
Flags: needinfo?(cliang)
command settings values are in /home/lcrouch/intern-bs-values on developeradm.
Flags: needinfo?(dwalsh)
Commits pushed to master at https://github.com/mozilla/kuma

https://github.com/mozilla/kuma/commit/0aeac5a811a93c63137451b56e6c3bb889cfa9ef
bug 1184670 - Don't allow admin tests on prod

https://github.com/mozilla/kuma/commit/f39359b0e419c5498bb719db1b6cc9bb78a70ccc
Merge pull request #3558 from mozilla/1184670-admin

bug 1184670 - Don't allow admin tests on prod
We've migrated from Intern to Python-based Selenium tests with pytest. Selenium tests are run in Jenkins with Firefox and Chrome against staging when pushing to the stage-integration-tests branch. Firing off the tests is still manual, but part of the deployment process.

https://kuma.readthedocs.io/en/latest/deploy.html

The tests are not solid. They occasionally break, especially around Selenium updates, but sometimes because staging is waking up from not being used. I consider solid tests a pre-condition for further automation. We'll continue that work under other bugs, like bug 1402950.
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → INCOMPLETE
Product: developer.mozilla.org → developer.mozilla.org Graveyard