Closed Bug 498425 Opened 15 years ago Closed 12 years ago

Automatically run mozmill updates tests on release builds prior to release

Categories

(Release Engineering :: Release Automation: Other, defect, P3)


Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: bhearsum, Assigned: mozilla)


Details

(Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness][blocked])

Over the past two weeks we've hit two updater bugs: bug 498273 and bug 496917. The former, and probably the latter, would've been caught if we had a test that asked "can this build update?" We have update_verify, which tests whether the MARs for the current release are correct, and the same system could probably be used to test the current release's builds for their ability to update. Off the top of my head, it could look like this:
* Generate updates for the current release
* Generate extra snippets that point to a MAR, but contain a later version number
* Test the update through the in-product updater (Mossop says we can probably do this by dropping the MAR into the appdir)
* Test for success (exit code? some expected change in the appdir?)
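As a sketch of that last step, one cheap success check could compare the version recorded in application.ini before and after the updater runs; the paths below are placeholders and the check is illustrative only:

# Illustrative success check: the version in application.ini should change
# after the in-product updater has applied the staged MAR.
APPDIR=/path/to/firefox
BEFORE=$(grep '^Version=' "$APPDIR/application.ini")
# ... stage the MAR and let the in-product updater apply it here ...
AFTER=$(grep '^Version=' "$APPDIR/application.ini")
if [ "$BEFORE" = "$AFTER" ]; then
    echo "update did not apply" >&2
    exit 1
fi
echo "update applied: $BEFORE -> $AFTER"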
Mass move of bugs from Release Engineering:Future -> Release Engineering. See http://coop.deadsquid.com/2010/02/kiss-the-future-goodbye/ for more details.
Component: Release Engineering: Future → Release Engineering
Priority: -- → P3
Whiteboard: [automation][updates]
Assignee: nobody → ccooper
OS: Mac OS X → All
Hardware: x86 → All
CC-ing the mozmill folks on this. Is this something that we already use mozmill for as part of the QA release process? I don't want to duplicate effort here, but if these steps can be brought into the release automation to take some pre-release work off of QA's plate, I'm all for that.
QA doesn't have a test for this AFAIK.
We are testing:
* Updates are available on the specified channel
* Update paths like partial/complete w/ and w/o fallback
* The updater UI (mostly buttons to navigate to other wizard pages)
* The version number (build id), which has to be greater than before the update
* The locale, which should not have been changed
These tests that Henrik mentioned are automated using Mozmill to drive the browser, update it, check it upon restart etc. We are currently working quite hard on bug 516984 to get mozmill to play nicely with buildbot, and once that is done, we should be able to have a simple way for you to run these update tests from buildbot via mozmill at any time.
This one is a little bit more involved because there won't be an update generated for the build yet. We need to either set up a fake AUS with an update, or skip the "ping AUS" part and fake out the updater by putting the right files in the right places.
Whiteboard: [automation][updates] → [automation][updates][mozmill-automation]
Yup, I think the approach in comment 6 is correct. Specifically:
- generate the MARs for a nightly
- test them
- publish them on the nightly update channel only after they test green
Ben, as you mentioned a while back on IRC, it will take a bit of time to get this test cycle implemented. Do you have an idea when it could happen?

In the meantime we would like to enhance our Mozmill tests by executing the updates as soon as they are available. Right now we are starting the test-run at 8am PDT. So far there is no way for us to get a notification when updates are available. It would be really helpful if we could find a way to let our machines know that updates are available. Do you think that is doable in the short term?
It sounds like mozmill is the way to go here, assuming we can resolve bug 516984. Can the existing mozmill tests from comment #4 be copied/modified to do a manual install of the MAR files and check for validity? https://wiki.mozilla.org/Software_Update:Manually_Installing_a_MAR_file
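For reference, a very rough sketch of what the scripted half could look like; the directory layout and status-file values below are assumptions about how the updater stages pending updates, not a transcription of the wiki page, so treat the wiki page as authoritative:

# Sketch only: stage a complete MAR where the in-product updater expects it
# (assumed layout: <appdir>/updates/0/update.mar plus update.status = "pending"),
# then launch the build so the updater applies it on startup.
APPDIR=/path/to/firefox            # placeholder install dir
MAR=/path/to/firefox-complete.mar  # placeholder MAR

mkdir -p "$APPDIR/updates/0"
cp "$MAR" "$APPDIR/updates/0/update.mar"
echo "pending" > "$APPDIR/updates/0/update.status"

"$APPDIR/firefox" &
sleep 120
# Assumption: after a clean apply, update.status should read "succeeded".
cat "$APPDIR/updates/0/update.status"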
Whiteboard: [automation][updates][mozmill-automation] → [automation][updates][mozmill-automation][oldbugs][triagefollowup]
(In reply to comment #9)
> Can the existing mozmill tests from comment #4 be copied/modified to do a
> manual install of the MAR files and check for validity?

Sorry, seems like your reply didn't make my inbox. :(

While reading the wiki page I don't see any step which requires tests inside the browser. Everything has to be done from the command line, right? What would be the benefit of using Mozmill? Also, I thought that this type of test is already running on your machines? What type of validity checks do you want to have?

I would still agree with the initial comment from Ben that using an update snippet for those MAR files and testing the update with the real user interface is the way we should go.
(In reply to comment #10)
> While reading the wiki page I don't see any step which requires tests inside
> the browser. Everything has to be done from the command line, right? What would
> be the benefit of using Mozmill? Also, I thought that this type of test is
> already running on your machines? What type of validity checks do you want to
> have?
>
> I would still agree with the initial comment from Ben that using an update
> snippet for those MAR files and testing the update with the real user
> interface is the way we should go.

Yes, I think we're talking about the same thing. Based on comment #4, you already have mozmill tests to verify that an update has occurred. What I'd like to know is whether it would be possible to install said update via the command-line steps rather than the in-browser UI and then perform the same checks.

If mozmill can't do the command-line part, we could script that part up in the release factories, and then run (a subset of) the mozmill update verification steps via buildbot.

Can mozmill perform command-line ops, or is it limited to driving the browser? That will let me know how much code I'll need to write, and I can start getting it ready.
Not sure if we have the same expectations for running those Mozmill tests on your side. Issues like bug 601469 have raised the importance of running our tests before we offer the update on our various update channels. We will not be able to find bugs like the one above when running the updates from the command line. Our currently automatically executed test-runs kick in too late here; updates are already publicly available. Is that a different type of test on your side? If so, then I have misunderstood this bug and we have to file another bug. Ben?

(In reply to comment #11)
> (In reply to comment #10)
> to know is whether it would be possible to install said update via the
> command-line steps rather than the in-browser UI and then perform the same
> checks.
>
> If mozmill can't do the command-line part, we could script that part up in the
> release factories, and then run (a subset of) the mozmill update verification
> steps via buildbot.

Mozmill is able to execute commands in a shell. Therefore you could use the python callbacks or start such a command directly from within the browser. But keep in mind that Mozmill runs inside the browser as an extension, so the application is open and cannot be patched. I still think using Mozmill for the tests you described is overhead when a simple script can achieve the same thing.
(In reply to comment #12)
> Not sure if we have the same expectations for running those Mozmill tests on
> your side. Issues like bug 601469 have raised the importance of running our
> tests before we offer the update on our various update channels. We will not
> be able to find bugs like the one above when running the updates from the
> command line. Our currently automatically executed test-runs kick in too late
> here; updates are already publicly available. Is that a different type of test
> on your side? If so, then I have misunderstood this bug and we have to file
> another bug. Ben?

AIUI, this bug only covers pre-testing updates for releases (changing summary/whiteboard to reflect that). I agree that some sort of update checking would be a good idea for nightlies as well, but the two systems are sufficiently different that we should handle that in a separate bug. The manual MAR install script option makes more sense for the nightly case where we have no AUS test channels.

> (In reply to comment #11)
> Mozmill is able to execute commands in a shell. Therefore you could use the
> python callbacks or start such a command directly from within the browser. But
> keep in mind that Mozmill runs inside the browser as an extension, so the
> application is open and cannot be patched. I still think using Mozmill for the
> tests you described is overhead when a simple script can achieve the same
> thing.

For releases, we just want to move the current Mozmill update testing into release automation. AIUI, the betatest and releasetest channels are available immediately after pushing, so there's no reason for releng to do the push and then hand off to QA to run an automated update test. Let's kick it all off in one go.

We're tracking our snippet automation work in bug 594930.
Summary: need a way to test a builds ability to update, before we release → Run automated update tests on release builds prior to release
Whiteboard: [automation][updates][mozmill-automation][oldbugs][triagefollowup] → [release-process-improvement][automation][updates][mozmill-automation][oldbugs]
(In reply to comment #13)
> I agree that some sort of update checking would be a good idea for nightlies as
> well, but the two systems are sufficiently different that we should handle that
> in a separate bug. The manual MAR install script option makes more sense for
> the nightly case where we have no AUS test channels.

This appears to be bug 588396.
(In reply to comment #14)
> (In reply to comment #13)
> > I agree that some sort of update checking would be a good idea for nightlies as
> > well, but the two systems are sufficiently different that we should handle that
> > in a separate bug. The manual MAR install script option makes more sense for
> > the nightly case where we have no AUS test channels.
>
> This appears to be bug 588396.

I see. Thanks for the clarification. As I found out, bug 588398 is the real one I'm looking for here.
Move of Mozmill related project bugs to newly created components. You can filter out those emails by using "Mozmill-Tests-to-MozillaQA" as criteria.
Component: Release Engineering → Mozmill Automation
Product: mozilla.org → Mozilla QA
QA Contact: release → mozmill-automation
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs]
Version: other → unspecified
Moved by accident. Back to RelEng.
Component: Mozmill Automation → Release Engineering
Product: Mozilla QA → mozilla.org
QA Contact: mozmill-automation → release
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs]
Version: unspecified → other
Adjusting summary to reflect the current consensus. This bug is only about running existing mozmill update tests from release automation immediately after we've pushed to betatest/releasetest.

whimboo: where can I find the existing mozmill update tests? I'd like to try to start running them in our staging env this week.
Summary: Run automated update tests on release builds prior to release → Automatically run mozmill updates tests on release builds prior to release
(In reply to comment #18)
> Adjusting summary to reflect the current consensus. This bug is only about
> running existing mozmill update tests from release automation immediately after
> we've pushed to betatest/releasetest.

That means we now have two different ways we want to accomplish such a goal:
* Run the automation on the RelEng side, completely decoupled from QA
* Run the automation on the QA side, triggered via Pulse (bug 617816)

We should talk about which of the two options is best. Therefore I would really like to get Geo into the loop.

> whimboo: where can I find the existing mozmill update tests? I'd like to try to
> start running them in our staging env this week.

You only have to clone our automation repository for now:
http://hg.mozilla.org/qa/mozmill-automation/

The testrun_update.py script is what you are looking for. With --help you get all the necessary information on how to trigger an update run and report the results if wanted. You could use the following URL to report to our dashboard (http://mozmill-crowd.brasstacks.mozilla.com/db/) or report to a local file (file://report.json).

If you push to our dashboard you can check the results here:
http://mozmill-crowd.brasstacks.mozilla.com/#/update/reports
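For illustration, putting the pieces from this comment together, an invocation could look roughly like this (the .dmg path is a placeholder):

hg clone http://hg.mozilla.org/qa/mozmill-automation/
cd mozmill-automation
./testrun_update.py --help
./testrun_update.py --channel=releasetest --report=file://report.json /path/to/firefox-5.0b7.en-US.mac.dmg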
(In reply to comment #19)
> * Run the automation on the RelEng side, completely decoupled from QA
> * Run the automation on the QA side, triggered via Pulse (bug 617816)
>
> We should talk about which of the two options is best. Therefore I would really
> like to get Geo into the loop.

We can still direct all the reporting to brasstacks as is currently done, though.

The releng pool of slaves is much larger than the QA pool. If we're going to be sim-shipping and shipping more frequently, I'd rather use the larger pool.

> The testrun_update.py script is what you are looking for. With --help you get
> all the necessary information on how to trigger an update run and report the
> results if wanted. You could use the following URL to report to our dashboard
> (http://mozmill-crowd.brasstacks.mozilla.com/db/) or report to a local file
> (file://report.json).
>
> If you push to our dashboard you can check the results here:
> http://mozmill-crowd.brasstacks.mozilla.com/#/update/reports

Thanks for the info. Is there a staging version of brasstacks I could report to so as not to pollute the existing data with my testing results?
(In reply to comment #20)
> The releng pool of slaves is much larger than the QA pool. If we're going to be
> sim-shipping and shipping more frequently, I'd rather use the larger pool.

Fully agree. But we should talk about it, so we do not spend our time trying to get something to work that could be done in a better way. I will talk with Geo later today.

> Is there a staging version of brasstacks I could report to so as not to pollute
> the existing data with my testing results?

Push to mozmill-crowd. That doesn't matter. We do not really have reports there yet.
Henrik: After talking to Matt, we should continue forward with the download/staging script and other things needed to speed up/ease the process on our side. That may happen in parallel with developing the final process with releng. I'm fairly confident that no matter which way this discussion goes, there'll be times when we want to kick off something on-demand from within our systems only, so that won't be wasted effort.

Chris, et al. from releng: I agree that ultimately this would run better from within releng's systems; it's simply a question of when. It would definitely be great if we can feel the process out so we can get an idea of what would be needed to get this under releng's umbrella. So yeah, if it's not too much trouble please run what we have in your staging and let us know how it goes. From there we can figure out what it would look like to do something triggered within QA and transition vs. implement within releng in the first place.
(In reply to comment #22)
> After talking to Matt, we should continue forward with the download/staging
> script and other things needed to speed up/ease the process on our side. That
> may happen in parallel with developing the final process with releng.

I haven't said we should stop this. My comments from above were clearly about triggered updates, and not about our on-demand testing activities.
Status: NEW → ASSIGNED
Priority: P3 → P2
Finally returning to this.

whimboo: thanks for the info in comment #19. This will be helpful.

My concern right now is how to get a stable version of mozmill to run the update tests with. AIUI, for nightly testing on m-c, we include mozmill with the packaged tests and install from there. This is not appropriate for releases though. We need to be running a known-stable version of mozmill that we can test and rev at defined intervals.

Do we have such tagging in place right now for mozmill, i.e. how does QA ensure mozmill consistency from one release to the next? That's something I'd like to piggyback on for the releng work.
(In reply to comment #24)
> My concern right now is how to get a stable version of mozmill to run the
> update tests with. AIUI, for nightly testing on m-c, we include mozmill with
> the packaged tests and install from there. This is not appropriate for releases
> though. We need to be running a known-stable version of mozmill that we can
> test and rev at defined intervals.

We have similar requirements for our Mozmill Crowd extension. Users of that extension should only use the latest stable release, and we decide when we want to upgrade them to the next release. Therefore work happened to create a virtual environment for our tests. It can be found in the Mozmill-Crowd repository:
https://github.com/whimboo/mozmill-crowd/tree/master/environments

> Do we have such tagging in place right now for mozmill, i.e. how does QA ensure
> mozmill consistency from one release to the next? That's something I'd like to
> piggyback on for the releng work.

It depends on how you want to integrate Mozmill. You can use a copy from the git repository with a given source stamp or install a given version via PyPI. That's what we are doing in our test environment.

The QA automation team is working closely together with Clint's team to ensure the best quality. We are testing all fixed bugs to make sure we hopefully regress nothing. Given our existing set of tests we have a good base for detecting regressions.
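As an illustration of the pinning idea, a minimal virtualenv setup along those lines could look like this; the version numbers are examples only, not a recommendation:

# Isolated environment with pinned versions so release automation doesn't pick up
# whatever happens to be the newest Mozmill release.
virtualenv --distribute mozmill-env
mozmill-env/bin/pip install mercurial==1.7.3   # example pinned dependency
mozmill-env/bin/pip install mozmill==1.5.4     # example of a known-good version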
(In reply to comment #24)
> Finally returning to this.
>
> whimboo: thanks for the info in comment #19. This will be helpful.
>
> My concern right now is how to get a stable version of mozmill to run the
> update tests with. AIUI, for nightly testing on m-c, we include mozmill with
> the packaged tests and install from there. This is not appropriate for releases
> though. We need to be running a known-stable version of mozmill that we can
> test and rev at defined intervals.

The version in m-c is the latest released version of mozmill. The development of mozmill happens in github: https://github.com/mozautomation/mozmill. When we release a new version of mozmill, we update the code in m-c. The only other time we update the mozmill code in m-c is for emergency regression fixes.
(In reply to comment #19)
> The testrun_update.py script is what you are looking for. With --help you get
> all the necessary information on how to trigger an update run and report the
> results if wanted. You could use the following URL to report to our dashboard
> (http://mozmill-crowd.brasstacks.mozilla.com/db/) or report to a local file
> (file://report.json).

Can I get some more information here about how exactly QA is invoking the mozmill update tests, i.e. which command-line options you're setting and where you're getting the variables from? I want to be invoking them in exactly the same way, not doing some approximation.

The same goes for the actual installation of mozmill. I suspect we'll need to do some kind of virtualenv setup in production, but I'd like to know how the QA testing setup is created initially so I can get as close as possible.
(In reply to comment #27)
> Can I get some more information here about how exactly QA is invoking the
> mozmill update tests, i.e. which command-line options you're setting and where
> you're getting the variables from? I want to be invoking them in exactly the
> same way, not doing some approximation.
>
> The same goes for the actual installation of mozmill. I suspect we'll need to
> do some kind of virtualenv setup in production, but I'd like to know how the QA
> testing setup is created initially so I can get as close as possible.

I should note that I *do* actually have the mozmill update tests running here on Mac, at least. Need to try Linux and Windows still. I just want to make sure I'm running the test the same way QA would when I go to add it to buildbot.
Chris, can we talk about that during the All Hands? I think it would make more sense to sit together first in a small group.
I haven't made any progress on this, and Aki has talked about writing it in mozharness to make it easy for others to work on/extend too.
Assignee: coop → aki
Status: ASSIGNED → NEW
Priority: P2 → P3
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs] → [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness]
I haven't gotten any feedback so far, so I wonder what the issue was here. Also, what's mozharness? We should work together on a good solution which fits all of our needs and will make it easier to share code in the future.
Mozharness is a new set of interfaces for tools to talk to the buildbot infrastructure. It is what we are driving the tools toward in terms of standardization and using it as a means to drive a better component design (like you've seen us work toward with the low-level bits of mozmill). Mozharness has a simple integration solution that would work in the short term for your mozmill tests and you wouldn't need to change much (if any) code to work with it.
Do we have any documentation for mozharness? I wonder how our automation scripts would play with it.
(In reply to comment #33)
> Do we have any documentation for mozharness? I wonder how our automation
> scripts would play with it.

We are not quite there yet, but here is the repo:
http://hg.mozilla.org/build/mozharness

Instead of buildbot calling directly to whatever steps, buildbot would call a mozharness script that knows how to run various types of jobs. In this case, it would be to call mozmill.
Not really, no, but essentially think of it as a python wrapper with configuration and logging. Once that's in place, the idea is that developers, or qa, can use the same scripts as releng/buildbot, just different config files.
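To make the "same scripts, different config files" idea concrete, usage might eventually look something like the lines below; the script name matches the work-in-progress script mentioned later in this bug, but the config file paths are made up:

# Hypothetical: the same mozharness script driven by different configs.
python scripts/mozmill_updates.py --config-file configs/mozmill_updates/releng.py --channel betatest
python scripts/mozmill_updates.py --config-file configs/mozmill_updates/qa_local.py --channel betatest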
Aki, without checking the code I assume mozharness is a plain script to execute some tests, but doesn't include any code to distribute tasks across machines? We will have to implement such a system in the near future, so I would like to see that we have kinda the same entry points. That's probably important when you want to run our tests on the releng side. Can we have a call or get a mail thread started? Would be great.
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness] → [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness][triagefollowup]
I think the plan of record here is that either Aki or Armen are going to pick this up once they have time. Armen has some other mozharness bugs to work on, but it's unclear yet whether this bug should be tackled before those other ones. I favor him fixing this bug first because it would drive our oldbug count down and give Armen experience working with mozharness that he could then use for unittests/talos. As Aki points out though, Armen would have a working system to mimic if he tackled the unittest mozharness scripts first.
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness][triagefollowup] → [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness]
I'm happy to help/advise if needed, fwiw
On slave:
venv with mozmill, mercurial (mercurial necessary?)

Add to mozharness:
download firefox (base transfer)
extract firefox (firefox specific)
new profile (firefox specific)
massage profile? (firefox specific)

* clone mozharness
* clone mozmill-tests
* download firefox
* extract firefox
* new profile
** put update prefs in new profile?
* mozmill -b FIREFOXPATH -t TESTPATH --show-all -p PROFILEDIR --debug

mozmill-restart -b ~/Desktop/ffx50b1.app/Contents/MacOS/firefox-bin -p ~/.emptyprofile -t tests/update/testDirectUpdate/ --show-all --debug

This gave 3 pass/3 fail/1 skipped.

Not sure if I should be using mozmill or mozmill-restart or if I need to do other things, but I have enough to start on some of these pieces in mozharness. I need to see if mozmill's logger conflicts with mozharness' logger.
(In reply to comment #39)
> venv with mozmill, mercurial (mercurial necessary?)

Yes, Mercurial is necessary. But the failure you get only happens with the latest releases, so please install v1.7.3 via pip.

> Add to mozharness:
> download firefox (base transfer)
> extract firefox (firefox specific)

You don't have to extract. You can directly specify an installer or archive.

> new profile (firefox specific)

Not necessary, we always create a fresh profile for the tests.

> massage profile? (firefox specific)

Nope. You are already set.

> * clone mozharness
> * clone mozmill-tests

You will need mozmill-automation, not mozmill-tests. The automation script automatically clones the tests.

> * download firefox
> * extract firefox
> * new profile

The last two steps are not needed.

> ** put update prefs in new profile?

No change to do here.

> * mozmill -b FIREFOXPATH -t TESTPATH --show-all -p PROFILEDIR --debug
>
> mozmill-restart -b ~/Desktop/ffx50b1.app/Contents/MacOS/firefox-bin -p
> ~/.emptyprofile -t tests/update/testDirectUpdate/ --show-all --debug

You can't use mozmill itself to run our tests. Use the testrun_update.py script from the mozmill-automation repository here:

testrun_update.py --channel=XYZ [--no-fallback] firefox-xyz.dmg

With this command it should work and show the results in wiki format on the command line. Finally, you should use --report to save everything as a JSON report.

Just ask me if something is unclear.
./testrun_update.py --channel=releasetest ~/Desktop/Firefox\ 5.0b1.dmg

gives me

*** Cloning repository to '/var/folders/RG/RGg-TnxsFXi4u-MLP4SiAU+++TI/-Tmp-/tmpnF0EU2.mozmill-tests'
/src/talosrunner/mozmill-automation/libs/testrun.py:179: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  e.message)
Traceback (most recent call last):
  File "./testrun_update.py", line 107, in <module>
    main()
  File "./testrun_update.py", line 104, in main
    run.run()
  File "/src/talosrunner/mozmill-automation/libs/testrun.py", line 657, in run
    TestRun.run(self)
  File "/src/talosrunner/mozmill-automation/libs/testrun.py", line 278, in run
    self.clone_repository()
  File "/src/talosrunner/mozmill-automation/libs/testrun.py", line 179, in clone_repository
    e.message)
Exception: Failure in setting up the mozmill-tests repository.
'str' object has no attribute 'get'

using mercurial 1.9 and mac 5.0b1.

I tried commenting out the cloning portion in libs/testrun.py, but it seems to use that information in several places.

I can work on the download portion, as well as whether mozharness is doable/the best solution, without being blocked by this, however.
(In reply to comment #41)
> using mercurial 1.9 and mac 5.0b1.

Please see my last comment. You have to use Mercurial 1.7.3.

> I can work on the download portion, as well as whether mozharness is
> doable/the best solution, without being blocked by this, however.

For the download you can also use download.py, which has support for releases, release candidate builds, and daily builds.
pip install http://mercurial.selenic.com/release/mercurial-1.7.3.tar.gz

worked for that.

download.py worked for me; I'll have to see if it's versatile enough for everything we need.

./download.py -p mac -v 5.0b7
./testrun_update.py --channel=beta --report=file://report.json firefox-5.0b7.en-US.mac.dmg

worked. Much smoother than I expected :)

I'll look into how to wrap/schedule this and what kind of test matrix we want.
Depends on: 671724
(In reply to comment #43)
> pip install http://mercurial.selenic.com/release/mercurial-1.7.3.tar.gz
> worked for that.

If that causes a problem on your end, I will have to investigate why the latest Mercurial release doesn't work and fix it. Is it urgent?

> download.py worked for me; I'll have to see if it's versatile enough for
> everything we need.

Sure. If you miss something or find bugs, let me know. I'm kinda open to enhancing this script.

> ./download.py -p mac -v 5.0b7
> ./testrun_update.py --channel=beta --report=file://report.json
> firefox-5.0b7.en-US.mac.dmg

You can also use --directory to download all files to a specific folder. When you call testrun_update.py you can also specify only the folder. The script will fetch all available builds from this folder and execute them sequentially.

> Much smoother than I expected :)
> I'll look into how to wrap/schedule this and what kind of test matrix we
> want.

Great. Besides that, we are also working on the on-demand update script, which you are probably interested in too. See bug 657081.
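For example, the directory-based flow described above might look roughly like this (the target directory is arbitrary; check --help for the exact option syntax):

./download.py -p mac -v 5.0b7 --directory=builds
./testrun_update.py --channel=beta --report=file://report.json builds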
Conversation is happening both here and in bug 588398.

Per bhearsum (original reporter), this bug is about running a matrix of update tests of previous releases/betas, in mozmill, at release time, to verify that they update correctly.

The next step here is to create a sample matrix of previous releases/locales to test against which channels, and create release builders to run those in an automated fashion. Retrying on unexpected failures + chunking large matrices for parallelization would be good to have as well.

I think I have what I need for this; commenting here for completeness.
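As a starting point, such a matrix could be sketched as a simple loop over previous versions; the versions, platform, and channel below are placeholders, not an agreed-upon matrix:

# Placeholder matrix: previous betas/releases to update-test at release time.
for v in 4.0.1 5.0b7; do
    ./download.py -p mac -v "$v" --directory="builds/$v"
    ./testrun_update.py --channel=releasetest --report="file://report-$v.json" "builds/$v"
done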
I'm able to run this on a staging talos snow leopard box by:

mkdir /Users/cltbld/aki
cd /Users/cltbld/aki
unset PYTHONPATH
/tools/buildbot-0.8.4-pre-moz2/bin/python /tools/misc-python/virtualenv.py --distribute /Users/cltbld/aki/venv
venv/bin/pip install http://people.mozilla.org/~asasaki/mozmill-deps/mercurial-1.7.3.tar.gz
venv/bin/pip install http://people.mozilla.org/~asasaki/mozmill-deps/mozrunner-2.5.5.tar.gz
venv/bin/pip install http://people.mozilla.org/~asasaki/mozmill-deps/jsbridge-2.4.4.tar.gz
venv/bin/pip install http://people.mozilla.org/~asasaki/mozmill-deps/ManifestDestiny-0.2.2.tar.gz
venv/bin/pip install http://people.mozilla.org/~asasaki/mozmill-deps/mozmill-1.5.4.tar.gz

# rsync w.i.p. mozharness over into git-mozharness/

# unset PYTHONPATH, which is set in .bash_profile for some reason
unset PYTHONPATH

# run tests
venv/bin/python git-mozharness/scripts/mozmill_updates.py --channel beta --venv-path /Users/cltbld/aki/venv

After this, the tests proceed to fail because we're blocking access to the outside world from .build.m.o in bug 617414. We either need to decide not to do that and open things back up, or we need download.m.o to point to internal mirrors only (bug 646076). Marking dependencies.
Depends on: 646046, 646076
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness] → [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness][triagefollowup][blocked]
Triage: marked this Old Bug as blocked since I can't proceed until we have internal-only mirrors for staging; also marked as triagefollowup since I won't have time in Q4 even if it were unblocked.
(In reply to Aki Sasaki [:aki] from comment #47)
> Triage: marked this Old Bug as blocked since I can't proceed until we have
> internal-only mirrors for staging; also marked as triagefollowup since I
> won't have time in Q4 even if it were unblocked.

Given that bug 613620 (blocking bug 646076) isn't going to see work for a few weeks still, I think you can just let this one lie, unless you're really looking to unload it.
Whiteboard: [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness][triagefollowup][blocked] → [release-process-improvement][automation][updates][mozmill-automation][oldbugs][mozharness][blocked]
Chris, do we currently send Pulse messages out for release/beta candidate builds? I would assume so. I would assume the routing_key will contain mozilla-release or mozilla-beta as branch? I haven't checked that yet while I'm still working on the system for daily builds on central and aurora. Thanks.
(In reply to Henrik Skupin (:whimboo) from comment #49)
> Chris, do we currently send Pulse messages out for release/beta candidate
> builds? I would assume so. I would assume the routing_key will contain
> mozilla-release or mozilla-beta as branch? I haven't checked that yet while
> I'm still working on the system for daily builds on central and aurora.
> Thanks.

Yeah, we should be. The routing key would be prefixed with 'release-', so 'release-mozilla-beta....'
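A consumer on the QA side could then filter on that prefix; a rough sketch (the exact routing-key layout beyond the 'release-' prefix is an assumption):

# Rough filter on the Pulse routing key for release/beta candidate builds.
case "$ROUTING_KEY" in
    release-mozilla-beta*|release-mozilla-release*)
        echo "candidate build detected -- trigger the update testrun" ;;
    *)
        ;; # ignore everything else
esac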
No longer blocks: hg-automation
Mass move of bugs to Release Automation component.
Component: Release Engineering → Release Engineering: Automation (Release Automation)
No longer blocks: hg-automation
WONTFIX'ing this because we want to run these tests on RelEng hardware instead. This is tracked in bug 813629, and we're hoping to look at it in the first half of next year.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WONTFIX
Product: mozilla.org → Release Engineering