Bug 430200 (Closed) · Opened 16 years ago · Closed 16 years ago

Creating Nightly and "on change" ARM builds for Fennec

Categories

(Release Engineering :: General, defect, P1)

Hardware: All
OS: Linux

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: mfinkle, Assigned: joduinn)

References

Details

(Keywords: mobile)

Attachments

(9 files, 10 obsolete files)

223 bytes, patch
bhearsum
: review+
pavlov
: review+
Details | Diff | Splinter Review
580 bytes, patch
bhearsum
: review+
Details | Diff | Splinter Review
5.22 KB, patch
nthomas
: review-
Details | Diff | Splinter Review
497 bytes, patch
nthomas
: review+
Details | Diff | Splinter Review
7.65 KB, patch
bhearsum
: review+
Details | Diff | Splinter Review
7.64 KB, patch
bhearsum
: review+
Details | Diff | Splinter Review
7.17 KB, patch
bhearsum
: review+
Details | Diff | Splinter Review
669 bytes, patch
bhearsum
: review+
Details | Diff | Splinter Review
4.39 KB, patch
nthomas
: review+
Details | Diff | Splinter Review
There are nightly XULRunner ARM builds which include SDK:
ftp://ftp.mozilla.org/pub/mobile/nightly/

We have created a buildable configuration of fennec (mobile browser front end) and would like to create nightly (or "on change") builds as well. The source for the browser is still in SVN and can be found here:
http://viewvc.svn.mozilla.org/vc/projects/fennec/mobile/

We would love to move the source _out_ of SVN and into either the 1.9 CVS tree or an hg repo for mobile - whichever is decided to be the best plan.

Since the code is still under development but easily accessible for review, I am not attaching a patch until we have a clear plan for where the code should live.

Next steps:
* What is blocking build from starting nightly builds from fennec
* Decide where the code should reside (CVS or hg)
* Review the configuration parts of the code
I don't think there's any reason to land this code in CVS. It should just live in its own repository, either SVN or hg.

Do you have build instructions?
Build instructions are fairly simple:
1. Pull a mozilla xulrunner tree
2. Build xulrunner or use an SDK
3. Pull the fennec code into "mozilla/mobile"
4. --enable-application=mobile in your mozconfig
5. make -f client.mk build

I will attach a sample mozconfig (the one I use)

The project structure is based on this:
http://developer.mozilla.org/en/docs/Creating_XULRunner_Apps_with_the_Mozilla_Build_System
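For illustration, a minimal mozconfig along these lines might look like the following (a sketch; the paths are placeholders and the attached sample is authoritative):

# Hypothetical minimal mozconfig for building fennec against a prebuilt
# XULRunner SDK; the SDK path below is an example placeholder.
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir
ac_add_options --enable-application=mobile
ac_add_options --with-libxul-sdk=/path/to/xulrunner-sdk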
Keywords: mobile
Attached file Sample mozconfig for mobile browser (obsolete) —
This mozconfig assumes building with a xulrunner SDK. You could also use a mozconfig that builds both xulrunner and fennec at the same time.
(In reply to comment #0)
> Next steps:
> * What is blocking build from starting nightly builds from fennec

We need to know the scope of this project first. For that, additional details like:
- answers to the other questions in this bug, like CVS vs. hg, daily builds or builds on checkin, and the location of working config files, so that we are automating existing builds (not debugging builds/compilers). 
- hardware allocation: what machines are you planning to run this on? 
- plans on what tests you want run: talos? unittest? 
- plans on what builds you want: debug *and* opt builds?

...would be needed so we understand exactly what you are looking for, and can give you accurate estimates. 

(I guess this is additional to the existing linux mobile builds, and proposed windows mobile builds? ... and not replacement of any?)
As far as I can tell, fennec does not currently have any binary components? Is this correct? If so, we don't need a slave with an ARM toolchain: any old slave will do. Unless there are platform ifdefs in the chrome, it also means that the resulting package is cross-platform. Is this the case?
(In reply to comment #5)
> (In reply to comment #0)
> > Next steps:
> > * What is blocking build from starting nightly builds from fennec
> 
> We need to know the scope of this project first. For that, additional details
> like:
> - answers to the other questions in this bug like cvs-vs-hg, daily build or
> builds on checkin, location of working config files, so we are automating
> existing builds (not debuging builds/compiler). 

- The code is currently in SVN. Unless we need to move to hg right away, I'm ok with leaving it.
- Builds on checkin would be best

> - hardware allocation: what machines are you planning to run this on? 
The final target is currently a Nokia n8x0, but we can build on any slave since we have no binary code. Since xulrunner for ARM also needs to be built _and_ we want to bundle with that xulrunner, we should probably build on the same slave as the ARM xulrunner.

> - plans on what tests you want run: talos? unittest? 
We would like to run talos and unittest, but need to figure that part out (running on the device?)

> - plans on what builds you want: debug *and* opt builds?
Fennec has no binary code, so there are no debug/opt issues yet

> 
> ...would be needed so we understand what exactly you are looking for, and give
> you some accurate estimates. 
> 
> (I guess this is additional to the existing linux mobile builds, and proposed
> windows mobile builds? ... and not replacement of any?)
> 
In addition to those builds. But we want to build fennec _with_ those xulrunner builds (as stated above)

(In reply to comment #6)
> As far as I can tell, fennec does not currently have any binary components? Is
> this correct? If so, we don't need a slave with an ARM toolchain: any old slave
> will do. Unless there are platform ifdefs in the chrome, it also means that the
> resulting package is cross-platform. Is this the case?
> 
This will likely be the case when we move to having ARM and Windows Mobile builds. Building fennec with xulrunner (for the desired platform) seems like a good approach.
For talos and unit tests we'll need to run the resulting build on a device.  Stuart got talos going on an n810 this week, so we should be good to go from that side.  I'd imagine that we would set up an n810 with usb networking connected to the slave that's making the builds.  With such a setup you can scp the build over (or nfs-mount the drive) and run the tests over ssh.
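Concretely, something like this (a sketch; the device address, user, and run_tests.sh are hypothetical):

# Push the fresh build to the n810 over the usb network, then run the
# tests over ssh; the IP, user, paths, and run_tests.sh are placeholders.
scp fennec-*.tar.bz2 user@192.168.2.15:/home/user/builds/
ssh user@192.168.2.15 'cd /home/user/builds && tar xjf fennec-*.tar.bz2 && ./run_tests.sh'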
Any reason we can't just have the N810 be the buildbot slave directly?

In any case, I think we should have separate bugs for doing the builds and running the various kinds of tests on them. Performance-testing in a mobile context is likely to be a complicated affair to get right.

So if we can make this bug just about producing builds: what kind of package does the build system create? A zip file?

It sounds like adding a build type to do fennec builds on the same slave that the mobile XULRunner is on would be relatively simple.
(In reply to comment #9)
> Any reason we can't just have the N810 be the buildbot slave directly?
> 

We had trouble with some prerequisite packages when trying to do this in Scratchbox -- but I believe they were rooted in QEMU problems, so this should be do-able on an actual device.


> It sounds like adding a build type to do fennec builds on the same slave that
> the mobile XULRunner is on would be relatively simple.
> 

Should be.
(In reply to comment #9)
> Any reason we can't just have the N810 be the buildbot slave directly?
No, there's no problem with having the device be the buildbot slave for testing.  The only question is how to do the networking: would we connect it to a usb network run from the master, or just use a wifi connection?

> 
> In any case, I think we should have separate bugs for doing the builds and
> running the various kinds of tests on them. Performance-testing in a mobile
> context is likely to be a complicated affair to get right.
> 
> So if we can make this bug just about producing builds: what kind of package
> does the build system create? A zip file?


Currently the buildbot produces tarballs.  We'd like to ship .deb installers once bug 418851 (awaiting review) and bug 418852 are resolved.
> 
> It sounds like adding a build type to do fennec builds on the same slave that
> the mobile XULRunner is on would be relatively simple.
> 

I'm not talking about the XULRunner package. I'm talking about the fennec package... what kind of file is it?
@brad, stuart is going to need the fix in bug 426444 to run talos against an XR app.

@bsmedberg, fennec atm is just a bunch of chrome and js. You can view everything here: http://viewvc.svn.mozilla.org/vc/projects/fennec/fennec/browser/

(In reply to comment #13)
> @bsmedberg, fennec atm is just a bunch of chrome and js. You can view
> everything here:
> http://viewvc.svn.mozilla.org/vc/projects/fennec/fennec/browser/
> 

Doug, actually, that branch is dead now. All new code will go here:
http://viewvc.svn.mozilla.org/vc/projects/fennec/mobile/
Blocks: 430651
For those people building xulrunner _and_ fennec at the same time
Attachment #319812 - Attachment mime type: application/octet-stream → text/plain
So we will need this config to be built when mozilla-central is open for check-ins and we have official builds going from there, right?
right, this is the config we are using to build both xulrunner and fennec.

We must also include:
ac_add_app_options xulrunner --with-arm-kuser
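Roughly, the combined config looks like this (a sketch; the attached mozconfig is authoritative, and the SDK path placeholder is illustrative):

# Sketch of a multi-project mozconfig building xulrunner and fennec together.
mk_add_options MOZ_BUILD_PROJECTS="xulrunner mobile"
ac_add_app_options mobile --enable-application=mobile
ac_add_app_options mobile --with-libxul-sdk=<path-to-the-fresh-xulrunner-dist>
ac_add_app_options xulrunner --enable-application=xulrunner
ac_add_app_options xulrunner --with-arm-kuser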
No longer blocks: 430651
Depends on: 430651
Flags: blocking-fennec1.0+
Priority: -- → P1
Target Milestone: --- → Fennec M6
Attached file buildbot master (obsolete) —
Here is the buildbot master I put together for building fennec.  Some of the paths might need to be changed and we might want to set something differently for cloning and then updating.  I'm not sure how to make this build on checkin and could use some help there.  Also, how do we get this to do nightly builds as well?
Attachment #317013 - Attachment is obsolete: true
Attachment #319812 - Attachment is obsolete: true
Attachment #330635 - Flags: review?(bhearsum)
Attached file mobile mozconfig (obsolete) —
this should get checked in to the build configs repo that is pulled and copied by the buildbot master file.
Attachment #330637 - Flags: review?(bhearsum)
Is this bug dependent on bug 445191?
no
Comment on attachment 330635 [details]
buildbot master

What are the expectations here...? Are you just looking for feedback on a setup you will run, or are you expecting us to take this over?
I'm expecting some feedback on how to make things better, and then for this to get checked in to the proper places (buildbot-configs/mobile, e.g.) so that we can start getting automated builds done on Mozilla servers.
Comment on attachment 330637 [details]
mobile mozconfig

You guys know better than me when it comes to a mobile mozconfig...
Attachment #330637 - Flags: review?(bhearsum)
Comment on attachment 330635 [details]
buildbot master

Here is some very quick feedback:

>####### BUILDERS
>CVSROOT = ':ext:stgbld@cvs.mozilla.org:/cvsroot'
>OBJDIR = 'objdir'
>CONFIG_REPO_URL = 'http://hg.mozilla.org/build/buildbot-configs'
>CONFIG_SUBDIR = 'mobile'
>STAGE_USERNAME = 'mobilebld'
>STAGE_SERVER = 'stage.mozilla.org'
>STAGE_BASE_PATH = '/home/mobilebld/mobile-builds'

probably want to push to /home/ftp/pub/mobile once this is up and running



this looks fine for something you guys are running.
Attachment #330635 - Flags: review?(bhearsum)
Attachment #330635 - Flags: review?(joduinn)
Component: General → Release Engineering
Flags: blocking-fennec1.0+
Product: Fennec → mozilla.org
QA Contact: general → release
Target Milestone: Fennec M6 → ---
Version: Trunk → other
OS: All → Linux
Summary: Creating Nightly or "on change" ARM builds for Fennec → Creating Nightly and "on change" ARM builds for Fennec
Assignee: nobody → joduinn
this bug mentions svn a lot, but we've since moved out of it and into hg, so please ignore all those comments.
Quick update:

1) Installed updated scratchbox and tools onto staging moz2-linux-slave04.b.m.o using instructions on https://wiki.mozilla.org/Mobile/Build/cs2007q3#Upgrading_a_centos_ref_vm_with_scratchbox_installed

2) Manually followed the steps listed in the attached master.cfg. The first attempt to build last night failed because of a code bug that wouldn't compile! :-( A patch for that landed this morning. The next attempt to manually run the build steps on slave04 worked.

3) The two deb files produced by the build have been posted on people, and the location was sent to stuart/brad/dougt to look at and see if it works ok for them.

Next issues:
1) Need to review master.cfg attached to see if that approach is supportable in production.

2) Assuming the manually produced build is ok, I'll start rolling out the same set of changes to the production linux pool of slaves.

3) We're still hitting the following error, thrown when exiting scratchbox: 
...
 'import site' failed; use -v for traceback
 Traceback (most recent call last):
   File "/scratchbox/tools/bin/sb-conf", line 3, in ?
     import os, sys
 ImportError: No module named os
...
This happens *every* time we exit scratchbox, regardless of what we do within scratchbox. From irc with Stuart, Brad, this likely needs a change made to PYTHONPATH within the scratchbox script. Stuart/Brad to investigate...
Status: NEW → ASSIGNED
IIRC the problem is that we need to use a different Python for Buildbot/Twisted than the scratchbox scripts use...I think I worked around it by having .bash_profile on the slave use the correct ones for Buildbot, and setting the ones Twisted needs in the master.cfg....I'd have to refresh my memory though.
yeah, the problem is just that PYTHONHOME needs to be unset for scratchbox.  If there is a good way in the master to do that, we should be good.
From irc with Stuart just now, some options are:

1) unset PYTHONHOME in master.cfg when calling scratchbox
2) modify scratchbox to unset PYTHONHOME at beginning
3) create new script (stuart.sh?!) which unset PYTHONHOME, and then calls scratchbox. Change master.cfg to call stuart.sh
(In reply to comment #31)
> yeah, the problem is just PYTHONHOME needs to be unset for scratchbox.  If
> there is a good way in the master to do that, should be good.
> 

(In reply to comment #32)
> From irc with Stuart just now, some options are:
> 
> 1) unset PYTHONHOME in master.cfg when calling scratchbox
> 2) modify scratchbox to unset PYTHONHOME at beginning
> 3) create new script (stuart.sh?!) which unset PYTHONHOME, and then calls
> scratchbox. Change master.cfg to call stuart.sh
> 
errr... so are we sure PYTHONHOME is the cause?

Manually unsetting PYTHONHOME did not fix it. I tried the following simplified setup, and still got the error when exiting scratchbox:
$ echo $PYTHONHOME
/tools/python
$ PYTHONHOME=
$ echo $PYTHONHOME

$ /scratchbox/login ls
configs  dep  maemo-sdk-nokia-binaries_4.0.1  maemo-sdk-rootstrap_4.0.1_armel.tgz  maemo-sdk-rootstrap_4.0.1_i386.tgz
'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "/scratchbox/tools/bin/sb-conf", line 3, in ?
    import os, sys
ImportError: No module named os
$
$ 
(In reply to comment #32)
> 2) modify scratchbox to unset PYTHONHOME at beginning
This approach didn't work for me; instead I found that changes to /scratchbox/login broke my scratchbox install on slave04. Re-installed scratchbox.

> 3) create new script (stuart.sh?!) which unset PYTHONHOME, and then calls
> scratchbox. Change master.cfg to call stuart.sh.
Created a new tiny /scratchbox/moz_scratchbox script which unsets PYTHONHOME, and then calls the usual /scratchbox/login. This worked just fine, and solves the error messages noted in comment#29. 

I'm attaching moz_scratchbox for review, and suggest we change master.cfg to use /scratchbox/moz_scratchbox instead of /scratchbox/login. 
1) This new tiny script is called by build processes instead of /scratchbox/login. It unsets PYTHONHOME, and then calls the usual /scratchbox/login. This solves the error messages output by scratchbox; see examples in comment#29, and the sketch after this list. 

2) I created this file in the same directory, and with the same permissions, as /scratchbox/login. Not sure if this was required, but it seemed a good idea.
...
$ ls -la /scratchbox/login  /scratchbox/moz_scratchbox 
-rwxr-xr-x 1 root root 7041 Aug 15 16:26 /scratchbox/login
-rwxr-xr-x 1 root root  212 Aug 17 21:01 /scratchbox/moz_scratchbox
$ 

3) I feel this script should be checked in, but am not sure where it should live. The mobile repo in hg? The new RelEng tools repo in hg? Somewhere in CVS?
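For reference, the whole wrapper amounts to something like this (a sketch; the attached script is the authoritative version):

#!/bin/sh
# Unset PYTHONHOME so scratchbox's bundled Python tools (e.g. sb-conf) can
# find their own modules, then hand all arguments to the stock login script.
unset PYTHONHOME
exec /scratchbox/login "$@"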
Attachment #334232 - Flags: review?(bhearsum)
Attachment #334232 - Flags: review?(artparmet)
Attachment #334232 - Flags: review?(artparmet) → review?(pavlov)
Comment on attachment 334232 [details] [diff] [review]
Simple wrapper script to workaround PYTHONHOME problem with scratchbox.

This looks fine, no need to keep the commented out 'echo', though. This should live in hg.m.o/build/tools, imho. It is not part of the mobile-browser code.
Attachment #334232 - Flags: review?(bhearsum) → review+
Attachment #334232 - Flags: review?(pavlov) → review+
Where should the mobile mozconfig live, now that we're moving from cvs to hg? The one attached above (https://bugzilla.mozilla.org/attachment.cgi?id=330637) seems fine; I'm just trying to figure out where it should live.
Has the setup of the scratchbox environment referenced in the 'buildbot master' attachment been documented? For example, do you use http://scratchbox.org/download/files/sbox-releases/apophis/tarball/scratchbox-toolchain-cs2007q3-glibc2.5-arm7-1.0.7-3-i386.tar.gz or http://scratchbox.org/download/files/sbox-releases/apophis/tarball/scratchbox-toolchain-arm-gcc4.1-uclibc20061004-1.0.4-i386.tar.gz, or do you get the tools somewhere else? Do you use just the debian-sarge configuration or some other? There are lots of ways to set up a scratchbox environment. Is there some document in which you have captured how you are setting it up for the nightly builds?
John:

in the buildbot-configs repo under /mobile/linux-arm/mozconfig I assume.
(In reply to comment #37)
> Where should the mobile mozconfig live, now that we're moving from cvs to hg?
> The attached above (https://bugzilla.mozilla.org/attachment.cgi?id=330637)
> seems fine, I'm just trying to figure where it should live.
> 

Depends. If mobile gets its own master it goes in /buildbot-configs/mobile/linux-arm. If we stick it on the "mozilla2" master it goes in /buildbot-configs/mozilla2/linux-arm.
Attachment #337501 - Flags: review?(bhearsum) → review-
Comment on attachment 337501 [details] [diff] [review]
mobile mozconfig as hg patch for the build/buildbot-configs repo

>+++ b/mozilla2/linux-arm/mozconfig
>+mk_add_options AUTOCONF=autoconf2.13
>+

afaik, we don't need to define AUTOCONF on our build machines, please remove this.

other than that, looks fine. r=bhearsum with that change.
removed the autoconf line, but otherwise same.
Attachment #337501 - Attachment is obsolete: true
Attachment #337910 - Flags: review?(bhearsum)
Attachment #337910 - Flags: review?(bhearsum) → review+
Comment on attachment 337910 [details] [diff] [review]
[checked in] mobile mozconfig as hg patch for the build/buildbot-configs repo

landed in mozilla2/ and mozilla2-staging/

changeset:   313:ca7de8696361
Attachment #337910 - Attachment description: mobile mozconfig as hg patch for the build/buildbot-configs repo → [checked in] mobile mozconfig as hg patch for the build/buildbot-configs repo
To work around bug#454881, I've added a step to the build automation which appends:
ac_add_options --disable-tests

...to the checked out version of build/buildbot-configs/mozilla2/linux-arm/mozconfig.
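In shell terms the added step amounts to something like this (a sketch, not the literal automation code):

# Append the workaround to the freshly checked-out mozconfig before building.
echo 'ac_add_options --disable-tests' >> build/buildbot-configs/mozilla2/linux-arm/mozconfig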
1) Removed the autoconf entry, as we have this by default on our build machines

2) Added "--disable-tests" as suggested by dougt. It will be a while before the underlying issue is figured out, and checking in this change is easier than adding extra build steps to insert it dynamically. Once tests are working again, dougt/joduinn can post a patch to remove this.
Attachment #338409 - Flags: review?(nthomas)
Comment on attachment 338409 [details] [diff] [review]
further tweaks to mobile mozconfig

r-. Based on your comment you only really want the --disable-tests part of this. The mozilla2/config.py part isn't mentioned; assuming it's stray. Ben removed the autoconf stuff when he landed attachment 337910 [details] [diff] [review].
Attachment #338409 - Flags: review?(nthomas) → review-
changeset:   346:5fd64beabab4
tag:         tip
user:        Nick Thomas <nrthomas@gmail.com>
date:        Sat Sep 13 16:43:35 2008 +1200
summary:     Bug 430200,  Creating Nightly and "on change" ARM builds for Fennec - disable tests, r=joduinn, r=me
Attachment #338427 - Flags: review+
This is running on staging-master, and seems fine. If this review goes ok, I'll do a similar patch for production-master.
Attachment #330635 - Attachment is obsolete: true
Attachment #338609 - Flags: review?(bhearsum)
Attachment #330635 - Flags: review?(joduinn)
Comment on attachment 338609 [details] [diff] [review]
buildbot master changes to pickup mobile builds on staging-master

>diff -r 706acd2907ea mozilla2-staging/master.cfg
>--- a/mozilla2-staging/master.cfg	Thu Sep 11 11:32:41 2008 -0400
>+++ b/mozilla2-staging/master.cfg	Mon Sep 15 03:08:59 2008 -0700
>@@ -195,13 +195,23 @@
> 
> ##### Release automation
> 
>-import release_master
>-reload(release_master)
>+#import release_master
>+#reload(release_master)
>+#
>+#c['builders'].extend(release_master.builders)
>+#c['schedulers'].extend(release_master.schedulers)
>+#c['change_source'].extend(release_master.change_source)
>+#c['status'].extend(release_master.status)
> 

Please don't comment these out on the checked-in copy.

>+####### Mobile
>+
>+import mobile_master
>+reload(mobile_master)
>+
>+c['builders'].extend(mobile_master.c['builders'])
>+c['schedulers'].extend(mobile_master.c['schedulers'])
>+c['change_source'].extend(mobile_master.c['change_source'])
>+c['status'].extend(mobile_master.c['status'])


>+++ b/mozilla2-staging/mobile_master.py	Mon Sep 15 03:08:59 2008 -0700

Two overall things about this file:
* Some of the 'config' variables are unnecessary (HGURL, STAGE_SERVER, etc.). Just import the existing config file to get access to them:
import config as nightly_config
reload(nightly_config)

and then down below you can reference nightly_config.HGURL, nightly_config.STAGE_SERVER.

The others, (OBJDIR, CONFIG_SUBDIR, etc), should be moved to mobile_config.py, which can then be imported like this:
import mobile_config
reload(mobile_config)
from mobile_config import *

* Please remove the 'c' dict entirely. While we were on the phone I left it in for the sake of expedience. What we really want is this:
change_source = []
schedulers = []
...
...
schedulers.append(Scheduler(...))

Here's release_master.py: http://hg.mozilla.org/build/buildbot-configs/file/0afc5c39c3cb/mozilla2-staging/release_master.py and release_config.py: http://hg.mozilla.org/build/buildbot-configs/file/0afc5c39c3cb/mozilla2-staging/release_config.py

It's got examples of both of these things.

>+####### SCHEDULERS AND CHANGE SOURCES
>+c['change_source'] = []
>+c['schedulers'] = []
>+

I didn't change these while we were on the phone for the sake of expedience.

>+import buildbotcustom.changes.hgpoller
>+from buildbotcustom.changes.hgpoller import HgPoller

For buildbotcustom imports use the following pattern:
import buildbotcustom.changes.hgpoller
reload(buildbotcustom.changes.hgpoller)
from buildbotcustom.changes.hgpoller import HgPoller

Without these we will be unable to 'reconfig' the master to pick up HgPoller changes.


>+####### BUILDERS
>+OBJDIR = 'objdir'
>+CONFIG_SUBDIR = 'mozilla2/linux-arm'

For staging (this patch), this needs to be mozilla2-staging/linux-arm

>+CONFIG_REPO_URL = 'http://hg.mozilla.org/build/buildbot-configs'
>+SBOX_HOME = '/scratchbox/users/cltbld/home/cltbld/'

>+linux_lock = SlaveLock(name='linux_arm_lock', maxCount=1)
>+

We discovered that these don't do what we thought they did. We use max_builds=1 in BuildSlaves.py on the slave to achieve this. Delete all of the lock stuff.


>+
>+# check in the right mozconfig...
>+#linux_arm_dep_factory.addStep(ShellCommand(
>+#    command = ['/scratchbox/moz_scratchbox', '-p',
>+#    'cp', 'mozconfig-mobile', 'dep/mozilla-central/.mozconfig'],
>+#    description=['copying', 'mozconfig'],
>+#    descriptionDone=['copy', 'mozconfig'],
>+#    haltOnFailure=True
>+#))
>+

If we don't want to do this, don't check it in.


>+#linux_arm_dep_factory.addStep(ShellCommand(
>+#    command = ['/scratchbox/moz_scratchbox', '-p', '-d', 'dep/mozilla-central',
>+#               'autoconf2.13'],
>+#    description=['running', 'autoconf'],
>+#    descriptionDone=['run', 'autoconf'],
>+#    haltOnFailure=True
>+#))

Same here.

>+#linux_arm_dep_factory.addStep(ShellCommand(
>+#    command = ['ssh pavlov@stage.mozilla.org ' + \
>+#               'mkdir -p /home/ftp/pub/mobile/dep'],
>+#    description=['creating', 'upload dir'],
>+#    descriptionDone=['create', 'upload dir'],
>+#    haltOnFailure=False
>+#))

Same here.
>+linux_arm_dep_factory.addStep(ShellCommand(
>+    command = ['bash', '-c',
>+               'scp %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (SBOX_HOME, OBJDIR) + \
>+               '%s/build/mozilla-central/%s/xulrunner/xulrunner/*.deb ' % (SBOX_HOME, OBJDIR) + \
>+               '%s/build/mozilla-central/%s/mobile/mobile/*.deb ' % (SBOX_HOME, OBJDIR) + \
>+               '%s@%s:%s/tinderbox-builds/mozilla-central-linux-arm' % (STAGE_USERNAME, STAGE_SERVER, STAGE_BASE_PATH)],
>+    description=['uploading', 'build'],
>+    descriptionDone=['upload', 'build'],
>+    haltOnFailure=True
>+))

This is icky. It's OK for now, though, I think. When Ted finishes bug 454594 we can switch over to that for uploading.

What are we doing about nightly builds? I don't see any steps for them here.


r- for this, see comments inline.
Attachment #338609 - Flags: review?(bhearsum) → review-
Comment on attachment 338707 [details] [diff] [review]
buildbot master changes to pickup mobile builds on staging-master (v2!)

>+# most of the config is in an external file
>+import config
>+reload(config)
>+from config import *
>+import mobile_config
>+reload(mobile_config)
>+from mobile_config import *

Unless there's a specific reason to import * from both of these, let's not. If both files ever define a variable that's used in both, this could be the source of confusing bugs. Either:

import config as nightly_config
reload(nightly_config)
import mobile_config
reload(mobile_config)
from mobile_config import *

or

import config
reload(config)
from config import *
import mobile_config
reload(mobile_config)

is much preferred.

Other than that, this patch looks good.
Attachment #338707 - Flags: review?(bhearsum) → review-
Attachment #338729 - Flags: review?(bhearsum) → review+
Comment on attachment 338729 [details] [diff] [review]
[checked in] buildbot master changes to pickup mobile builds on staging-master (v3!)

ok, looks good.
Comment on attachment 338427 [details] [diff] [review]
[as checked in] Disable tests

Checked into mozilla2-staging/linux-arm/mozconfig too, 
  http://hg.mozilla.org/build/buildbot-configs/rev/6fec1e99b867
at John's request.
This disables tests in staging, and also checks in the mobile mozconfig file. 

This is in addition to attachment#338409 [details] [diff] [review] above, which disabled tests in mobile builds in production.
Attachment #338745 - Flags: review?(nthomas)
Attachment #338745 - Attachment is obsolete: true
Attachment #338745 - Flags: review?(nthomas)
Comment on attachment 338745 [details] [diff] [review]
Disable tests for mobile builds on staging

Already landed.
Attachment #338729 - Attachment description: buildbot master changes to pickup mobile builds on staging-master (v3!) → [checked in] buildbot master changes to pickup mobile builds on staging-master (v3!)
Comment on attachment 338729 [details] [diff] [review]
[checked in] buildbot master changes to pickup mobile builds on staging-master (v3!)

changeset:   353:48faa106d55c
I've updated the instructions at:
https://wiki.mozilla.org/Mobile/Build/cs2007q3 to add:

1) new settings to /etc/sysctl.conf
vm.vdso_enabled = 0
vm.mmap_min_addr = 4096
...as this was causing scratchbox on the staging machine (moz2-linux-slave04.b.m.o)
to throw errors during install.

2) Moving the scratchbox install to /builds/scratchbox to avoid filling the root
partition. Also added a symlink in / so scratchbox would find its bits where they
were expected and run correctly.
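On each slave, these steps boil down to roughly the following (a sketch, run as root; the wiki page is the authoritative version):

# Add the kernel settings scratchbox needs and load them immediately.
cat >> /etc/sysctl.conf <<'EOF'
vm.vdso_enabled = 0
vm.mmap_min_addr = 4096
EOF
sysctl -p

# Move the scratchbox install off the root partition, leaving a symlink
# behind so everything still resolves at /scratchbox.
mv /scratchbox /builds/scratchbox
ln -s /builds/scratchbox /scratchbox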
Quick update:

moz2-linux-slave04 processes the new entries in /etc/sysctl.conf without any problems:
$ sysctl -p
...
vm.vdso_enabled = 0
vm.mmap_min_addr = 4096

By contrast, moz2-linux-slave01 and moz2-linux-slave02 both throw an error when loading the same settings from /etc/sysctl.conf:
vm.vdso_enabled = 0
error: "vm.mmap_min_addr" is an unknown key

I note the kernels are different:
moz2-linux-slave01 is using kernel: 2.6.18-53.1.6.el5 
moz2-linux-slave02 is using kernel 2.6.18-53.1.21.el5
moz2-linux-slave04 is using kernel: 2.6.18-92.1.10.el5  
...and wonder if this sysctl problem is somehow caused by the different kernel versions.

Also, I just hit the exact same problem with moz2-linux-slave03.b.m.o, which has a 2.6.18-53.1.19.el5 kernel.

Same problem for:
moz2-linux-slave05.b.m.o with kernel 2.6.18-53.1.19.el5
moz2-linux-slave06.b.m.o with kernel 2.6.18-53.1.19.el5
Stuart/Vlad/Brad/Doug:

On each of the production slaves, I am able to wget at the linux prompt, but not within the scratchbox prompt. I've been stumped since I hit this on Friday. Any ideas?

$ cd /tmp
$ wget http://repository.maemo.org/stable/4.0.1/armel/maemo-sdk-rootstrap_4.0.1_armel.tgz
--00:20:17--  http://repository.maemo.org/stable/4.0.1/armel/maemo-sdk-rootstrap_4.0.1_armel.tgz
Resolving repository.maemo.org... 80.67.66.57
Connecting to repository.maemo.org|80.67.66.57|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 33181866 (32M) [application/x-tar]
...
$
$ /scratchbox/moz_scratchbox 
Welcome to Scratchbox, the cross-compilation toolkit!
Use 'sb-menu' to change your compilation target.
See /scratchbox/doc/ for documentation.
[sbox-CHINOOK-ARMEL-2007: ~] > cd /tmp
[sbox-CHINOOK-ARMEL-2007: /tmp] > wget http://repository.maemo.org/stable/4.0.1/armel/maemo-sdk-rootstrap_4.0.1_armel.tgz
--00:20:30--  http://repository.maemo.org/stable/4.0.1/armel/maemo-sdk-rootstrap_4.0.1_armel.tgz
           => `maemo-sdk-rootstrap_4.0.1_armel.tgz.3'
Resolving repository.maemo.org... failed: Host not found.
[sbox-CHINOOK-ARMEL-2007: /tmp] > 


Notes: 
1) all 5 production slaves and 1 staging slave were set up the same way. I note that the 1 staging slave *is* able to wget from within scratchbox. 

2) on each of the production slaves, I am able to wget a file at the linux shell prompt, but not wget the same file from within scratchbox. This leads me to think that the network/machine are ok, but that something is amiss with the scratchbox install.

3) the same scratchbox installs are unable to do "apt-get", for the exact same "host not found" problem.
Do mobile builds in production, just like is already done in staging. Changed slave(s), stage server, and user ID from staging to production.

Note: production linux-arm/mozconfig already landed above, so not included in this patch.
Attachment #339629 - Flags: review?(bhearsum)
Fixed this by doing:

$ cd /scratchbox/users/cltbld/targets/CHINOOK-ARMEL-2007/etc/
$ mv resolv.conf resolv.conf.orig
$ ln -s /etc/resolv.conf

It's worth noting that the staging (working) and production (broken) slaves were all using the same settings for resolv.conf, so I'm still not sure what the root cause of the different behavior is. However, at least I can now use "apt-get" within scratchbox on the production slaves.



Big, big thank you to gavin for figuring this out!!!
Comment on attachment 339629 [details] [diff] [review]
buildbot master changes to pickup mobile builds on production-master

>diff --git a/mozilla2/master.cfg b/mozilla2/master.cfg
>--- a/mozilla2/master.cfg
>+++ b/mozilla2/master.cfg
>@@ -24,17 +24,16 @@ from config import *
> 
> from buildbot.scheduler import Scheduler, Nightly, Periodic
> from buildbot.status.tinderbox import TinderboxMailNotifier
> 
> import buildbotcustom.changes.hgpoller
> import buildbotcustom.process.factory
> reload(buildbotcustom.changes.hgpoller)
> reload(buildbotcustom.process.factory)
>-

nit: why are you changing the whitespace here?

>+##### Release automation
>+
>+import release_master
>+reload(release_master)
>+
>+c['builders'].extend(release_master.builders)
>+c['schedulers'].extend(release_master.schedulers)
>+c['change_source'].extend(release_master.change_source)
>+c['status'].extend(release_master.status)
>+

There is no release automation on production, remove this section.



>diff --git a/mozilla2/mobile_master.py b/mozilla2/mobile_master.py

>+linux_arm_dep_factory.addStep(ShellCommand(
>+    command = ['bash', '-c',
>+               'scp -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
>+               '%s/build/mozilla-central/%s/xulrunner/xulrunner/*.deb ' % (mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
>+               '%s/build/mozilla-central/%s/mobile/mobile/*.deb ' % (mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
>+               '%s@%s:%s/tinderbox-builds/mozilla-central-linux-arm' % (STAGE_USERNAME, STAGE_SERVER, STAGE_BASE_PATH)],
>+    description=['uploading', 'build'],
>+    descriptionDone=['upload', 'build'],
>+    haltOnFailure=True
>+))

Make sure this directory exists on stage.m.o before we run any builds, otherwise this upload will fail.

>+
>+linux_arm_dep_builder = {
>+    'name': 'mobile-linux-arm-dep',
>+    'slavenames': ['moz2-linux-slave1', 'moz2-linux-slave02', 'moz2-linux-slave03', 'moz2-linux-slave05', 'moz2-linux-slave06'],
>+    'builddir': 'mobile-linux-arm-dep',
>+    'factory': linux_arm_dep_factory,
>+    'category': 'mobile'
>+}
>+builders.append(linux_arm_dep_builder)
>+



>diff --git a/mozilla2/release_config.py b/mozilla2/release_config.py
>new file mode 100644
>--- /dev/null
>+++ b/mozilla2/release_config.py
>diff --git a/mozilla2/release_master.py b/mozilla2/release_master.py
>new file mode 100644
>--- /dev/null
>+++ b/mozilla2/release_master.py

same thing here w.r.t. release automation



remove the unnecessary whitespace changes and all references to release automation, and then this is OK. I really don't like relying on the moz_scratchbox script, but...meh.
Attachment #339629 - Flags: review?(bhearsum) → review-
(In reply to comment #67)
> (From update of attachment 339629 [details] [diff] [review])
> >diff --git a/mozilla2/master.cfg b/mozilla2/master.cfg
> >--- a/mozilla2/master.cfg
> >+++ b/mozilla2/master.cfg
> >@@ -24,17 +24,16 @@ from config import *
> > 
> > from buildbot.scheduler import Scheduler, Nightly, Periodic
> > from buildbot.status.tinderbox import TinderboxMailNotifier
> > 
> > import buildbotcustom.changes.hgpoller
> > import buildbotcustom.process.factory
> > reload(buildbotcustom.changes.hgpoller)
> > reload(buildbotcustom.process.factory)
> >-
> nit: why are you changing the whitespace here?
I thought it was more readable, but that's just personal style. Reverted.


> >+##### Release automation
> >+
> >+import release_master
> >+reload(release_master)
> >+
> >+c['builders'].extend(release_master.builders)
> >+c['schedulers'].extend(release_master.schedulers)
> >+c['change_source'].extend(release_master.change_source)
> >+c['status'].extend(release_master.status)
> >+
> 
> There is no release automation on production, remove this section.
Thought it was good to have this match what's in staging. Removed.


> >diff --git a/mozilla2/mobile_master.py b/mozilla2/mobile_master.py
> >+linux_arm_dep_factory.addStep(ShellCommand(
> >+    command = ['bash', '-c',
> >+               'scp -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> >+               '%s/build/mozilla-central/%s/xulrunner/xulrunner/*.deb ' % (mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> >+               '%s/build/mozilla-central/%s/mobile/mobile/*.deb ' % (mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> >+               '%s@%s:%s/tinderbox-builds/mozilla-central-linux-arm' % (STAGE_USERNAME, STAGE_SERVER, STAGE_BASE_PATH)],
> >+    description=['uploading', 'build'],
> >+    descriptionDone=['upload', 'build'],
> >+    haltOnFailure=True
> >+))
> Make sure this directory exists on stage.m.o before we run any builds,
> otherwise this upload will fail.
Good catch, the directory wasn't there. Now created.


> >+
> >+linux_arm_dep_builder = {
> >+    'name': 'mobile-linux-arm-dep',
> >+    'slavenames': ['moz2-linux-slave1', 'moz2-linux-slave02', 'moz2-linux-slave03', 'moz2-linux-slave05', 'moz2-linux-slave06'],
> >+    'builddir': 'mobile-linux-arm-dep',
> >+    'factory': linux_arm_dep_factory,
> >+    'category': 'mobile'
> >+}
> >+builders.append(linux_arm_dep_builder)
> >+
> 
> 
> 
> >diff --git a/mozilla2/release_config.py b/mozilla2/release_config.py
> >new file mode 100644
> >--- /dev/null
> >+++ b/mozilla2/release_config.py
> >diff --git a/mozilla2/release_master.py b/mozilla2/release_master.py
> >new file mode 100644
> >--- /dev/null
> >+++ b/mozilla2/release_master.py
> 
> same thing here w.r.t. release automation
Yep, removed.

> remove the unnecessary whitespace changes, all references to release automation
> and then this is OK. I really don't like relying on the moz_scratchbox script,
> but...meh.
We have to use *something* to fix all the silly Python path problems with scratchbox; this was the easiest way I could find. It's installed on each slave. Having moz_scratchbox as a bugzilla patch is totally silly, agreed. I'd be happy to check that in someplace too, if you like.
Comment on attachment 339629 [details] [diff] [review]
buildbot master changes to pickup mobile builds on production-master

replaced by later attachment#339807 [details] [diff] [review]
Attachment #339629 - Attachment is obsolete: true
(In reply to comment #66)
> Fixed this by doing:
> 
> $ cd /scratchbox/users/cltbld/targets/CHINOOK-ARMEL-2007/etc/
> $ mv resolv.conf resolv.conf.orig
> $ ln -s /etc/resolv.conf
> 
> Its worth noting that the staging (working) and production (broken) slaves were
> all using the same settings for resolv.conf, so still not sure what is the root
> cause of the different behavior here. However, at least I can now use "apt-get"
> within scratchbox on the production slaves.
> Big, big thank you to gavin for figuring this out!!!

Rats... not out of the woods yet. Even with this change in place, I'm hitting:
scratchbox> hg clone http://hg.mozilla.org/mozilla-central mozilla-central
abort: error: Temporary failure in name resolution
scratchbox>

Let's try a kernel update, like was done on slave04, and see if that does the trick.
Attachment #339807 - Flags: review?(bhearsum) → review+
Can someone knowledgeable comment on where we are getting builds? The comments above are a bit opaque and it came up at the Mobile meeting today.
> Lets try kernel update, like was done on slave04, and see if that does the
> trick.
The kernel update did not fix it, but Doug spotted the problem. 

Using a symlink to /etc/resolv.conf does not work *inside* scratchbox, as
scratchbox silently got confused between the scratchbox version of
/etc/resolv.conf and the host OS /etc/resolv.conf. However, copying
resolv.conf instead of symlinking worked nicely. The new instructions are:

$ cd /scratchbox/users/cltbld/targets/CHINOOK-ARMEL-2007/etc/
$ mv resolv.conf resolv.conf.orig
$ cp /etc/resolv.conf .

I've updated moz2-linux-slave1/02/03/05/06, and all are now correctly
resolving HTTP connections.

Thanks Doug!
Comment on attachment 339807 [details] [diff] [review]
[checked in] buildbot master changes to pickup mobile builds on production-master (v2)

changeset:   371:f737bb639c0d
Attachment #339807 - Attachment description: buildbot master changes to pickup mobile builds on production-master (v2) → [checked in] buildbot master changes to pickup mobile builds on production-master (v2)
1) Dated subdirectories:
I haven't yet got the MozillaStageUpload custom class working with this mobile build. This means that we don't have dated directories yet. Instead, for now, each new build is placed over the previous build in the same directory:
ftp://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mobile-browser-linux-arm/

Aside: Ted has work in progress in another bug to generate that buildid/timestamp-based directory name and create the directory, which should work for us here also. 
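Until that lands, dated uploads could in principle be approximated with a couple of extra steps like these (a sketch; the directory naming scheme is hypothetical, and the user/host/paths are the ones used earlier in this bug):

# Create a timestamped directory on stage and upload into it instead of
# overwriting; the naming scheme here is made up for illustration.
DATED_DIR="$(date -u +%Y-%m-%d-%H)-mozilla-central"
ssh mobilebld@stage.mozilla.org "mkdir -p /home/ftp/pub/mobile/nightly/$DATED_DIR"
scp objdir/mobile/dist/*.tar.bz2 "mobilebld@stage.mozilla.org:/home/ftp/pub/mobile/nightly/$DATED_DIR/"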



2) Results on Tinderbox:
For the staging environment, we post build results on MozillaTest tinderbox:
http://tinderbox.mozilla.org/showbuilds.cgi?tree=MozillaTest

For the production environment, I think we should post build results onto the main Firefox tinderbox:
http://tinderbox.mozilla.org/showbuilds.cgi?tree=Firefox

Pro: any m-c checkins which break mobile will close the tree.
Con: mobile developers get locked down if m-c developers are locked down.
1) This fixes a bug in how HgPoller was being used. We now correctly detect checkins to trigger new builds.

2) Changed output dirname to mobile-browser-linux-arm as requested.

3) This fixes reporting of build results from the buildbot master to the tinderbox server page. For staging, results are posted to the "MozillaTest" page. For production, results will go to the "Firefox" page.


This is currently running on staging-master.m.o. It can run on production-master.m.o once the patch is r+'d and landed.
Attachment #341261 - Flags: review?(bhearsum)
Comment on attachment 341261 [details] [diff] [review]
fix staging and production buildbot master config files

>diff --git a/mozilla2-staging/mobile_master.py b/mozilla2-staging/mobile_master.py
> change_source.append(HgPoller(
>     hgURL=config.HGURL,
>     branch='mozilla-central',
>-    pushlogUrlOverride='http://hg.mozilla.org/mozilla-central/index.cgi/pushlog',
>+    pushlogUrlOverride='http://hg.mozilla.org//mozilla-central/index.cgi/pushlog',

If this change truly fixes the problem here, HgPoller has a bug.

> schedulers.append(Scheduler(
>-    name="mobile dep scheduler",
>-    branch="HEAD",
>+    name="mobile mozilla-central dep scheduler",
>+    branch="mozilla-central",
>     treeStableTimer=3*60,
>-    builderNames=["mobile-linux-arm-dep"]
>+    builderNames=["mobile-linux-arm-dep"],
>+    fileIsImportant=lambda c: isHgPollerTriggered(c, config.HGURL)
>+))
>+
>+schedulers.append(Scheduler(
>+    name="mobile mobile-browser dep scheduler",
>+    branch="mobile-browser",
>+    treeStableTimer=3*60,
>+    builderNames=["mobile-linux-arm-dep"],
>+    fileIsImportant=lambda c: isHgPollerTriggered(c, config.HGURL)
>+))
>+
>+status.append(TinderboxMailNotifier(
>+    fromaddr="mozilla2.buildbot@build.mozilla.org",
>+    tree='MozillaTest',
>+    extraRecipients=["tinderbox-daemon@tinderbox.mozilla.org", "joduinn@mozilla.com"],

You will get mail every time a build starts and ends, regardless of pass or fail - is this what you want?

> linux_arm_dep_factory.addStep(ShellCommand(
>     command = ['bash', '-c',
>-               'scp -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
>+               'scp -p -v -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \

Please remove the '-v' flag.


>diff --git a/mozilla2/mobile_master.py b/mozilla2/mobile_master.py

> linux_arm_dep_factory.addStep(ShellCommand(
>     command = ['bash', '-c',
>-               'scp -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
>+               'scp -p -v -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \

Same thing here.

I still don't think that you need to worry about dated directories until Ted's stuff lands (it's not far off). You could probably do something like:
WithProperties('scp ... ... .../tinderbox-builds/mobile-browser-linux-arm/%(buildnumber)s')

At runtime, the %(buildnumber)s will get translated into the "build number" of the build. This number is unique for each build.

r- because of the 'scp -v' stuff - please remove it from both configs. And if you don't want to get 2 e-mails for every single mobile build that happens, remove yourself from the TinderboxMailNotifier.
Attachment #341261 - Flags: review?(bhearsum) → review-
(In reply to comment #77)
> (From update of attachment 341261 [details] [diff] [review])
> >diff --git a/mozilla2-staging/mobile_master.py b/mozilla2-staging/mobile_master.py
> > change_source.append(HgPoller(
> >     hgURL=config.HGURL,
> >     branch='mozilla-central',
> >-    pushlogUrlOverride='http://hg.mozilla.org/mozilla-central/index.cgi/pushlog',
> >+    pushlogUrlOverride='http://hg.mozilla.org//mozilla-central/index.cgi/pushlog',
> 
> If this change truly fixes the problem here, HgPoller has a bug.

That was my worry also while debugging last night. It turns out this didn't fix the HgPoller problem. While debugging, I made this change so the (non-working) URLs would be consistent with the other (working) URLs on http://staging-master.build.mozilla.org:8010/changes. I guess I could undo this, it's an easy change, but it feels better to have all the ChangeSources be consistent, and I didn't want to get distracted cleaning up the others while in the middle of this bug.


> >+status.append(TinderboxMailNotifier(
> >+    fromaddr="mozilla2.buildbot@build.mozilla.org",
> >+    tree='MozillaTest',
> >+    extraRecipients=["tinderbox-daemon@tinderbox.mozilla.org", "joduinn@mozilla.com"],
> 
> You will get mail every time a build start and ends, regardless of pass or fail
> - is this what you want?
Actually, it was what I wanted, but I'll remove it from this patch. I'll make this change locally on staging-master when I'm ready to do my other email-experiment project.



> > linux_arm_dep_factory.addStep(ShellCommand(
> >     command = ['bash', '-c',
> >-               'scp -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> >+               'scp -p -v -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> 
> Please remove the '-v' flag.
done.


> > linux_arm_dep_factory.addStep(ShellCommand(
> >     command = ['bash', '-c',
> >-               'scp -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> >+               'scp -p -v -oIdentityFile=~/.ssh/%s %s/build/mozilla-central/%s/mobile/dist/*.tar.bz2 ' % (STAGE_SSH_KEY, mobile_config.SBOX_HOME, mobile_config.OBJDIR) + \
> 
> Same thing here.
done.


> I still don't think that you need to worry about dated directories until Ted's
> stuff lands (it's not far off). You could probably do something like:
> WithProperties('scp ... ...
> .../tinderbox-builds/mobile-browser-linux-arm/%(buildnumber)s')
> 
> At runtime, the %(buildnumber)s will get translated into the "build number" of
> the build. This number is unique for each build.
Tried that last night, but must have been doing something wrong. Let's put this live for now, and revisit when Ted's work is available. 

> r- because of the 'scp -v' stuff - please remove it from both configs. And if
> you don't want to get 2 e-mails for every single mobile build that happens,
> remove yourself from the TinderboxMailNotifier.

Thanks for the speedy review - hopefully this time is ok.
Attachment #341261 - Attachment is obsolete: true
Attachment #341276 - Flags: review?(bhearsum)
Comment on attachment 341276 [details] [diff] [review]
fix staging and production buildbot master config files (v2)

changeset:   379:af1646bb98a3
Attachment #341276 - Flags: review?(bhearsum) → review+
I've updated production-master with the new config and done a 'reconfig'.
Brad, Stuart:

(In reply to comment #76)
> Created an attachment (id=341261) [details]
> 3) This fixes reporting of build results from buildbot master to tinderbox
> server page. For staging, results are posted to the "MozillaTest" page. For
> production, results would be to the "Firefox" page.

Please clarify what tinderbox page you want the production results displayed on. "Firefox" or "Mobile"? 

Stuart, your email just now asked to put the builds on the "Mobile" tinderbox. However, from discussions at the summit, I thought Brad wanted the builds to show up on the "Firefox" tinderbox, so that non-mobile developers who check in a fennec-breaking change see and revert that change immediately. (Brad, if I misunderstood, please correct me.)

Either way is a trivial change for me, just let me know which you'd prefer.
Firefox tinderbox is fine -- can you delete the mobile tinderbox tree?
John, no misunderstanding.  We'd like these results on the main Firefox tinderbox page.
Why should Fennec builds be on the main Firefox tree? Fennec builds from mobile-browser, no? It seems like it'll get very confusing to have something on the Firefox tree that doesn't build ... Firefox. Plus, there are already way too many machines on the Firefox tree. :(
Fennec builds from mozilla-central and mobile-browser. It is also the code name for Mobile Firefox, so I think it's appropriate for it to be on the Firefox tree.

Also, mobile-browser has no C/C++ code, so most likely any bustages will be bustages in mozilla-central.
Brad, Vlad: After discussions in the Firefox meeting last week, Stuart and I agreed to put these builds on the Mobile tinderbox page for now, not the Firefox tinderbox page, until discussions with the rest of the Firefox team are resolved.
Attachment #343166 - Flags: review?(bhearsum)
Attachment #343166 - Flags: review?(bhearsum) → review+
Comment on attachment 343166 [details] [diff] [review]
[checked in] display production mobile builds on "Mobile" tinderbox page, not on "MozillaTest"

changeset:   423:a892e71f9495
Attachment #343166 - Attachment description: display production mobile builds on "Mobile" tinderbox page, not on "MozillaTest" → [checked in] display production mobile builds on "Mobile" tinderbox page, not on "MozillaTest"
Comment on attachment 343166 [details] [diff] [review]
[checked in] display production mobile builds on "Mobile" tinderbox page, not on "MozillaTest"

master has been updated and reconfig'ed. new builds should be going to the Mobile tinderbox.
Comment on attachment 345480 [details] [diff] [review]
[checked in] make cloning steps not turn the whole build red when they "fail"

changeset:   482:cb6db3dbdcd9

and into use on the staging master. We need to untangle an interaction with buildbotcustom which gave           
  File "/builds/buildbot/moz2-master/master.cfg", line 204, in <module>
   reload(release_master)
  File "/builds/buildbot/moz2-master/release_master.py", line 159, in <module>
   ausHost=nightly_config.AUS2_HOST
  <type 'exceptions.TypeError'>: __init__() got an unexpected keyword argument 'stageSshKey'
on production-master
Attachment #345480 - Attachment description: make cloning steps not turn the whole build red when they "fail" → [checked in] make cloning steps not turn the whole build red when they "fail"
Attachment #345480 - Flags: review?(nthomas) → review+
I checked what changes there were in buildbotcustom since the last checkout on production-master; there are just the release-related changes in process/factory.py between rev 1.22 and 1.26. So I went ahead and did a cvs up there, and put configs/ back up to the tip (changeset 482; somehow the patch here ended up as a local change, so it's been in use since comment #91). It reconfig'd fine then.
What is left to get this bug resolved fixed? We're still waiting on buildid directories, among other things.
afaik, the only thing remaining is the buildID subdirs.
No longer blocks: 455755
From discussion with Christian, we'd like to remove 
http://ftp.mozilla.org/pub/mozilla.org/mobile/nightly/

(which contains a few early mobile builds in bz2 format) and replace it with a symlink to where we have been producing the nightly builds: 

http://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mobile-browser-linux-arm/

I've sent a post to dev.platforms.mobile about this also.
(In reply to comment #95)
> From discussion with Christian, we'd like to remove 
> http://ftp.mozilla.org/pub/mozilla.org/mobile/nightly/
  > 
> (which contains a few early mobile builds in bz2 format) and replace it with a
> symlink to where we have been producing the nightly builds: 
> 
> http://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mobile-browser-linux-arm/
> 
> I've sent a post to dev.platforms.mobile about this also.
Done. 


> You can also remove http://ftp.mozilla.org/pub/mozilla.org/mobile/dep/
Done.


Most of the work needed here has long been done. For the sake of clarity, I'm now closing this long, long bug. I've filed bug#469294 and bug#469290 to track the two remaining separate issues. If there is anything else that I missed, please ping me.
Blocks: 469294, 469290
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering