Closed Bug 505232 Opened 10 years ago Closed 10 years ago
Install Puppet client on the Linux x86-64 tinderbox
To help prevent things like bug 505215 in the future.
I probably won't get to this soon.
Assignee: bhearsum → nobody
Status: ASSIGNED → NEW
Component: Release Engineering → Release Engineering: Future
Whiteboard: [buildslaves][puppet] → [buildslaves][puppet][linux64]
This platform shares a lot of the 32-bit configs, but notably the following are different:
* grub.conf
* sysctl.conf
* no scratchbox stuff
* binary RPMs (like debug packages)
* some devtools packages
* nagios packages, though this part of the config is shared

This is mostly cshields' and Armen's work, which I've merged together with the latest in the master repository.
Mass move of bugs from Release Engineering:Future -> Release Engineering. See http://coop.deadsquid.com/2010/02/kiss-the-future-goodbye/ for more details.
Component: Release Engineering: Future → Release Engineering
Priority: -- → P3
Comment on attachment 426739 [details] [diff] [review]
add x86_64 bit as a supported platform in puppet configs

I attached a newer version of this patch to bug 519074.
(In reply to comment #4)
> (From update of attachment 426739 [details] [diff] [review])
> I attached a newer version of this patch onto bug 519074.

Can we dup this bug?
Armen, let's keep this bug open for tracking updating the existing slaves.
Comment on attachment 426739 [details] [diff] [review]
add x86_64 bit as a supported platform in puppet configs

Review of this was done in bug 519074.
Attachment #426739 - Attachment is obsolete: true
Armen is doing this
Assignee: bhearsum → armenzg
Originally read from https://wiki.mozilla.org/ReferencePlatforms/Linux-CentOS-5.0#Install_Puppet, but a few steps change (like the JDK version). Check below.

As root:
rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm
cd /tools
mv jdk1.5.0_15 jdk-1.5.0_15
rm -f jdk
ln -s jdk-1.5.0_15 jdk
echo "10.2.71.136:/export/buildlogs/puppet-files /N nfs ro 0 0" >> /etc/fstab
mkdir /N
mount -a
# manual interaction needed
yum install ruby facter puppet ruby-shadow augeas-libs ruby-augeas
puppetd --test --server production-puppet.build.mozilla.org

On the master, as root (accept the certificate):
puppetca --sign moz2-linux64-slave10.build.mozilla.org

Back on the slave (the certificate is now signed):
puppetd --test --server production-puppet.build.mozilla.org

On the slave: when you are ready to leave the slave in production, edit /etc/sysconfig/puppet and set the value of PUPPET_SERVER to production-puppet.build.mozilla.org.
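One caveat with the `echo ... >> /etc/fstab` step: it appends unconditionally, so re-running the setup duplicates the mount entry. A hedged sketch of an idempotent variant follows; the mktemp scratch file is an assumption standing in for the live /etc/fstab so the sketch is safe to run anywhere.

```shell
# Sketch only: FSTAB points at a scratch file here, not the live /etc/fstab.
FSTAB=$(mktemp)
ENTRY="10.2.71.136:/export/buildlogs/puppet-files /N nfs ro 0 0"

# Append the NFS mount entry only if an identical line is not already present.
append_fstab_entry() {
    grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"
}

append_fstab_entry
append_fstab_entry   # second run is a no-op, so the entry appears exactly once
```

Running the function twice leaves a single copy of the line, which keeps repeated slave-setup passes from piling up duplicate mounts.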
This step is also necessary:
chkconfig --level 2345 puppet on

The tac generator created a new buildbot tac file. I will use the one I backed up:
cd /builds/slave/
mv buildbot.tac.off buildbot.tac

I modified /etc/sysconfig/puppet. The slave is back in the production pool.
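The tac restore above can be sketched with a guard so it only fires when the backup actually exists; the mktemp directory and the touch lines below are scratch-setup assumptions standing in for /builds/slave/ and its files.

```shell
# Scratch stand-in for /builds/slave/ so the sketch is safe to run anywhere.
SLAVE_DIR=$(mktemp -d)
touch "$SLAVE_DIR/buildbot.tac.off"   # stands in for the tac file backed up earlier
touch "$SLAVE_DIR/buildbot.tac"       # stands in for the tac the generator created

# Restore the backup over the generated file, but only if the backup exists.
if [ -f "$SLAVE_DIR/buildbot.tac.off" ]; then
    mv -f "$SLAVE_DIR/buildbot.tac.off" "$SLAVE_DIR/buildbot.tac"
fi
```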
This line has probably put the slaves I was working on in bad shape:
echo "10.2.71.136:/export/buildlogs/puppet-files /N nfs ro 0 0" >> /etc/fstab

Slaves 2, 7, 8 and 11 are not connected to the masters. Slaves 7 and 8 are completely down. They have not synced up since their certificates were not accepted (DNS outage). Slaves 2 and 11 did not sync up properly.
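To back the problematic entry out of /etc/fstab on the affected slaves, something like the following sed delete would work. This is a hedged sketch: it runs against a scratch copy built with mktemp rather than the live file, and the sample ext3 root line is invented for illustration.

```shell
# Build a scratch fstab with one unrelated line and the problematic NFS line.
FSTAB=$(mktemp)
printf '%s\n' \
    '/dev/sda1 / ext3 defaults 1 1' \
    '10.2.71.136:/export/buildlogs/puppet-files /N nfs ro 0 0' > "$FSTAB"

# Delete every line mentioning the puppet-files export, keeping a .bak backup.
sed -i.bak '\|/export/buildlogs/puppet-files|d' "$FSTAB"
```

The `\|...|` address uses `|` as the pattern delimiter so the slashes in the path don't need escaping, and `-i.bak` leaves the original file alongside in case the edit needs to be reverted.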
(In reply to comment #11)
> Created an attachment (id=428987) [details]
> logs when trying to sync up with production-puppet
>
> This line has probably put the slaves I was working on in bad shape.
> echo "10.2.71.136:/export/buildlogs/puppet-files /N nfs ro 0 0" >> /etc/fstab
>
> Slaves 2, 7, 8 and 11 are not connected to the masters.
> Slaves 7 and 8 are completely down. They have not synced up since their
> certificate were not accepted (dns outage)
> Slaves 2 and 11 did not sync up properly.

7 and 8 were powered off. I turned them back on and they're back in the production pool for now. 10 has been fully completed and is back in the pool. 02 and 11 are purposely disconnected and will be dealt with tomorrow.
(In reply to comment #12)
> 02 and 11 are purposely disconnected and will be dealt with tomorrow.

I looked at the errors on this (some internal Puppet exceptions), and it looks like they lost the connection to the Puppet server in the middle of a sync. Simply running puppetd again fixes it. They're synced up and back in the pool.
Status: ASSIGNED → NEW
Priority: P1 → P3
All Linux 64 slaves now have Puppet installed on them and are back in the production pool. Big thanks to bhearsum for helping me fix/debug all this.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering