Closed Bug 790339 Opened 13 years ago Closed 13 years ago

configure new linux foopies for use with new pandas

Categories

(Infrastructure & Operations Graveyard :: CIDuty, task)

task
Not set
normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: hwine, Assigned: kmoir)

References

Details

(Whiteboard: [reit-panda])

Once the 5 new Linux foopies are available (dependency bug to be filed), configure them for the remaining new panda chassis listed in bug 777359. The plan is not to move the "panda chassis 2" foopies (bug 789516) until after the smoke test work is done. Foopies should be assigned panda chassis on a 1:1 basis.
Depends on: 790340
Assignee: nobody → kmoir
I just checked; these machines are up and puppetized. I think bug 780233 has to be resolved, and the pandas have to have the latest images on them and be available, before I can configure the new foopies to connect to them.
Depends on: 780233
No longer depends on: 780233
Depends on: 789516
jmaher will report back in bug 789516 on how the smoke tests for the pandas are going. There is no ETA yet for when we will be ready to kick this off on staging with these foopies.
changing dependency - the software load is done; we're only waiting on the final hookup to the actual pandas (bug 789516)
Blocks: 778733
No longer blocks: 789497
I've configured foopy 33 (0001-0012) and foopy 34 (013-036) to accept the new pandas since Armen mentioned in an email that we actually only have 36 devices (others are on loan for testing). Of course, additional work is required once the new pandas are actually imaged and available.
Sorry, that should have read foopy 33 (0001-0024) and foopy 34 (0025-0036).
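As a reading aid, here is a minimal Python sketch of the foopy-to-panda assignment described in the two comments above. The names, ranges, and structure are purely illustrative; the production mapping lived elsewhere (later comments mention buildfarm/mobile), not in code like this.

# Hypothetical sketch: render the foopy -> panda assignments described
# above (foopy 33 gets panda-0001..0024, foopy 34 gets panda-0025..0036).
# Names and ranges are illustrative, not the production configuration.

ASSIGNMENTS = {
    "foopy33": range(1, 25),    # panda-0001 .. panda-0024
    "foopy34": range(25, 37),   # panda-0025 .. panda-0036
}

def panda_names(numbers):
    """Render panda numbers in the zero-padded form used in this bug."""
    return ["panda-%04d" % n for n in numbers]

for foopy, nums in sorted(ASSIGNMENTS.items()):
    names = panda_names(nums)
    print("%s: %s .. %s (%d pandas)" % (foopy, names[0], names[-1], len(names)))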
Blocks: 799698
IIUC, the pandas have been re-imaged and most of them are ready to be used. What is the ETA for having jobs running? I know I will be asked in the Mobile Testing meeting.
I configured foopies 33 (chassis 2, panda-0022..0033), 34 (chassis 4, panda-0046..0057), and 35 (chassis 5, panda-0058..0069) a few days ago. I'm now working out issues with running tests on the pandas, patching buildbot, etc.
Whiteboard: [reit-panda]
The following have also been kickstarted and are awaiting configuration:
foopy39.p1.releng.scl1.mozilla.com
foopy40.p1.releng.scl1.mozilla.com
foopy41.p1.releng.scl1.mozilla.com
foopy42.p1.releng.scl1.mozilla.com
foopy43.p1.releng.scl1.mozilla.com
foopy44.p1.releng.scl1.mozilla.com
foopy45.p1.releng.scl1.mozilla.com
foopy46.p2.releng.scl1.mozilla.com
foopy47.p2.releng.scl1.mozilla.com
foopy48.p2.releng.scl1.mozilla.com
foopy49.p2.releng.scl1.mozilla.com
foopy50.p2.releng.scl1.mozilla.com
foopy51.p2.releng.scl1.mozilla.com
foopy52.p2.releng.scl1.mozilla.com
foopy67.p5.releng.scl1.mozilla.com
foopy68.p5.releng.scl1.mozilla.com
foopy69.p5.releng.scl1.mozilla.com
foopy70.p5.releng.scl1.mozilla.com
foopy71.p5.releng.scl1.mozilla.com
foopy72.p5.releng.scl1.mozilla.com
foopy73.p5.releng.scl1.mozilla.com
(These are for panda pods 1, 2, and 5; pods 3 and 4 will likely follow today.)
and now add:
foopy53.p3.releng.scl1.mozilla.com
foopy54.p3.releng.scl1.mozilla.com
foopy55.p3.releng.scl1.mozilla.com
foopy57.p3.releng.scl1.mozilla.com
foopy58.p3.releng.scl1.mozilla.com
foopy59.p3.releng.scl1.mozilla.com
foopy60.p4.releng.scl1.mozilla.com
foopy61.p4.releng.scl1.mozilla.com
foopy62.p4.releng.scl1.mozilla.com
foopy63.p4.releng.scl1.mozilla.com
foopy64.p4.releng.scl1.mozilla.com
foopy65.p4.releng.scl1.mozilla.com
foopy66.p4.releng.scl1.mozilla.com
Bug 806662 tracks a disk issue with foopy56, so that one isn't in service yet.
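Before configuring a freshly kickstarted host, a quick reachability check can save time. Here is a minimal sketch, assuming SSH on port 22 is the right liveness signal; the hostnames follow the pattern in the lists above, but the check itself is illustrative and not part of the actual process.

# Hypothetical sketch: verify that freshly kickstarted foopies answer on
# SSH (port 22) before attempting configuration.
import socket

HOSTS = (
    ["foopy%d.p1.releng.scl1.mozilla.com" % n for n in range(39, 46)]
    + ["foopy%d.p2.releng.scl1.mozilla.com" % n for n in range(46, 53)]
    + ["foopy%d.p5.releng.scl1.mozilla.com" % n for n in range(67, 74)]
)

def ssh_reachable(host, timeout=5):
    """Return True if a TCP connection to port 22 on host succeeds."""
    try:
        with socket.create_connection((host, 22), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    print("%-40s %s" % (host, "up" if ssh_reachable(host) else "UNREACHABLE"))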
Depends on: 800079
foopies 35-93 are also configured; I will post the script I used later so it can be included in buildfarm/mobile
foopies 39 to 45 have been set up with setup_foopy.py (which currently only supports setup of B2G pandas).
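The interface of setup_foopy.py isn't documented in this bug. Purely as an illustration, a driver loop for running a per-host setup script across a numeric range of foopies might look like the sketch below; the script's flags and the pod suffix are assumptions, not the actual interface from buildfarm/mobile.

# Hypothetical driver: invoke a per-host setup script (such as the
# setup_foopy.py mentioned above) over a numeric range of foopies.
# The invocation, arguments, and pod suffix are assumptions.
import subprocess
import sys

def setup_range(first, last, pod="p1"):
    """Run the setup script for foopy<first>..foopy<last>; collect failures."""
    failures = []
    for n in range(first, last + 1):
        host = "foopy%d.%s.releng.scl1.mozilla.com" % (n, pod)
        # Assumed invocation; the real script's flags may differ.
        result = subprocess.run(
            [sys.executable, "setup_foopy.py", "--host", host],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            failures.append((host, result.stderr.strip()))
    return failures

if __name__ == "__main__":
    for host, err in setup_range(39, 45):
        print("failed: %s: %s" % (host, err))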
What is left in here? I'm trying to clean up the various tracking bugs.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering
Component: Platform Support → Buildduty
Product: Release Engineering → Infrastructure & Operations
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard