Closed Bug 889967 Opened 11 years ago Closed 9 years ago

verify script called by mozharness for android devices doesn't reboot via mozpool


(Release Engineering :: Applications: MozharnessCore, defect)






(Reporter: kmoir, Assigned: kmoir)




(2 files)

The mozharness scripts for android make a call to the sut_tools/ script, which in turn calls sut reboot. This could lead to problems where the reboot by mozpool could occur after the sut reboot, because the devices are in the ready state and managed by mozpool. The end result is that we could have unexpected reboots of devices.
The second sentence should have read:

"This could lead to problems where a device rebooted by sut could be identified as a device down by mozpool and spuriously rebooted."
Blocks: 829211
We should probably look into making verify use mozpool's reboot calls, but only for pandas.

Any thoughts?
(In reply to Armen Zambrano G. [:armenzg] (Release Engineering) from comment #2)
> We probably should look into making verify to call mozpool's reboot calls
> only for pandas.
> Any thoughts?

This is exactly the point of this bug. We want to use the API for reboots for any device managed by mozpool.

It's not the case yet, but we can certainly make it so. One blocker to doing this unconditionally is the fact that a locked_out device won't reboot if we ask mozpool to do it (which means talos jobs won't reboot at present).

I also think this change should be rolled out separately from a switch-talos-to-mozharness deploy.
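The routing described above can be sketched as follows. Every name here (choose_reboot_method, the mozpool_devices set, the locked_out flag) is illustrative rather than actual mozharness code; it only captures the decision in this comment: mozpool-managed devices go through the mozpool API, except that locked_out devices have to fall back to SUT because mozpool refuses to reboot them.

```python
# Illustrative sketch, not real mozharness code: decide which mechanism
# should reboot a given device.
def choose_reboot_method(device_name, mozpool_devices, locked_out=False):
    """Return "mozpool" or "sut" for the given device.

    Devices managed by mozpool should be rebooted through its API, but
    mozpool will not reboot a locked_out device (the talos caveat above),
    so those must still use the SUT reboot path.
    """
    if device_name in mozpool_devices:
        if locked_out:
            # mozpool refuses to reboot locked_out devices; fall back.
            return "sut"
        return "mozpool"
    # Tegras and other non-mozpool devices keep the SUT reboot path.
    return "sut"
```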
Assignee: nobody → kmoir
Product: → Release Engineering
There are two scripts that are invoked by the mozharness scripts and from the command line.

calls verifyDevice
calls canPing -> reboots if it cannot ping
checksut -> calls updateSUTVersion -> calls updateSUT -> calls -> def doUpdate in ->
checkAndFixScreen -> calls soft_reboot (not called now)
cleanupDevice -> calls cleanupDevice in which sets reboot_needed = True for pandas and calls soft_reboot_and_verify in via command line main
-> calls installOneApp -> calls installApp -> calls _runCmds
 then finds robocop.apk and tries to install it too via installOneApp
 then calls waitForDevice
waitForDevice waits for the device to become available again after the install, although I can't find a call to reboot the device.

The other non-mozpool reboot is the step outside of the mozharness script. This should not be needed, because the mozharness script calls the close_request action, which returns the device to the mozpool. Setting the reboot_command to a blank string in the config will stop these reboots.
PLATFORMS['android']['mozharness_config'] = {
    'mozharness_python': '/tools/buildbot/bin/python',
    'hg_bin': 'hg', 
    'reboot_command': "",
    'talos_script_maxtime': 10800,
Hey Kim,

Good thinking to highlight this - this could really help stability.

Also worth noting: there is a cronjob running on the foopies (/etc/cron.d/foopy) that runs /builds/ every 5 minutes. This monitors each of the pandas on the foopies (see /builds/watcher.log and /builds/<panda>/watcher.log) and also calls the following python code (in the tools repo, not in mozharness):


I think it would make sense at the same time to review what overlap exists between and their mozpool management, since this may also be a place where a conflict of interest could occur.

This code was written at a time when mozpool was not managing all pandas, and it is also used for tegras, so a review might highlight functionality that can be disabled/removed where it is not appropriate for mozpool-managed devices.

I believe kitten herder has been disabled for pandas and tegras.
Not sure about other tools that might be watching pandas too (slave api, nagios, ...).
Are there any other buildbot steps that could be running, which might be rebooting pandas, or is that all migrated to mozharness now?

Maybe we should document all of this as a first step to validating the integrity of the current architecture.
(The reason I suggest we document it is simply that I'm not sure anybody has a *full* overview of *all* the systems that are potentially validating and rebooting devices. If we capture this, we might realise there is redundancy in there, and that redundancy might be a cause of the twistd failures, disconnects, unexpected reboots etc. that we see.)
The two components that you both mention should be the full picture.
I doubt that we're doing calls to slaveapi to reboot pandas. In any case, it would be a releng engineer running it manually.
since the mozharness script returns the device to the pool with the close_request action

tested in staging
Attachment #821709 - Flags: review?(armenzg)
Comment on attachment 821709 [details] [diff] [review]
removes the final reboot via outside mh script

Review of attachment 821709 [details] [diff] [review]:

I'm sure we would have caught this on staging if it had turned the step to orange:

It seems we will now be able to save 1min per job!

FYI, I would have thought that None would have been better than ''; however, I'm happy to know that the empty string also works.

/tools/buildbot/bin/python scripts/external_tools/ -f ../reboot_count.txt -n 1 -z
 in dir /builds/panda-0127/test/. (timeout 1200 secs)
 watching logfiles {}
 argv: ['/tools/buildbot/bin/python', 'scripts/external_tools/', '-f', '../reboot_count.txt', '-n', '1', '-z']
 using PTY: False
[sudo] password for cltbld: 
Sorry, try again.
[sudo] password for cltbld: 
Sorry, try again.
[sudo] password for cltbld: 
Sorry, try again.
sudo: 3 incorrect password attempts
Attachment #821709 - Flags: review?(armenzg) → review+
Comment on attachment 821709 [details] [diff] [review]
removes the final reboot via outside mh script

None is better than an empty string
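For what it's worth, both values work for the same reason: the reboot step is presumably skipped whenever reboot_command is falsy, and both None and '' are falsy in Python. A minimal illustration of that assumption (maybe_reboot is a made-up name, not actual buildbot code):

```python
# Illustrative only: a falsy reboot_command means the reboot step is
# skipped entirely, so None and "" behave identically here.
def maybe_reboot(reboot_command, run):
    if not reboot_command:
        return None  # no command configured; skip the reboot step
    return run(reboot_command)
```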
Attachment #821709 - Flags: checked-in+
in production
Dustin: Our mozpool scripts return the devices to the pool once they are done with them via close_request. They used to have a relay reboot invoked via mozharness outside of mozpool after this step, but I removed it last week. Am I correct in my assumption that once the mozpool request is closed, no further action such as a reboot is required? I'm seeing some weird behaviour on some of our masters with pandas not attaching, so I was wondering what your thoughts were on this.
Flags: needinfo?(dustin)
That's correct.  You certainly shouldn't be doing anything (ping, SUT, reboot, etc.) to a device after releasing it or before successfully requesting it, nor should *anything* *at* *all* be talking to the relays.  So I'm glad you removed that :)
Flags: needinfo?(dustin)
Callek: So the approach I have used to address the verify reboots is to not reboot the panda in cleanupDevice in and then reboot it via mozpool when returns. I don't really like this approach because it is not elegant, but I don't know of another way to implement it. Do you have any suggestions? (I'm still working through some issues with this approach.)
Flags: needinfo?(bugspam.Callek)
so my thought here...

When running inside mozharness I *believe* we have access to mozpoolclient module.

So we modify to use mozpoolclient instead of talking to the relay.

We could even do a try/except on the import if we really want to be reliable here.

Alternatively we could reimplement the mozpoolclient parts we care about and call them directly, without caring whether we're in mozharness or not.

We call soft_reboot in more places than JUST during a harness run, specifically and .

-- Either approach requires us to NOT do any device manipulation outside of buildbot running (e.g. if we're a panda/device-in-mozpool we don't do anything unless we're running IN a buildbot job)
Flags: needinfo?(bugspam.Callek)
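A rough sketch of the try/except import idea, assuming the module is importable as mozpoolclient when sut_lib runs under mozharness. The module path, the MozpoolHandler name, and the soft_reboot signature here are all guesses for illustration, not the real APIs:

```python
# Guarded import: use mozpool when the client library is available,
# fall back to the legacy relay path (e.g. for tegras) when it is not.
try:
    from mozpoolclient import MozpoolHandler  # name assumed, not verified
    HAVE_MOZPOOL = True
except ImportError:
    MozpoolHandler = None
    HAVE_MOZPOOL = False

def soft_reboot(device, relay_reboot):
    """Reboot via mozpool if we can, otherwise via the relay board."""
    if HAVE_MOZPOOL:
        # A real implementation would call through MozpoolHandler here.
        return "mozpool-reboot"
    # Legacy path: talk to the relay directly.
    return relay_reboot(device)
```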
Working on this now and there are a lot of dependencies from mozharness and mozpoolclient that need to be pulled into to accomplish this.  Callek suggested that it might be a good idea to put these all into a separate module that could be installed in the venv and accessible via
This is a script I was using on the foopy to test things before I ran into roadblocks and am now reconsidering the approach in comment #15.  My intention was to incorporate this bit of code into the soft_reboot in  The issue is that and etc are invoked via a script so they don't have any access to the mozpool handler that is currently associated with that device.  I don't know if you can create a second mozpool handler to work with a device that already has one open with it and try to reboot the device via device_power_cycle in  

My next thoughts would be to import and and invoke them that way instead of the command line.  This doesn't cover all of the times that soft_reboot is called but this could be refactored.  Or serialize the state of the mozpool handler objects, write it to a file and reload from within by adding a command line argument when the scripts are invoked.  I don't know.  Basically it's a problem that the scripts are invoked versus imported because we want to access the mozpool handler.
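The serialize-and-reload option above could look something like this. The fields stored are guesses at the minimum a separately-invoked script would need to find the open mozpool request; they are not the real handler's attributes:

```python
import json

# Persist the minimal mozpool request state to disk so that a script
# invoked later from the command line can locate the open request
# instead of needing the in-process mozpool handler object.
def save_request_state(path, device, request_url):
    with open(path, "w") as f:
        json.dump({"device": device, "request_url": request_url}, f)

def load_request_state(path):
    with open(path) as f:
        return json.load(f)
```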
Assignee: kmoir → bugspam.Callek
Blocks: 932231
Component: Platform Support → Mozharness
This is now very low priority, but looks like Kim has made the most recent efforts here.

It will also likely become easier to architect once tegras are disabled.

Either way it is still a valid bug in our current use case of pandas (we shouldn't be touching relay boards directly, and should ideally remove the need for devices.json or at least for devices.json to manage port/host/etc of the power units.)
Assignee: bugspam.Callek → kmoir
This can be closed since pandas are on their way out in bug 1186615.
Closed: 9 years ago
Resolution: --- → INVALID