Tracking Bug: On-Demand Release Testing

RESOLVED FIXED

Status

Mozilla QA Graveyard
Mozmill Automation
RESOLVED FIXED
7 years ago
4 years ago

People

(Reporter: geo, Assigned: geo)

Tracking

Firefox Tracking Flags

(Not tracked)


Attachments

(1 attachment, 3 obsolete attachments)

This is the tracking bug for the work on the on-demand update testing system, including the testing daemon and the test executor. 

It also includes the build staging script, which is currently the first WIP on Bug 628659. 

I am intentionally not including that bug as a dependency, since its scope is broader: it covers handling updates and BFTs, and perhaps eventually executing the tests themselves. So instead I simply note it here as a related bug.

I'll follow up with patches generated to get these components into our hg repo.
Created attachment 540267 [details] [diff] [review]
Scripts for on-demand update testing
Attachment #540267 - Flags: review?(hskupin)
Comment on attachment 540267 [details] [diff] [review]
Scripts for on-demand update testing

>+++ b/heartbeat-emitter.py

Can we please push all files into a subdirectory? Or did you have issues including packages from directories higher in the hierarchy?

>+# The Original Code is MozMill Test code.

"MozMill automation code"

>+# This script generates a periodic heartbeat in order to keep traffic going
>+# across the Mozmill queue. This prevents the on-demand queue daemons from
>+# freezing due to silent sockets closing themselves.

Please use a """ docstring to set the description for the whole file. See the download.py script as a reference.

>+# Import the pulse publishers
>+from mozillapulse import publishers
>+from time import sleep
>+import datetime
>+
>+# Import some pre-baked message types
>+# Note messages can really be anything, these are just to make it clear
>+# for consumers
>+from mozillapulse.messages import base
>+
>+import sys

It would be nice if we could separate the system's default packages from the additional ones that have to be installed.
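What that grouping might look like (a sketch; the mozillapulse imports are left commented out so the snippet stands alone):

```python
# Standard library -- ships with Python, nothing to install.
import datetime
import sys
from time import sleep

# Third-party -- has to be installed separately, so kept in its own block.
# from mozillapulse import publishers
# from mozillapulse.messages import base

print('stdlib imports grouped separately from third-party ones')
```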

>+            now = datetime.datetime.now().isoformat()
>+            print "Sending heartbeat: %s" % now
>+            pulse.publish(mymessage)
>+            sleep(60)

I need a bit of clarification. How will other consumers react to those messages? Will they see them? How will it work with multiple instances of this script running across different machines? Can the messages be distinguished?

>+++ b/heartbeat-listener.py
[..]
>+# The Original Code is MozMill Test code.
[..]
>+# This is a script to verify that the heartbeats are travelling across the
>+# appropriate queue.
>+# 
>+
>+# Import
>+from mozillapulse import consumers
>+import datetime
>+import uuid

Same as mentioned above.

>+def output (message):
>+    print message

What's the purpose of this wrapper function?

>+def got_message(data, message):
>+    received = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S%z")

You should use utcnow() instead of now().

>+++ b/push_update.py
[..]
>+# The Original Code is MozMill Test code.
>+
>+# Import the pulse publishers
>+from mozillapulse import publishers
>+
>+# Import some pre-baked message types
>+# Note messages can really be anything, these are just to make it clear
>+# for consumers
>+from mozillapulse.messages import base
>+
>+import sys
>+from optparse import OptionParser

As mentioned above.

>+def main():
>+    # args/opts
>+    usage = """usage: %prog [options] branch channel
>+
>+  Pushes a request for an on-demand update test to the listening daemon.
>+
>+  Branch and channel must be supplied, and will be passed to the update
>+  script run by the daemon.
>+
>+  If platform is supplied as an option, only daemons for that platform
>+  will run update tests. If platform is not supplied, all daemons will
>+  run update tests.
>+
>+  If cluster is supplied as an option, only daemons for that cluster
>+  will run update tests. If cluster is not supplied, only daemons that
>+  did not supply a cluster will run update tests."""

This comment should be global. Then it will automatically be used when --help is specified.

>+    parser.add_option('-c', '--cluster',
>+                      action='store', type='string', dest='cluster',
>+                      help='name of cluster to push to')

This is an option, so what do we default it to?

>+    parser.add_option('-p', '--platform',
>+                      action='store', type='string', dest='platform',
>+                      help='platform to push to [default: all]')

Please specify the default value via 'default="all"'.

>+    # Did we specify cluster? If so, add the cluster key
>+    if options.cluster:
>+        mymessage.data['cluster'] = options.cluster
>+        cluster_string = 'cluster=%s' % (options.cluster)
>+    else:
>+        cluster_string = 'unclustered'

What does it mean technically if we do not set the cluster? Does any listener that receives the signal start executing those tests? I think we should always include the cluster property with the right value (a specific machine or 'all') set.

>+    # Did we specify platform? If so, add the platform key
>+    if options.platform:
>+        mymessage.data['platform'] = options.platform
>+        platform_string = 'platform "%s"' % (options.platform)
>+    else:
>+        platform_string = 'all platforms'

Same comments as for cluster.

>+    print 'Requesting update tests (%s) for %s: branch=%s, channel=%s' % (cluster_string,
>+                                                                          platform_string,
>+                                                                          branch,
>+                                                                          channel)

Please use placeholder descriptors here like %(CLUSTER)s. Please see the scraper package.

>+    # Make a publisher
>+    pulse = publishers.PulseTestPublisher()
>+
>+    # Send the message to the broker through the proper exchange with the correct
>+    # routing key and payload
>+    pulse.publish(mymessage)
>+
>+    # Disconnect to be nice to the server
>+    pulse.disconnect()

Can there be any exceptions? If that's the case we should handle disconnect in the same way as in the heartbeat scripts.

>+++ b/updated.py

We should find a better name. I assume the 'd' suffix means daemon? In the repo it's not understandable. What about update-listener, which we can later rename to ondemand-listener?

>+    p = subprocess.Popen(args)
>+    retcode = p.wait()
>+    # XXX: should do something if failed
>+    print '  finished, return code: %s' % retcode

Please mention that we lack this capability right now due to a bug in the automation scripts, and reference its ID.

>+# Define a callback
>+def got_message(data, message):
>+    global platform
>+    global script
>+    global cluster
>+    global debug

Do we really have to redeclare those global variables? I have never seen that before.

>+        if requested_platform:
>+            print '  platform = "%s"' % requested_platform
>+        else:
>+            print '  platform = all platforms'
>+
>+        branch = data['payload']['branch']
>+        channel = data['payload']['channel']
>+        print '  branch = %s' % (branch)
>+        print '  platform = %s' % (platform)

Nit: printing platform twice?

>+def main():
>+    # args/opts
>+    usage = """usage: %prog [options] platform
>+
>+  Launches a listening daemon for the on-demand update system.
>+
>+  Platform must be supplied. If an update is requested without a platform
>+  or for the supplied platform, the daemon will run the specified update
>+  script (or the default script, if not specified).
>+
>+  If a cluster is specified, the daemon will only run updates for requests
>+  on that cluster. If no cluster is specified, the daemon will only run
>+  updates for requests with no cluster."""

Same as above. Please make it a global comment.

>+    parser.add_option('-s', '--script',
>+                      action='store', type='string', dest='script',
>+                      default='./release_update.py',
>+                      help='update script to run [default: %default]')

Once I have finished my script it will be included in the config file, so we will not need this parameter anymore. Also, what is '%default'? A user will not know its value.

>+    # Tell the broker what to listen for and give the callback
>+    pulse.configure(topic='mozmill.#', callback=got_message)
>+
>+    # Block and call the callback function when a message comes in
>+    print 'Listening for update test requests...'
>+    pulse.listen()

What about a try/except/finally clause here? We should clean-up correctly.
Attachment #540267 - Flags: review?(hskupin) → review-
Created attachment 543889 [details] [diff] [review]
Scripts for on-demand release testing

OK, version 2. This does all the on-demand release work and incorporates most of the previous review comments.

Listener script runs as before.
Pusher script runs like:

release_push.py [options] bft branch
release_push.py [options] update branch channel

Reply to previous review follows:

(In reply to comment #2)
> >+++ b/heartbeat-emitter.py
> 
> Can we please push all files into a sub directory? Or have you had issues to
> include packages from directories higher in the hierarchy?

Done, they're in the "ondemand" subdirectory.

> >+# The Original Code is MozMill Test code.
> 
> "MozMill automation code"

Done, all four files.

> Please use the """ way of setting the description for the whole file. See
> the download.py script as reference.

Added module-level docstrings to all four files.

> >+# Import the pulse publishers
> >+from mozillapulse import publishers
>...
> >+import sys
> 
> Would be nice when we can separate the systems default packages from the
> additional ones, which have to be installed.

Done, all four files. One block for system packages, one block for pulse.

> 
> >+            now = datetime.datetime.now().isoformat()
> >+            print "Sending heartbeat: %s" % now
> >+            pulse.publish(mymessage)
> >+            sleep(60)
> 
> I would need a bit of clarification. How will other consumers react on those
> messages? Are they seeing those? How will it work with multiple instances of
> this script running across different machines? Can those messages be
> distinguished?

One emitter is needed somewhere for the entire system (across all clusters); there's no harm if more than one is running. The deal is that the sockets close if you don't keep them alive by sending traffic. The heartbeat is strictly a keepalive.

This is a bit of a hack. Eventually heartbeat/keepalive move into the pulse platform itself and we can get rid of it.
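The keepalive pattern, sketched with a stub standing in for mozillapulse's publisher (StubPublisher and emit_heartbeats are illustrative names, not the real API; the real script loops forever with a 60-second interval):

```python
import datetime
from time import sleep

class StubPublisher:
    # Stand-in for publishers.PulseTestPublisher so the pattern can run
    # without a broker connection.
    def __init__(self):
        self.sent = []

    def publish(self, message):
        self.sent.append(message)

def emit_heartbeats(pulse, beats, interval=0):
    # Publish a fixed number of beats; each message is traffic that
    # keeps the queue sockets from going silent and being closed.
    for _ in range(beats):
        now = datetime.datetime.now().isoformat()
        print('Sending heartbeat: %s' % now)
        pulse.publish({'heartbeat': now})
        sleep(interval)

pulse = StubPublisher()
emit_heartbeats(pulse, beats=3)
print('sent %d heartbeats' % len(pulse.sent))
```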

> >+def output (message):
> >+    print message
> 
> What's the purpose of this wrapper function?

Was a mistake leaving it in; I've removed it and reverted the file back to print statements.

> 
> >+def got_message(data, message):
> >+    received = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S%z")
> 
> You should use utcnow() instead of now().

The general rule is to use now() when you're doing something that displays to the console, since you generally want that to be system-local. utcnow() is generally only used for data structures, so they're time-agnostic.

In this case, it's a message for debug use, so you can see when the heartbeats froze or crashed out; system-local is appropriate. As it happens, most systems we'd run this on are on UTC anyway, so it doesn't much matter.
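The distinction in runnable form (the observed offset depends on the machine's timezone setting):

```python
import datetime

# now() is wall-clock local time, handy for console/debug output;
# utcnow() is timezone-agnostic, better suited to stored data.
local = datetime.datetime.now()
utc = datetime.datetime.utcnow()

# On a UTC-configured machine (as most of these daemons are), the two
# are effectively identical; elsewhere they differ by the UTC offset.
delta_hours = int(round((local - utc).total_seconds() / 3600.0))
print('local minus utc: %d hour(s)' % delta_hours)
```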

> >+    usage = """usage: %prog [options] branch channel
>...
> >+  did not supply a cluster will run update tests."""
> 
> This comment should be global. Then it will be automatically used when
> --help has been specified.

There's nothing automatic about it. Check your own download.py script--you have to specify description=__doc__ to make that happen, and the description kwarg isn't even documented for that constructor. The way I did it is exactly how the optparse docs say to do usage strings.

Plus, I tried moving this stuff to a docstring and doing it the way you suggested. The description=__doc__ approach hoses formatting in multi-line docstrings; the only reason yours work is that they're single-line.

Upshot is, I'm doing it the standard way, had problems doing it the non-standard way, and am going to keep it as-is.
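For reference, the optparse-documented usage pattern being defended here, sketched with an abbreviated usage string:

```python
from optparse import OptionParser

usage = """usage: %prog [options] branch channel

  Pushes a request for an on-demand update test to the listening daemon."""

parser = OptionParser(usage=usage)
# optparse strips the leading "usage: ", expands %prog to the script
# name, and renders the multi-line text verbatim in --help output.
help_text = parser.format_help()
print(help_text.splitlines()[0])
```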

> 
> >+    parser.add_option('-c', '--cluster',
> >+                      action='store', type='string', dest='cluster',
> >+                      help='name of cluster to push to')
> 
> This is an option, so what are we default it to?

It defaults to "None". Options can do so, what with being optional. :)

I did put a [default=no cluster] in the help string though.

> 
> >+    parser.add_option('-p', '--platform',
> >+                      action='store', type='string', dest='platform',
> >+                      help='platform to push to [default: all]')
> 
> Please specify the default value via 'default="all"'.

Default isn't "all". That would be a platform named "all". Default, as above, is None, which internally means "no specific platform," which is to say, all of them. 

The reason you do these this way is that you have no idea what people will want to do in the future.

So you don't set up magic sentinel strings that mean what you want because someone might use them and get awfully confused. Instead, you use a value like None or undefined or what-have-you because they can't possibly specify that. 

It's good interface practice, whether you're talking functions or scripts--magic values are bad.
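The None-versus-sentinel point, sketched:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-p', '--platform', dest='platform',
                  help='platform to push to [default: all]')

# With no default= supplied, optparse stores None -- a value no user
# can possibly type, unlike a magic string such as "all".
options, _ = parser.parse_args([])
print(options.platform)

options, _ = parser.parse_args(['-p', 'linux'])
print(options.platform)
```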

> What does it mean technically if we do not set the cluster? Do any listener
> which receives the signal start to execute those tests? I think we should
> always include the cluster property with the right value (specific machine
> or 'all') set.

Everything without a cluster is logically lumped together into a "no cluster" category. So if you send a request with no cluster, all the listeners with no clusters will react to it. If you send a request with a cluster, only listeners for that cluster will react to it.

I changed the wording in usage a bit to reflect this and make it a little clearer.

I don't really want to make this a required param--it's a PITA to type every time you do a request, plus you'd have to know the exact name of the cluster. For now, I recommend editing the default when it's staged on a machine so it doesn't have to be retyped.

Once I have a better idea how this will be used (directly for the medium/long term, a web front end, what?), I'll revisit this and see whether a conf file makes sense.
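The matching rule described above can be sketched as (matches_cluster is an illustrative helper, not the script's actual function name):

```python
def matches_cluster(payload, my_cluster):
    # A request is handled only when its cluster equals the listener's;
    # "no cluster" (None on both sides) also counts as a match.
    return payload.get('cluster') == my_cluster

print(matches_cluster({}, None))                           # unclustered pair: handled
print(matches_cluster({'cluster': 'qa-set1'}, None))       # not for us
print(matches_cluster({'cluster': 'qa-set1'}, 'qa-set1'))  # our cluster
```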

> >+    # Did we specify platform? If so, add the platform key
>...
> Same comments as for cluster.

On this one, "don't specify the option to run all platforms" is fully appropriate. Platform is a limiting option; the default is to run everything, but you can address one in the rare case you need to.

It differs from cluster in that your normal case will almost always involve not specifying platform. So, it'll stay an option.

> Please use placeholder descriptors here like %(CLUSTER)s. Please see the
> scraper package.

I switched everything to them (though I used the PEP-8 standard of lowercase names :) ). I have mixed feelings--they clutter the code in the one-substitution case, but they're clearer in the multiple-substitution case. I'm not sure what we should do long-term, but for now I went with it.
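The two substitution styles side by side (the values here are hypothetical):

```python
branch, channel = 'mozilla-release', 'betatest'

# Positional %s: terse, fine for the one-substitution case.
one = 'branch=%s' % branch

# Named %(key)s placeholders with lowercase (PEP-8-style) names:
# clearer when several values are substituted at once.
many = 'Requesting update tests: branch=%(branch)s, channel=%(channel)s' % {
    'branch': branch,
    'channel': channel}

print(one)
print(many)
```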

> >+    # Disconnect to be nice to the server
> >+    pulse.disconnect()
> 
> Can there be any exceptions? If that's the case we should handle disconnect
> in the same way as in the heartbeat scripts.

Yeah there can. I added a try-finally on the push script.
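The cleanup pattern, sketched with a stub publisher whose publish always fails (an illustrative class, not the mozillapulse API):

```python
class StubPublisher:
    def __init__(self):
        self.disconnected = False

    def publish(self, message):
        raise IOError('broker unreachable')

    def disconnect(self):
        self.disconnected = True

pulse = StubPublisher()
try:
    pulse.publish({'branch': 'release', 'channel': 'betatest'})
except IOError as e:
    print('publish failed: %s' % e)
finally:
    # disconnect() runs whether or not publish() raised.
    pulse.disconnect()

print('disconnected: %s' % pulse.disconnected)
```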

> 
> >+++ b/updated.py
> 
> We should find a better name. I assume the 'd' suffix should mean a daemon?
> In the repo it's not understandable. What about update-listener, which we
> can later rename to ondemand-listener.

It's release_listener.py and release_push.py now.

> 
> >+    p = subprocess.Popen(args)
> >+    retcode = p.wait()
> >+    # XXX: should do something if failed
> >+    print '  finished, return code: %s' % retcode
> 
> Please mention that we lack this capability right now due to a bug in the
> automation scripts. Please reference its id.

Oh, come on. I'm not going to file a bug for every TODO that I leave as "this might be a good idea" in a script. It's a place to expand later and I'm noting the place for -when- it comes up, not marking a specific defect.

So, I removed the comment to make you happy--the script does what it does, and that's it for now--but this one wasn't an appropriate review remark IMO.

> 
> >+# Define a callback
> >+def got_message(data, message):
> >+    global platform
> >+    global script
> >+    global cluster
> >+    global debug
> 
> Do we really have to redeclare those global variables? I have never seen
> that before.

Then you haven't used global variables very much. You need to put a global <var> statement in any scope (other than the global one) in which you want to use them. Those statements pulled them into function scope.

It's necessary to share them between main() and the callback; there's no other way to do so. So, expect global declarations in each of those functions.

> 
> Nit: printing platform twice?

Good catch, fixed.

> >+    parser.add_option('-s', '--script',
> >+                      action='store', type='string', dest='script',
> >+                      default='./release_update.py',
> >+                      help='update script to run [default: %default]')
> 
> Once I have finished my script it will be included in the config file, so we
> will not need that parameter anymore. Also what is '%default'? An user will
> not know about its value.

%default is automatically plugged in by optparse with the value specified as the default one. They'll see the actual default script values.
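How %default behaves, sketched:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-s', '--script', dest='script',
                  default='./release_update.py',
                  help='update script to run [default: %default]')

# optparse expands %default in help strings to the option's default
# value, so --help shows the real path.
help_text = parser.format_help()
print('./release_update.py' in help_text)
```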

> 
> >+    # Tell the broker what to listen for and give the callback
> >+    pulse.configure(topic='mozmill.#', callback=got_message)
> >+
> >+    # Block and call the callback function when a message comes in
> >+    print 'Listening for update test requests...'
> >+    pulse.listen()
> 
> What about a try/except/finally clause here? We should clean-up correctly.

Doesn't apply to listener scripts--there's no connection to clean up that isn't completely encapsulated in the pulse library.

There is the need to ack() messages (though this is somewhat optional), and I added a try..finally in the central dispatch routine to do that.

In addition to the above, I made some fairly large changes to the script logic and greatly simplified the code, moving away from the mass of if..elif..elses that handled the listener reporting before.

Your suggestion of always making sure the keyvals were defined in the message helped simplify things, as did some basic refactoring. You should find it somewhat easier to follow now, but ask me any questions you might have.
Attachment #540267 - Attachment is obsolete: true
Attachment #543889 - Flags: review?(hskupin)
Changing the summary to reflect that the scripting has expanded to updates and BFTs, not just updates.
Summary: Tracking Bug: On-Demand Update Testing → Tracking Bug: On-Demand Release Testing
Gonna eat some late night crow, now.

(In reply to comment #3)
> > >+    p = subprocess.Popen(args)
> > >+    retcode = p.wait()
> > >+    # XXX: should do something if failed
> > >+    print '  finished, return code: %s' % retcode
> > 
> > Please mention that we lack this capability right now due to a bug in the
> > automation scripts. Please reference its id.
> 
> Oh, come on. I'm not going to file a bug for every TODO that I leave as
> "this might be a good idea" in a script. It's a place to expand later and
> I'm noting the place for -when- it comes up, not marking a specific defect.
> 
> So, I removed the comment to make you happy--the script does what it does,
> and that's it for now--but this one wasn't an appropriate review remark IMO.

Just realized you thought the XXX was referring to the fact that the other scripts don't return retcodes, in which case that was an appropriate comment.

It doesn't; it referred to the idea that the scripts should email or something on a failure--it really was about expanding these particular scripts. I figured by the time we got around to that, the others would be returning codes anyway.


> > >+# Define a callback
> > >+def got_message(data, message):
> > >+    global platform
> > >+    global script
> > >+    global cluster
> > >+    global debug
> > 
> > Do we really have to redeclare those global variables? I have never seen
> > that before.
> 
> Then you haven't used global variables very much. You need to put the global

...or I haven't. :|

You do have to redeclare them in any scope in which you want to modify them. You don't if you only want read-only access.

So, I could take these out of the callback, since it doesn't write to the variables. 

I will say that I see an advantage to keeping them, though--you get weird behavior if you -do- write to something that should be global without doing this. It creates a new local variable instead, which shadows the global one.

I had that situation come up when developing these, and it was actually kind of hard to figure out what was going on. It would've been an easier bug to catch with everything having write access.

But I could go either way.
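The read/write distinction, sketched (the counter variable is illustrative):

```python
counter = 0

def read_only():
    # Reading a global needs no declaration.
    return counter

def write_without_global():
    # Without 'global', assignment creates a new local variable that
    # shadows the module-level one -- the confusing bug described above.
    counter = 99
    return counter

def write_with_global():
    global counter
    counter = 1
    return counter

print(read_only())             # 0
print(write_without_global())  # 99, but...
print(counter)                 # ...the global is still 0
print(write_with_global())     # 1
print(counter)                 # now the global really changed
```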
(In reply to comment #5)
> I will say that I see an advantage to keeping them, though--you get weird
> behavior if you -do- write to something that should be global without doing
> this. It'll create a temporary local variable instead, which hides the
> global one. 

Thanks for the update on this. That was really something I hadn't used before; I've only ever had read access to global variables. So leave it as you say.
Status: NEW → ASSIGNED
Assignee: nobody → gmealer
Created attachment 543896 [details] [diff] [review]
Scripts for on-demand release testing, v1.1

Realized I'd forgotten a qrefresh anyway, and needed to update. So, I also removed the callback's redundant global statements.
Attachment #543889 - Attachment is obsolete: true
Attachment #543889 - Flags: review?(hskupin)
Attachment #543896 - Flags: review?(hskupin)
Comment on attachment 543896 [details] [diff] [review]
Scripts for on-demand release testing, v1.1

>+# BFT test a build
>+def test_bft(script, branch, platform):
>+    args = ('python', script, branch, platform)
>+    exec_test(args)

BFT is wording we shouldn't use anymore, because a lot of our tests are also FFT tests. So please rename it to test_functional, matching how the test-run script is named.

>+def screen_request(data, cluster, platform, debug):
>+    # Based on cluster, are we responding to this request?
>+    # We are if they're equal (no cluster will be None = None)
>+    requested_cluster = data['payload']['cluster']
>+    if requested_cluster != cluster:

Thanks for the good explanation here. That makes it really understandable.

>+def perform_update(data, cluster, platform, update_script, debug):
[..]
>+    test_update(script, branch, platform, channel)
[..]
>+def perform_bft(data, cluster, platform, bft_script, debug):
[..]
>+    test_bft(script, branch, platform)

For now it's OK, but I don't see a reason why we need test_bft and test_update. We could call exec_test directly with the appropriate number of parameters. We can change this once we read and handle the script name from the config file.
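Calling a shared runner directly, sketched (exec_test mirrors the patch's subprocess pattern; the '-c' payload stands in for a real test script):

```python
import subprocess
import sys

def exec_test(args):
    # Launch the test script as a child process and return its exit code.
    p = subprocess.Popen(args)
    return p.wait()

# The wrappers only differ in how many arguments they pass, so the
# runner can be called directly with a varying-length tuple.
retcode = exec_test((sys.executable, '-c', 'print("functional run")'))
print('finished, return code: %s' % retcode)
```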

>+    print 'Listening for next BFT test request...'

This should print "Listening for next request...". Otherwise it gives the wrong indication.

>+def main():
>+    usage = """%prog [options] platform
>+
>+  Launches a listener for the on-demand testing system.
[..]
>+  If cluster is supplied as an option, the listener will only run tests
>+  for requests on that cluster. If cluster is not supplied, the listener
>+  will only run tests for requests with no cluster."""

So you say that's the preferred way and that it's specified in PEP-8. Can you please give me a link to that? I can't find anything like it, and as said, it's very hard to read. The Python docs explicitly mention the description parameter (http://docs.python.org/library/optparse.html).

>+    platform = args[0]

I hope we can get rid of this workaround in the future by correctly identifying the OS version. Let's see if I can get to it this quarter.

>+    print 'Listening for requests that include platform=%(platform)s' % \
>+            {'platform': platform}
>+
>+    print 'Will run update tests using script=%(script)s' % \
>+            {'script': update_script}
>+
>+    print 'Will run bft tests using script=%(script)s' % \
>+            {'script': bft_script}

nit: 4 spaces for indentation

r=me with the comments addressed and remaining questions solved.
Attachment #543896 - Flags: review?(hskupin) → review+

Comment 9

A Pivotal Tracker story has been created for this Bug: https://www.pivotaltracker.com/story/show/15490957

Comment 10

Henrik Skupin changed story state to unstarted in Pivotal Tracker

Comment 11

Henrik Skupin changed story state to started in Pivotal Tracker
(In reply to comment #8)
> Comment on attachment 543896 [details] [diff] [review] [review]
> Scripts for on-demand release testing, v1.1
> 
> >+# BFT test a build
> >+def test_bft(script, branch, platform):
> >+    args = ('python', script, branch, platform)
> >+    exec_test(args)
> 
> BFT is a wording we shouldn't use anymore because a lot of our tests are
> also FFT tests. So please rename it to test_functional as how the test-run
> script is named.

OK. That also means I need to change the params, etc. I wish there were something shorter we could use for the parameter, though; functional is kind of long to have to type all the time.

> 
> >+def screen_request(data, cluster, platform, debug):
> >+    # Based on cluster, are we responding to this request?
> >+    # We are if they're equal (no cluster will be None = None)
> >+    requested_cluster = data['payload']['cluster']
> >+    if requested_cluster != cluster:
> 
> Thanks for the good explanation here. That makes it really understandable.

Sure! Thanks for raising the clarity issues before. That really made me take a hard look at how to refactor the code to make it clearer. The previous structure was kinda crappy. :)

> 
> >+def perform_update(data, cluster, platform, update_script, debug):
> [..]
> >+    test_update(script, branch, platform, channel)
> [..]
> >+def perform_bft(data, cluster, platform, bft_script, debug):
> [..]
> >+    test_bft(script, branch, platform)
> 
> For now it's ok but I don't see a reason why we need test_bft and
> test_update. We could directly call exec_test with the appropriate number of
> parameters. We can change this once we read and handle the script name from
> the config file.

I'll take a second look at it. I probably anticipated a varying code path based on update vs. functional, but it's entirely possible I was wrong, or at least that we don't need it yet.

If so, I'm happy to simplify unless you think that's too big a change for an r=me.

> 
> >+    print 'Listening for next BFT test request...'
> 
> This should print out "listening for next request". Otherwise it gives a
> wrong indication.

Absolutely, good catch.

> 
> >+def main():
> >+    usage = """%prog [options] platform
> >+
> >+  Launches a listener for the on-demand testing system.
> [..]
> >+  If cluster is supplied as an option, the listener will only run tests
> >+  for requests on that cluster. If cluster is not supplied, the listener
> >+  will only run tests for requests with no cluster."""
> 
> So you say that's the preferred way and has been specified in PEP-8. Can you
> please give me a link to that? I can't find anything like that and as said,
> it's very hard to read. The Python docs explicitly mention the description
> parameter (http://docs.python.org/library/optparse.html)

Sorry, I must have been unclear and we crossed some wires.

The PEP-8 comment was concerning the string placeholders. You were using all-caps. Since they correspond to key names in a dict where the keys are used as fields, you'd normally use variable-style naming.

Where I said I was doing it the "right" way, I was referring to the optparse docs you also reference. I'm following their examples for setting up usage statements exactly.

Regarding description, you're right. I only found it as an option on OptionGroup before, but I see now that I missed it listed under the constructor params.

However, description is also documented as handling only a single paragraph, which it reformats for the terminal. As I said before, I tried your method of assigning the docstring to description before I read that; running multi-paragraph text through it spits out malformatted garbage.

It's possible the very top paragraph could be the global docstring and double as description, but what you're doing with it doesn't represent any sort of standard I found; it's fine, but it's just what you've chosen to do with it. I chose something different, and pretty much stand behind that decision.

OTOH, if I missed a standard for that somewhere, please show me where it's independently documented and I'll definitely make the change.

> 
> >+    platform = args[0]
> 
> I hope that we can get rid of this workaround in the future by correctly
> identifying the os version. Lets see if I can get to it this quarter.

Sure. I don't really see it as a workaround, though--you might want a 32-bit daemon deployed on a 64-bit OS. I wouldn't assume this should be changed.

> 
> >+    print 'Listening for requests that include platform=%(platform)s' % \
> >+            {'platform': platform}
> >+
> >+    print 'Will run update tests using script=%(script)s' % \
> >+            {'script': update_script}
> >+
> >+    print 'Will run bft tests using script=%(script)s' % \
> >+            {'script': bft_script}
> 
> nit: 4 spaces for indentation

My intention was to indent it under the printed statement. I used four-space indents, and the continuation sits at the first tab stop that lands under that statement.

I guess I could go with aligning the { with the '; that's probably more consistent with PEP-8's guidance on hanging indents. Four spaces from the left margin looked pretty cluttered when I tried it.

That's going to be a lot of small changes throughout the files though.

At any rate, I'd like to hear your opinion on how you'd indent this.

> 
> r=me with the comments addressed and remaining questions solved.

I appreciate that! With the scope of the requested changes, though (they're small but touch a lot) I think I'd want a second review.
(In reply to comment #12)
> > BFT is a wording we shouldn't use anymore because a lot of our tests are
> > also FFT tests. So please rename it to test_functional as how the test-run
> > script is named.
> 
> OK. Also means I need to change the params, etc. I wish there's something
> shorter we could use for the parameter though; functional is kind of long to
> have to type all the time.

So let's leave it using BFT. Once the selection of the test-run is managed by the configuration file, we will no longer need this function and can get rid of all the BFT and update names.

> > For now it's ok but I don't see a reason why we need test_bft and
> > test_update. We could directly call exec_test with the appropriate number of
> > parameters. We can change this once we read and handle the script name from
> > the config file.
> 
> I'll take a second look at it. I probably anticipated a varying code path
> based on update vs. functional, but entirely possible I was wrong or at
> least we don't need it yet. 
> 
> If so, I'm happy to simplify unless you think that's too big a change for an
> r=me.

Keep it as mentioned above. We will have to restructure the code anyway once the final bits have landed. The current code doesn't hurt.

> It's possible the very top paragraph could be the global docstring and
> double as description, but what you're doing with it doesn't represent any
> sort of standard I found; it's fine, but it's just what you've chosen to do
> with it. I chose something different, and pretty much stand behind that
> decision.

This was a review comment made by Jeff, so I'm not sure whether there is a standard or not. Please ask him if he can refer us to a useful URL.

> > >+    platform = args[0]
> > 
> > I hope that we can get rid of this workaround in the future by correctly
> > identifying the OS version. Let's see if I can get to it this quarter.
> 
> Sure. I don't really see it as a workaround though--you might want a
> 32-bit-platform daemon deployed on a 64-bit OS. I wouldn't assume this
> should be changed.

I see now. There is no way yet to handle 32-bit builds for a 64-bit platform in the config file. That's worth a future enhancement too.

> That's going to be a lot of small changes throughout the files though.
> 
> At any rate, I'd like to hear your opinion on how you'd indent this.

For a better readability I would have chosen:

> > >+    print 'Will run bft tests using script=%(script)s' % {
> > >+              'script': bft_script}

That would also take care of multi-line situations, so we do not have to indent twice like:

> > >+    print 'Will run bft tests using script=%(script)s' % \
> > >+            {'script': bft_script,
> > >+             'foo': bar}

> I appreciate that! With the scope of the requested changes, though (they're
> small but touch a lot) I think I'd want a second review.

Sure. Just request a review again.
Created attachment 547595 [details] [diff] [review]
Scripts for on-demand release testing, v2

* BFT -> functional			DONE
* test_bft/test_update			DONE
* Remove default cluster		DONE
* Use docstring for description		DONE
* Hanging indents on prints		DONE

Since I needed a re-review anyway, I went ahead and made all the requested changes. I don't see a point in putting something in that we don't both agree is pretty good.

A little commentary:

* All references to BFT now say functional (including the command line param). If it turns out this is a pain to type, we can loop back and shorten it. But you're right, that run isn't just BFTs anymore.

* I removed the test_bft/test_update routines in favor of just exec'ing. To make that work better, exec_test() was reworked a little to take a variable arg list (part of what those functions did was build the args list).
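
A minimal sketch of what that variable-arg shape looks like; the function and script names here are hypothetical stand-ins, not the actual ones from the patch:

```python
# Hypothetical sketch of a variable-argument exec_test(); the real
# script's function and file names may differ.
import sys

def exec_test(script, *args):
    """Build a test-run command from whatever args the caller supplies."""
    command = [sys.executable, script] + list(args)
    print('Executing: %s' % ' '.join(command))
    # The real implementation would hand this off to e.g. subprocess.call().
    return command

# The former test_bft/test_update wrappers collapse into direct calls:
exec_test('testrun_functional.py', '--channel=release', 'qa-set-1')
exec_test('testrun_update.py', '--target-buildid=20110101010101', 'qa-set-1')
```

Since the callers now build the argument list themselves, the update vs. functional code paths no longer need separate wrapper functions.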

* Cluster is no longer optional, and is now a positional parameter. That let me both simplify the screen_request() function and remove the get_cluster_str() function that accounted for None clusters.

* Because that simplified the help string considerably, I moved it all into the docstring and used description=__doc__ as suggested.

I will say, though, I did a fair amount of research on this after your comment. This is a nice shortcut for short descriptions, but really isn't a standard.

It's also not viable for a multi-paragraph help block, because that description will always go at the -bottom- of the usage, not the top. So there's no good way to just carve off the high-level part of the usage and put it in the docstring, since you usually want to see that first, and description can't be multi-paragraph.

The most common way I've found for a long usage statement is to do what I originally did and make a long usage string. The second most common way was to subclass optparse's classes and rewrite the help renderer, which'd be a bit of overkill.

argparse fixes these problems by giving you more options for formatting and allowing documentation of positional args. But we don't get that until 2.7.

The key here was the help getting simpler, and that let me do it the way suggested. But for complex help, I'd expect it to be the way I had it.
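
For reference, the description=__doc__ shortcut being discussed looks roughly like this (the docstring text and the --platform option are illustrative, not the real script's):

```python
"""Listens on the Mozmill queue and launches test-runs on demand.

(Hypothetical docstring standing in for the real script's text.)
"""
from optparse import OptionParser

def parse_args(argv):
    # description=__doc__ reuses the module docstring as the parser's
    # description. optparse reflows it and renders it as part of the
    # help output, which is why this only works well for short,
    # single-paragraph summaries.
    parser = OptionParser(usage='%prog [options] cluster',
                          description=__doc__)
    parser.add_option('--platform', default='linux',
                      help='platform the daemon should service')
    return parser.parse_args(argv)

options, args = parse_args(['--platform', 'mac', 'qa-set-1'])
print(options.platform, args[0])
```

The positional cluster argument still can't be documented in the option list itself; that's one of the gaps argparse later closed.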

* I fixed all hanging indents and also normalized strictly to 80 char width. Neither you nor I really had a solution that reflected PEP-8. Between that and the Google Python style document, I was able to decide on:

+            print 'Received request for cluster=%(cluster)s' % {
+                'cluster': requested_cluster}

...which I used when no value fit on the first line. If any of the values went on the first line, I vertically aligned under them as specified in PEP-8.

Where I had strings that went over the 80 char line, I also fell back on Google's guide and split them by putting them in parentheses so they'd implicitly concatenate across lines. So that's what that's about when you see it.
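
Concretely, the parenthesized implicit-concatenation trick looks like this (the message text is illustrative, not a string from the actual scripts):

```python
# Adjacent string literals inside parentheses are concatenated by the
# parser, so a long message can wrap across lines without backslashes
# or '+' operators.
requested_cluster = 'qa-set-1'
message = ('Received request for cluster=%(cluster)s; staging builds '
           'and scheduling the functional test-run') % {
    'cluster': requested_cluster}
print(message)
```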

Google's doc is here: http://google-styleguide.googlecode.com/svn/trunk/pyguide.html

This should be in pretty good shape at this point. Take a look and let me know what you think.
Attachment #543896 - Attachment is obsolete: true
Attachment #547595 - Flags: review?(hskupin)
Comment on attachment 547595 [details] [diff] [review]
Scripts for on-demand release testing, v2

Looks fine. I haven't run the scripts to verify the functionality, but I think you did a good chunk of that already. Let's get it landed and distributed to our machines for release testing. Thanks Geo.
Attachment #547595 - Flags: review?(hskupin) → review+
Geo, can we get this landed this week? I know we didn't have time last week, but we should get this project finalized soon. Thanks.
Landed as http://hg.mozilla.org/qa/mozmill-automation/rev/ec80e0b806f7

Wasn't a matter of the delay, per se. My hg access was still broken when we went on work-week, and I was trying to get it fixed. I'm OK now.
Status: ASSIGNED → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → FIXED
Great. Thanks for the landing!
Product: Mozilla QA → Mozilla QA Graveyard