Closed Bug 1284900 Opened 8 years ago Closed 8 years ago

Validate releasetasks using voluptuous

Categories

(Release Engineering :: Release Automation: Other, defect)


Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: rail, Assigned: csheehan)

References

Details

Attachments

(2 files, 1 obsolete file)

It would be great to use https://github.com/alecthomas/voluptuous to validate the tasks.
Would love to take this bug if no one is working on it or planning to work on it in the near future. I was just reading up on voluptuous whilst working on relpro release sanity. However, I've got my hands a little busy at the moment. Is there an ETA for this?
ETA: whenever someone defines one :)
TBH, I'm not sure if this is the best method, but it'd be great to try and evaluate it.
Assignee: nobody → csheehan
After playing with the package for a while, this seems like a good fit. It would allow us to check for a lot of potentially breaking values in the arguments (e.g. balrog_api_root must be a URL, appVersion must match a regex, etc.), as well as any mutually inclusive/exclusive arguments.

One idea I had was to use the schema_builder submodule to start from a base schema of essential kwargs, and use individual task schemas to build up a final schema based on whether a task will be included in the graph or not. For example, balrog_api_root would be optional in the base schema, unless publish_to_balrog_channels exists as a list of strings, in which case balrog_api_root must exist and map to a URL. The downside to this is that a lot of the conditional logic in release_graph.yml.tmpl would be repeated in the validation.

Are there any other things we would need for validation of releasetasks? And what do you think of the above? :)
Flags: needinfo?(rail)
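The conditional base-schema-plus-extension idea above can be sketched without the voluptuous dependency. This is a minimal, dependency-free illustration of the same logic (the real code would use voluptuous's Schema and extend); the key names follow the comment and are not the actual releasetasks kwargs.

```python
import re

# Illustrative patterns, assuming http(s) URLs and dotted version strings.
URL_RE = re.compile(r'^https?://\S+$')
VERSION_RE = re.compile(r'^\d+\.\d+(\.\d+)?$')

def validate_kwargs(kwargs):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if not VERSION_RE.match(kwargs.get('appVersion', '')):
        errors.append('appVersion must look like a version string')
    # balrog_api_root is optional in the base schema, unless
    # publish_to_balrog_channels is set, in which case it must be a URL.
    channels = kwargs.get('publish_to_balrog_channels')
    if channels is not None:
        if not (isinstance(channels, list) and
                all(isinstance(c, str) for c in channels)):
            errors.append('publish_to_balrog_channels must be a list of strings')
        if not URL_RE.match(kwargs.get('balrog_api_root', '')):
            errors.append('balrog_api_root must be a URL when publishing to balrog')
    return errors

assert validate_kwargs({'appVersion': '50.0'}) == []
assert validate_kwargs({'appVersion': '50.0',
                        'publish_to_balrog_channels': ['release']}) != []
```

In voluptuous itself, the same effect would come from extending a base Schema with a stricter one when the conditional flag is present.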
I'd really like to replace the imperative style of our tests (generate graph, find task, compare it with something else) with something declarative, which should reduce a lot of the boilerplate code we have in tests. I'm not that worried about conditionals - if you test a subset of a graph, you may not need them. In any case, these are just ideas. It'd be great to try to implement/port a couple of tests to see how we can approach this.
Flags: needinfo?(rail)
So you mean to use the library in the tests instead of our current method? My assumption was that we would use voluptuous to either check the inputs to ensure the graph creation will always work, or check the output to make sure all the tasks are correct. In any case, I will work on porting a few tests over to see what it looks like. :)
Flags: needinfo?(rail)
Yeah, that was the idea. Before you fully port things, can you come up with some examples? Then we can decide if it's worth porting.
Flags: needinfo?(rail)
The attached diff is an example of the do_common_assertions test ported to voluptuous. A few tests were left as-is because they were more difficult to implement than others, the most notable issue being that there is no way to make validation fail when a key/value is present in the data but shouldn't be (see https://github.com/alecthomas/voluptuous/issues/193). Let me know what you think :)
Attachment #8791837 - Flags: feedback?(rail)
Comment on attachment 8791837 [details] [diff] [review]
do_common_assertions implemented with voluptuous

Review of attachment 8791837 [details] [diff] [review]:
-----------------------------------------------------------------

Overall it looks great. Very easy to read the requirements!

Can you also touch the setUp() methods? We have tons of boilerplate there... Maybe we can create a bunch of yaml files and read them in setUp? Or one generic one and override some stuff. Maybe something like this:

    def setUp(self):
        ...
        kwargs = yaml.safe_load(path_to_generic_yaml)
        # override whatever you want to test
        kwargs["updates_enabled"] = True
        kwargs["updates_channels"] = ["beta", "release"]
        ...
        self.graph = make_task_graph(**kwargs)
        ...

::: releasetasks/test/firefox/__init__.py
@@ +81,5 @@
> + assert "treeherderEnv" in t['task']["extra"]
> + assert len([r for r in t['task']["routes"] if r.startswith("tc-treeherder")]) == 2
> +
> + correct_schema = TASK_SCHEMA.extend(generate_signature_schema(t['task']))
> + correct_schema = correct_schema.extend(PROVISIONER_ID_EXTENSIONS.get(t['task']['provisionerId']))

I wonder if you can pass multiple schema bits from different tests and merge them here. Something like `do_common_assertions(graph, [signing_payload_schema, requirements_schema, chunks_in_meta_schema])`. Or maybe create another function, something like `verify(task, *schemas)`.

@@ +83,5 @@
> +
> + correct_schema = TASK_SCHEMA.extend(generate_signature_schema(t['task']))
> + correct_schema = correct_schema.extend(PROVISIONER_ID_EXTENSIONS.get(t['task']['provisionerId']))
> +
> + assert t == correct_schema(t)

What does the output look like? Is it human readable?
Attachment #8791837 - Flags: feedback?(rail) → feedback+
(In reply to Rail Aliiev [:rail] from comment #9)
> Can you also touch the setUp() methods, we have tons of boilerplate there...
> Maybe we can create a bunch of yaml files and read them in setUp? Or one
> generic and override some stuff.

Yes, this will be done soon. I did some work on this, using the generic/override style you suggested. :)

> I wonder if you can pass multiple schema bits from different tests and merge
> them here. Something like `do_common_assertions(graph,
> [signing_payload_schema, requirements_schema, chunks_in_meta_schema])`. Or
> maybe create another function, something like `verify(task, *schemas)`.

So instead of having a large collection of test functions, each test case would have a schema to be added to the common-assertions schema, and we could just use that to verify? Sounds good to me, just want to make sure I am understanding you fully. :)

> How the output looks like? Is it human readable?

Calling correct_schema(t) will throw an error if t is not valid according to the schema, or return the validated output if it fits the schema. Voluptuous has an option to remove elements of the object if they are not present in the schema, but since we don't use it, it will return the same object that was passed in.
(In reply to Connor Sheehan [:sheehan] from comment #10)
> Yes, this will be done soon. I did some work on this, using the
> generic/override style you suggested. :)

Wheeee \o/

> So instead of having a large collection of test functions, each test case
> would have a schema to be added to the common assertions schema and we could
> just use that to verify? Sounds good to me, just want to make sure I am
> understanding you fully. :)

100% correct.
As per comment 9. :)
Attachment #8793582 - Flags: review?(rail)
Comment on attachment 8793582 [details] [review] Reduce test setUp() boilerplate keyword arguments Thank you! Looks so much better now!
Attachment #8793582 - Flags: review?(rail)
Attachment #8793582 - Flags: review+
Attachment #8793582 - Flags: checked-in+
I have been working on porting the tests over to voluptuous, and I have a few remarks/questions. :)

First, I have noticed that the test functions within the test classes generally either test properties of the full graph, or use get_task_by_name to test properties of a specific task. Given this structure, I think it would be easier to have two validation functions: one that validates the graph properties for a test, and one that validates task-specific properties. What are your thoughts on this?

The other thing I have noticed is that in certain tasks the task/payload/properties/build_number field is sometimes the int representation 3, and sometimes the string representation '3'. Not sure if this is expected, but I figured I would mention it.
Flags: needinfo?(rail)
(In reply to Connor Sheehan [:sheehan] from comment #14)
> Have been working on porting over the tasks to voluptuous, and I have a few
> remarks/questions to ask. :)

Sure thing! Feel free to ask anytime.

> First I have noticed that the test functions within the test classes
> generally either test properties of the full graph or use get_task_by_name
> to test properties of a specific task. Due to this structure I think it
> would be easier to have two validation functions, one that validates the
> graph properties for a test and one to validate task-specific properties.
> What are your thoughts on this?

Yup, this is a correct observation. We usually want to test both the graph and some tasks. The graph props are more or less static and they may go away at some point (see bug 1259627).

> The other thing I have noticed is that in certain tasks the
> task/payload/properties/build_number field is sometimes the int
> representation 3, and sometimes the string representation '3'. Not sure if
> this is expected but I figured I would mention.

Let's make it an integer everywhere.
Flags: needinfo?(rail)
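Normalizing build_number to an int everywhere could be handled at validation time, similar in spirit to voluptuous's Coerce(int). A minimal sketch of the idea (the function name is illustrative, not from the releasetasks code):

```python
def coerce_build_number(value):
    """Accept 3 or '3' and always return the int 3; reject non-integers."""
    try:
        return int(value)
    except (TypeError, ValueError):
        raise ValueError('build_number %r is not an integer' % (value,))

assert coerce_build_number(3) == 3
assert coerce_build_number('3') == 3
```

With voluptuous, declaring the field as Coerce(int) in the schema would validate both representations while producing a consistent int.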
Connor: any update on progress here?
Flags: needinfo?(csheehan)
Hey Coop, so far I have been working on moving tests into voluptuous, with my plan being to move them to Rail's verify() method when they are all implemented.

One of the problems I'm having is that certain styles of tests that are common to the test suite (like a test that basically asserts "the scopes key cannot be in the task") are not officially supported by default in voluptuous. The workaround is to create a function and decorate it with "truth"; the function takes in the data you wish to validate and returns the validated output or throws a validation error. It can only take one argument, which means it can't be an instance method (due to the self argument) and has to be a static method.

So far the implementation has reduced the number of tests, as well as sped up the test suite (by ~10s on my machine). :) I pushed the changes I've made so far to my GitHub fork under the validate_graph_kwargs branch.

Rail, I noticed in the make_task_graph method there is a TODO saying we need some validation of graph kwargs - is that something we want done via voluptuous? So when someone provides arguments to make_task_graph that would create a broken graph, it would throw an error? Just an idea. :)
Flags: needinfo?(csheehan) → needinfo?(rail)
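The truth-decorator workaround described above can be emulated without the voluptuous dependency. This sketch mimics the behavior of voluptuous's `truth` decorator (a falsy return makes validation fail) and shows both the single-argument static style and the "validator generator" style; the validator names are illustrative.

```python
def truth(f):
    """Wrap a predicate: return the value if f(value) is truthy, else raise."""
    def wrapper(value):
        if not f(value):
            raise ValueError('%s failed for %r' % (f.__name__, value))
        return value
    return wrapper

# Style 1: a single-argument validator (would be a staticmethod in a class,
# since truth-wrapped functions take exactly one argument).
@truth
def no_scopes_in_task(task):
    return 'scopes' not in task

# Style 2: a generator that returns a truth-wrapped validator, so it can
# capture extra context via the closure (or self, as an instance method).
def key_absent(key):
    @truth
    def validator(task):
        return key not in task
    return validator

assert no_scopes_in_task({'payload': {}}) == {'payload': {}}
assert key_absent('scopes')({'payload': {}}) == {'payload': {}}
```

The closure style is more flexible, which is why only it works for some tests: the forbidden key can be decided per test case rather than hard-coded.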
(In reply to Connor Sheehan [:sheehan] from comment #17)
> Rail, I noticed in the make_task_graph method there is a TODO saying we need
> some validation of graph kwargs - is that something we want done via
> voluptuous? So when someone provides arguments to make_task_graph that would
> create a broken graph, it would throw an error? Just an idea. :)

Not a bad idea, actually! Even though we try to explicitly pass every single argument, there is no compile-time check; we always fail at run time. So yeah, validating that using voluptuous should make things easier.
Flags: needinfo?(rail)
This PR ports the releasetasks tests to use voluptuous for validation. This improves readability and reduces the number of tests by ~2/3 (consequently reducing the testing time by the same margin).

Tasks to be tested are passed to verify(data, *validators) along with task schemas and validators. This method will throw an error with humanized output if the test fails.

Some tests which cannot be explicitly included in the test schema have had validators created for them using two methods. The first method is to create a staticmethod and decorate it with the 'truth' decorator. The method must be static because validator functions can only take a single argument, and an instance method would require the 'self' argument as well as the task to validate. The second method is to create an instance method that returns a validator (i.e. a function wrapped with truth). There are some tests for which only the second method will work. test_bb_update_verify.py has an example of each method.

Before we merge, I was wondering if it would be worth scrapping the staticmethod version and using only the 'validator generator' method, for consistency. :)
Attachment #8791837 - Attachment is obsolete: true
Attachment #8804851 - Flags: review?(rail)
Attachment #8804851 - Flags: review?(rail) → review+
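The verify(data, *validators) pattern the PR describes can be sketched in a few lines: each validator is a callable that returns the validated data or raises on failure, and verify aggregates the failures into one readable message. This is an illustrative sketch, not the releasetasks implementation.

```python
def verify(data, *validators):
    """Run each validator against data; raise one aggregated error on failure."""
    errors = []
    for validate in validators:
        try:
            validate(data)
        except ValueError as e:
            errors.append(str(e))
    if errors:
        # Aggregate all failures into a single human-readable message.
        raise AssertionError('validation failed:\n' + '\n'.join(errors))
    return data

# A hypothetical validator in the single-argument style.
def has_provisioner(task):
    if 'provisionerId' not in task:
        raise ValueError('task is missing provisionerId')
    return task

task = {'provisionerId': 'aws-provisioner-v1'}
assert verify(task, has_provisioner) is task
```

Collecting all errors before raising (rather than stopping at the first) is what makes the aggregated output readable when several schema bits fail at once.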
Is there anything else left here?
I suppose that depends on whether you would still want the TODO at https://github.com/mozilla/releasetasks/blob/master/releasetasks/__init__.py#L22 handled with voluptuous validation. I started working on it by gathering all the possible kwargs and looking into args that are required when certain flags are set (for example if publish_to_balrog_channels is set, then balrog_api_root must also be set) but I was causing some tests to fail. We could always move that to a new bug if you want to close this one. :)
Flags: needinfo?(rail)
Up to you, I was just curious what was left.
Flags: needinfo?(rail)
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED