Closed Bug 1398799 Opened 7 years ago Closed 6 years ago

Stop using patcher configs for update verify

Categories

(Release Engineering :: Release Automation: Other, enhancement)

enhancement
Not set
normal

Tracking

(firefox60 fixed)

RESOLVED FIXED
Tracking Status
firefox60 --- fixed

People

(Reporter: catlee, Assigned: bhearsum)

References

Details

Attachments

(7 files, 9 obsolete files)

1.27 KB, patch
mozilla
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
3.60 KB, patch
mozilla
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
11.91 KB, patch
mozilla
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
45 bytes, text/x-phabricator-request
gps
: review+
bhearsum
: checked-in+
Details | Review
657 bytes, patch
nthomas
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
40.58 KB, patch
mozilla
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
38.83 KB, patch
nthomas
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
      No description provided.
The only reason we keep the patcher configs around is that we use them to generate update verify configs. All other info (partials, URL templates, etc) can be generated on the fly from the input we get from ship-it. The build IDs are not used in the top level Balrog data (because they can be platform specific) and are needed only to generate the update verify configs. If we can optimize the update verification process, we can easily drop the patcher configs. \o/
Depends on: 1398814
In case anyone else is confused: the reason this blocks migrating off of Buildbot is that we currently update and push patcher config changes to the "tools" repo as part of release promotion. We can't do this from Taskcluster, so we need to change how these tests work, or at the very least, how they get their data.
I see that this bug currently wants us to stop using Balrog for update verify. Because our current focus is getting off of Buildbot, and because reworking update verify not to depend on Balrog doesn't block that, I'm removing that dependency. I think our best path forward for now is to continue using update verify + update verify configs, but generate and place the configs elsewhere. Once we've done that, we can come back around to reworking update verify itself, if that's still a priority. If anyone disagrees with this approach, feel free to speak up.

(In reply to Rail Aliiev [:rail] ⌚️ET (away until Feb 5) from comment #1)
> The only reason we keep the patcher configs around is that we use them to
> generate update verify configs. All other info (partials, URL templates,
> etc) can be generated on the fly from the input we get from ship-it. The
> build IDs are not used in the top level Balrog data (because they can be
> platform specific) and are needed only to generate the update verify
> configs. If we can optimize the update verification process, we can easily
> drop the patcher configs. \o/

I went through the update verify config generation code. Here's a list of all the parts of the patcher-config we currently rely on when generating update verify configs, annotated with alternative sources that are available at graph generation time (where they exist):
- appName (update verify task definition)
- current-update.channel (update verify task definition)
- current-update.from (parameters.yml - release_history)
- current-update.partials (parameters.yml - release_history)
- current-update.testchannel (update verify task definition)
- current-update.to (action input)
- past-update (ship it for most of this, but we also need to know what the last watershed is)
- release.$to_version.exceptions (should be able to find this in l10n-changesets.json)
- release.$to_version.locales (l10n-changesets.json)
- release.$to_version.platforms (update verify task definition)
- release.$to_version.prettyVersion (action input)
- release.$to_version.version (action input)
- release.$from_version.exceptions (ship-it)
- release.$from_version.extension-version (ship-it)
- release.$from_version.locales (ship-it, after we backfill old l10n-changesets)
- release.$from_version.mar-channel-ids (can be guessed based on $from_version pattern, only applicable to RC versions in beta update verify configs)
- release.$from_version.platforms (UNKNOWN)

So, we have an alternative source for almost everything (after we backfill ship it with locale information). The only question is how we find the watershed, to know how far back in time to go. We might be able to hardcode this somewhere for the time being, or we could fix bug 1311001 - which wants Ship It to be the source of truth for watersheds.
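
In the interim, such a hardcode could be as simple as a per-branch mapping. A minimal sketch (the mapping name and lookup are hypothetical; the values are just illustrative):

    # Hypothetical interim hardcode until Ship It (bug 1311001) becomes the
    # source of truth for watersheds.
    LAST_WATERSHED = {
        "mozilla-beta": "56.0b10",
        "mozilla-release": "57.0",
    }

    def last_watershed(branch):
        return LAST_WATERSHED.get(branch)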
Assignee: nobody → bhearsum
No longer depends on: 1398814
See Also: → 1311001
This script is a proof of concept for generating update verify configs without patcher. I managed to regenerate the linux64 configs for each type of release (beta, release, esr, devedition) with (almost) identical results to the current ones. The only differences were around watersheds, plus some minor differences caused by a recently added manual hack for devedition (https://github.com/mozilla/build-tools/commit/97e98527200ea0d3a280a32c4b350bf98fd307ad#diff-9827c733082717740c4feea3e57f2803). If that sort of thing comes up again after the patcher configs are gone, we'd probably hack it into this script instead.

This script sources the required information from various places in lieu of patcher-configs:
* Whether or not a release should be tested is decided based on whether or not it was shipped.
* Locale information and appVersion are pulled from the tree, based on the mozillaRevision from ship it. This is a bit slow, but the tree is the RoR, so I think this is the right place to do it.
* Build IDs are currently pulled from FTP. This is probably not an acceptable way to find them, because we don't keep them there forever. As far as I know, patcher configs are the _only_ place where we store build IDs for the long term. We could decide to keep them forever on FTP, or maybe we should post them back to ship it as part of the ship phase. Or something else. (A rough sketch of both lookups follows this list.)
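
Roughly, the two lookups above amount to something like this (a sketch only; the URLs, file names, and helper names are assumptions, not necessarily what the script does):

    import requests

    HG = "https://hg.mozilla.org"
    ARCHIVE = "https://archive.mozilla.org/pub"

    def get_shipped_locales(repo_path, revision):
        # repo_path e.g. "releases/mozilla-beta"; revision is ship-it's mozillaRevision
        url = "{}/{}/raw-file/{}/browser/locales/shipped-locales".format(
            HG, repo_path, revision)
        return requests.get(url).text

    def get_build_id(product, version, build_number, info_platform):
        # Assumes the <platform>_info.txt files in the candidates build dir stick around.
        url = "{}/{}/candidates/{}-candidates/build{}/{}_info.txt".format(
            ARCHIVE, product, version, build_number, info_platform)
        for line in requests.get(url).text.splitlines():
            if line.startswith("buildID="):
                return line.split("=", 1)[1]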

Setting mar-channel-ids and dealing with watersheds (aka, the last version cut-off) are not implemented yet. I still think the former is probably going to be a version-based hardcode in the script, and the latter belongs in ship it.
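
For illustration, the version-based hardcode could look something like this (a sketch; the function name is made up, and the channel IDs match what the beta configs use for RCs today):

    import re

    def guess_mar_channel_ids(from_version, channel):
        # RC-style versions (e.g. 57.0, 57.0.1) in a beta update verify config carry
        # both the beta and release MAR channel IDs.
        if channel.startswith("beta") and re.match(r"^\d+\.\d+(\.\d+)?$", from_version):
            return "firefox-mozilla-beta,firefox-mozilla-release"
        return None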

As it's written now the script is very slow, largely because of all the requests it makes to ship-it, hg, and archive. We can improve this with some parallelization, but we can also greatly reduce the number of requests to ship-it by adding some better endpoints. E.g., an endpoint that returns full details of a release, with the ability to filter (and limit to things past the last watershed), would cut us down to one request to ship-it.

I'm also not sure if this is the right place to implement this. I mostly wrote it as an augmented version of the existing update verify config script to expediently write this proof of concept. I can see an argument for moving this in the tree and/or to mozharness, but it would require moving or reimplementing a whole bunch of the helper functions we use. I'm not sure it's worth it at the moment.
I've decided to take the leap and move the update verify config creation into the tree/mozharness. It depends on some code that I don't think belongs in mozharness. After some discussion with RelEng and gps, python/mozrelease seems like the best place for this code right now. At the same time, I've modernized it by removing unused features and using newer style (like {} string formatting). Notable changes:
* getPlatformLocales now operates on one platform at a time. It looks to me like we have no use cases for getting the set of locales for multiple platforms at once (and if we do, callers can call the function multiple times). (See the usage sketch after this list.)
* Drop support for ftp_root and nightlyDir in candidates + releases dir construction functions.
* Drop support for unsigned paths and secondary filenames in installer path generation function. I also renamed this from makeReleaseRepackUrls -> getReleaseInstallerPath, which I think is more reflective of what it does (it's not repack specific, nor does it generate URLs).
* Import parts of platforms.py, and use ftp platform as the "base" (instead of buildbot platform).
* Import the UpdateVerifyConfig class (unchanged).
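
A short usage sketch of the relocated helpers, based on the signatures in the attached patch (the module layout and example values are assumptions):

    from mozrelease.l10n import getPlatformLocales
    from mozrelease.paths import getCandidatesDir, getReleaseInstallerPath

    shipped_locales = open("shipped-locales").read()
    win32_locales = getPlatformLocales(shipped_locales, "win32")

    # Candidates directory for a hypothetical 59.0b5 build1
    candidates_dir = getCandidatesDir("firefox", "59.0b5", 1,
                                      protocol="https", server="archive.mozilla.org")
    # Relative installer path within a release directory
    installer_path = getReleaseInstallerPath("firefox", "Firefox", "59.0b5", "win32")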

I realize we haven't had a really in depth or wide discussion about this, so feel free to object on a fundamental level.
Attachment #8943005 - Attachment is obsolete: true
Attachment #8944071 - Flags: feedback?(nthomas)
Attachment #8944071 - Flags: feedback?(aki)
This is similar to my earlier patch, except it's a mozharness script and I've solved the mar channel id issue. It pulls its information from the same place as the earlier script (a hodgepodge of ship-it, hg, and ftp). Watersheds are still not dealt with, but I'm more convinced that bug 1311001 is the right fix.

You'll notice that I'm having all necessary information passed in instead of in a mozharness config. I did this because we seem to prefer to put as much information as possible in kind.ymls these days. This has felt like a good direction to me (it's more explicit, and easier to see what's going on with taskgraph-diff), but I'm open to change here.

My intention here is for us to have one task per platform that calls this script, and have that be upstream of the update verify jobs. The new config will be attached as an artifact so that the update verify jobs can download it.
Attachment #8944073 - Flags: feedback?(nthomas)
Attachment #8944073 - Flags: feedback?(aki)
Comment on attachment 8944071 [details] [diff] [review]
move some build/tools libraries in tree + modernize them

>diff --git a/python/mozrelease/mozrelease/__init__.py b/python/mozrelease/mozrelease/__init__.py
>new file mode 100644
>index 000000000000..e69de29bb2d1
>diff --git a/python/mozrelease/mozrelease/chunking.py b/python/mozrelease/mozrelease/chunking.py
>new file mode 100644
>index 000000000000..c678647b8e72
>--- /dev/null
>+++ b/python/mozrelease/mozrelease/chunking.py
>@@ -0,0 +1,22 @@
>+from copy import copy
>+
>+
>+class ChunkingError(Exception):
>+    pass
>+
>+
>+def getChunk(things, chunks, thisChunk):
>+    if thisChunk > chunks:
>+        raise ChunkingError("thisChunk (%d) is greater than total chunks (%d)" %
>+                           (thisChunk, chunks))
>+    possibleThings = copy(things)
>+    nThings = len(possibleThings)
>+    for c in range(1, chunks + 1):
>+        n = nThings / chunks
>+        # If our things aren't evenly divisible by the number of chunks
>+        # we need to append one more onto some of them
>+        if c <= (nThings % chunks):
>+            n += 1
>+        if c == thisChunk:
>+            return possibleThings[0:n]
>+        del possibleThings[0:n]

Do we need this outside of mozharness?
We already have ChunkingMixin there:
https://dxr.mozilla.org/mozilla-central/source/testing/mozharness/mozharness/base/parallel.py#12

>diff --git a/python/mozrelease/mozrelease/l10n.py b/python/mozrelease/mozrelease/l10n.py
>new file mode 100644
>index 000000000000..137ffc0c00fd
>--- /dev/null
>+++ b/python/mozrelease/mozrelease/l10n.py
>@@ -0,0 +1,15 @@
>+from .platforms import shippedLocales2ftp
>+
>+
>+def getPlatformLocales(shipped_locales, platform):
>+    platform_locales = []
>+    for line in shipped_locales.splitlines():
>+        entry = line.strip().split()
>+        locale = entry[0]
>+        if len(entry) > 1:
>+            for sl_platform in entry[1:]:
>+                if platform in shippedLocales2ftp(sl_platform):
>+                    platform_locales.append(locale)
>+        else:
>+            platform_locales.append(locale)
>+    return platform_locales

This seems like a dup of https://dxr.mozilla.org/mozilla-central/source/testing/mozharness/mozharness/mozilla/l10n/locales.py#122 , except it can't handle l10n-changesets.json ?

>diff --git a/python/mozrelease/mozrelease/paths.py b/python/mozrelease/mozrelease/paths.py
>new file mode 100644
>index 000000000000..c315983210a5
>--- /dev/null
>+++ b/python/mozrelease/mozrelease/paths.py
>@@ -0,0 +1,86 @@
>+from urlparse import urlunsplit
>+
>+product_ftp_map = {
>+    "fennec": "mobile",
>+}
>+
>+def product2ftp(product):
>+    return product_ftp_map.get(product, product)
>+
>+
>+def getCandidatesDir(product, version, buildNumber, protocol=None, server=None):
>+    if protocol:
>+        assert server is not None, "server is required with protocol"
>+
>+    product = product2ftp(product)
>+    directory = "/{}/candidates/{}-candidates/build{}".format(
>+        product, str(version), str(buildNumber)
>+    )
>+
>+    if protocol:
>+        return urlunsplit((protocol, server, directory, None, None))
>+    else:
>+        return directory
>+
>+
>+def getReleasesDir(product, version=None, protocol=None, server=None):
>+    if protocol:
>+        assert server is not None, "server is required with protocol"
>+
>+    directory = "/{}/releases".format(product)
>+    if version:
>+        directory = "{}/{}".format(directory, version)
>+
>+    if protocol:
>+        return urlunsplit((protocol, server, directory, None, None))
>+    else:
>+        return directory
>+
>+
>+def getReleaseInstallerPath(productName, brandName, version, platform,
>+                            locale='en-US'):
>+    if productName not in ('fennec',):
>+        if platform.startswith('linux'):
>+            filename = '%s.tar.bz2' % productName
>+            return '/'.join([p.strip('/') for p in [
>+                platform, locale, '%s-%s.tar.bz2' % (productName, version)]])
>+        elif platform.startswith('macosx'):
>+            filename = '%s.dmg' % productName
>+            return '/'.join([p.strip('/') for p in [
>+                platform, locale, '%s %s.dmg' % (brandName, version)]])
>+        elif platform.startswith('win'):
>+            filename = '%s.zip' % productName
>+            instname = '%s.exe' % productName
>+            prefix = []
>+            prefix.extend([platform, locale])
>+            return '/'.join(
>+                [p.strip('/') for p in
>+                 prefix + ['%s Setup %s.exe' % (brandName, version)]]
>+            )
>+        else:
>+            raise "Unsupported platform"
>+    else:
>+        if platform.startswith('android'):
>+            filename = '%s-%s.%s.android-arm.apk' % (
>+                productName, version, locale)
>+            prefix = []
>+            prefix.extend([platform, locale])
>+            return '/'.join(
>+                [p.strip('/') for p in
>+                 prefix + [filename]]
>+            )
>+        elif platform == 'linux':
>+            filename = '%s.tar.bz2' % productName
>+            return '/'.join([p.strip('/') for p in [
>+                platform, locale, '%s-%s.%s.linux-i686.tar.bz2' % (productName, version, locale)]])
>+        elif platform == 'macosx':
>+            filename = '%s.dmg' % productName
>+            return '/'.join([p.strip('/') for p in [
>+                platform, locale, '%s-%s.%s.mac.dmg' % (brandName, version, locale)]])
>+        elif platform == 'win32':
>+            filename = '%s.zip' % productName
>+            return '/'.join([p.strip('/') for p in [
>+                platform, locale,
>+                '%s-%s.%s.win32.zip' % (productName, version, locale)]])
>+        else:
>+            raise "Unsupported platform"

In bug 1331141, we were planning on moving the beetmover manifests in-tree, which should describe these paths.

>diff --git a/python/mozrelease/mozrelease/platforms.py b/python/mozrelease/mozrelease/platforms.py
>new file mode 100644
>index 000000000000..099f153c8d18
>--- /dev/null
>+++ b/python/mozrelease/mozrelease/platforms.py
>@@ -0,0 +1,58 @@
>+# ftp -> update platform map
>+update_platform_map = {
>+    "android": ["Android_arm-eabi-gcc3"],
>+    "android-api-11": ["Android_arm-eabi-gcc3"],
>+    "android-api-15": ["Android_arm-eabi-gcc3"],
>+    "android-api-15-old-id": ["Android_arm-eabi-gcc3"],
>+    "android-api-16": ["Android_arm-eabi-gcc3"],
>+    "android-api-16-old-id": ["Android_arm-eabi-gcc3"],
>+    "android-x86": ["Android_x86-gcc3"],
>+    "android-x86-old-id": ["Android_x86-gcc3"],
>+    "android-aarch64": ["Android_aarch64-gcc3"],
>+    "linux-i686": ["Linux_x86-gcc3"],
>+    "linux-x86_64": ["Linux_x86_64-gcc3"],
>+    "mac": ["Darwin_x86_64-gcc3-u-i386-x86_64",  # The main platofrm
>+            "Darwin_x86-gcc3-u-i386-x86_64",
>+            # We don't ship builds with these build targets, but some users
>+            # modify their builds in a way that has them report like these.
>+            # See bug 1071576 for details.
>+            "Darwin_x86-gcc3", "Darwin_x86_64-gcc3"],
>+    "win32": ["WINNT_x86-msvc", "WINNT_x86-msvc-x86", "WINNT_x86-msvc-x64"],
>+    "win64": ["WINNT_x86_64-msvc", "WINNT_x86_64-msvc-x64"],
>+}
>+
>+# ftp -> shipped locales map
>+sl_platform_map = {
>+    "linux-i686": "linux",
>+    "linux-x86_64": "linux",
>+    "mac": "osx",
>+    "win32": "win32",
>+    "win64": "win64",
>+}
>+
>+# ftp -> info file platform map
>+info_files_platform_map = {
>+    "linux-i686": "linux",
>+    "linux-x86_64": "linux64",
>+    "mac": "macosx64",
>+    "win32": "win32",
>+    "win64": "win64",
>+}
>+
>+def ftp2updatePlatforms(platform):
>+    return update_platform_map.get(platform, platform)
>+
>+def ftp2shippedLocales(platform):
>+    return sl_platform_map.get(platform, platform)
>+
>+def shippedLocales2ftp(platform):
>+    matches = []
>+    try:
>+        [matches.append(
>+            k) for k, v in sl_platform_map.iteritems() if v == platform][0]
>+        return matches
>+    except IndexError:
>+        return [platform]
>+
>+def ftp2infoFile(platform):
>+    return info_files_platform_map.get(platform, platform)


In bug 1331141, we were planning on moving the beetmover manifests in-tree, which should describe these platforms as well.

>diff --git a/python/mozrelease/mozrelease/update_verify.py b/python/mozrelease/mozrelease/update_verify.py
>new file mode 100644
>index 000000000000..c5e3bf8de0f2
>--- /dev/null
>+++ b/python/mozrelease/mozrelease/update_verify.py
>@@ -0,0 +1,188 @@
>+import os
>+import re
>+
>+from .chunking import getChunk
>+
>+
>+class UpdateVerifyError(Exception):
>+    pass
>+
>+
>+class UpdateVerifyConfig(object):
>+    comment_regex = re.compile("^#")
>+    key_write_order = ("release", "product", "platform", "build_id", "locales",
>+                       "channel", "patch_types", "from", "aus_server",
>+                       "ftp_server_from", "ftp_server_to", "to",
>+                       "mar_channel_IDs", "to_build_id", "to_display_version",
>+                       "to_app_version", "updater_package")
>+    global_keys = ("product", "channel", "aus_server", "to", "to_build_id",
>+                   "to_display_version", "to_app_version")
>+    release_keys = ("release", "build_id", "locales", "patch_types", "from",
>+                    "ftp_server_from", "ftp_server_to", "mar_channel_IDs",
>+                    "platform", "updater_package")
>+    first_only_keys = ("from", "aus_server", "to", "to_build_id",
>+                       "to_display_version", "to_app_version")
>+    compare_attrs = global_keys + ("releases",)
>+
>+    def __init__(self, product=None, channel=None,
>+                 aus_server=None, to=None, to_build_id=None,
>+                 to_display_version=None, to_app_version=None):
>+        self.product = product
>+        self.channel = channel
>+        self.aus_server = aus_server
>+        self.to = to
>+        self.to_build_id = to_build_id
>+        self.to_display_version = to_display_version
>+        self.to_app_version = to_app_version
>+        self.releases = []
>+
>+    def __eq__(self, other):
>+        self_list = [getattr(self, attr) for attr in self.compare_attrs]
>+        other_list = [getattr(other, attr) for attr in self.compare_attrs]
>+        return self_list == other_list
>+
>+    def __ne__(self, other):
>+        return not self.__eq__(other)
>+
>+    def _parseLine(self, line):
>+        entry = {}
>+        items = re.findall("\w+=[\"'][^\"']*[\"']", line)
>+        for i in items:
>+            m = re.search(
>+                "(?P<key>\w+)=[\"'](?P<value>.+)[\"']", i).groupdict()
>+            if m["key"] not in self.global_keys and m["key"] not in self.release_keys:
>+                raise UpdateVerifyError(
>+                    "Unknown key '%s' found on line:\n%s" % (m["key"], line))
>+            if m["key"] in entry:
>+                raise UpdateVerifyError("Multiple values found for key '%s' on line:\n%s" % (m["key"], line))
>+            entry[m["key"]] = m["value"]
>+        if not entry:
>+            raise UpdateVerifyError("No parseable data in line '%s'" % line)
>+        return entry
>+
>+    def _addEntry(self, entry, first):
>+        releaseKeys = {}
>+        for k, v in entry.items():
>+            if k in self.global_keys:
>+                setattr(self, k, entry[k])
>+            elif k in self.release_keys:
>+                # "from" is reserved in Python
>+                if k == "from":
>+                    releaseKeys["from_path"] = v
>+                else:
>+                    releaseKeys[k] = v
>+        self.addRelease(**releaseKeys)
>+
>+    def read(self, config):
>+        f = open(config)
>+        # Only the first non-comment line of an update verify config should
>+        # have a "from" and"ausServer". Ignore any subsequent lines with them.
>+        first = True
>+        for line in f.readlines():
>+            # Skip comment lines
>+            if self.comment_regex.search(line):
>+                continue
>+            self._addEntry(self._parseLine(line), first)
>+            first = False
>+
>+    def write(self, fh):
>+        first = True
>+        for releaseInfo in self.releases:
>+            for key in self.key_write_order:
>+                if key in self.global_keys and (first or key not in self.first_only_keys):
>+                    value = getattr(self, key)
>+                elif key in self.release_keys:
>+                    value = releaseInfo[key]
>+                else:
>+                    value = None
>+                if value is not None:
>+                    fh.write(key)
>+                    fh.write("=")
>+                    if isinstance(value, (list, tuple)):
>+                        fh.write('"%s" ' % " ".join(value))
>+                    else:
>+                        fh.write('"%s" ' % str(value))
>+            # Rewind one character to avoid having a trailing space
>+            fh.seek(-1, os.SEEK_CUR)
>+            fh.write("\n")
>+            first = False
>+
>+    def addRelease(self, release=None, build_id=None, locales=[],
>+                   patch_types=['complete'], from_path=None,
>+                   ftp_server_from=None, ftp_server_to=None,
>+                   mar_channel_IDs=None, platform=None, updater_package=None):
>+        """Locales and patch_types can be passed as either a string or a list.
>+           If a string is passed, they will be converted to a list for internal
>+           storage"""
>+        if self.getRelease(build_id, from_path):
>+            raise UpdateVerifyError("Couldn't add release identified by build_id '%s' and from_path '%s': already exists in config" % (build_id, from_path))
>+        if isinstance(locales, basestring):
>+            locales = sorted(list(locales.split()))
>+        if isinstance(patch_types, basestring):
>+            patch_types = list(patch_types.split())
>+        self.releases.append({
>+            "release": release,
>+            "build_id": build_id,
>+            "locales": locales,
>+            "patch_types": patch_types,
>+            "from": from_path,
>+            "ftp_server_from": ftp_server_from,
>+            "ftp_server_to": ftp_server_to,
>+            "mar_channel_IDs": mar_channel_IDs,
>+            "platform": platform,
>+            "updater_package": updater_package,
>+        })
>+
>+    def addLocaleToRelease(self, build_id, locale, from_path=None):
>+        r = self.getRelease(build_id, from_path)
>+        if not r:
>+            raise UpdateVerifyError("Couldn't add '%s' to release identified by build_id '%s' and from_path '%s': '%s' doesn't exist in this config." % (locale, build_id, from_path, build_id))
>+        r["locales"].append(locale)
>+        r["locales"] = sorted(r["locales"])
>+
>+    def getRelease(self, build_id, from_path):
>+        for r in self.releases:
>+            if r["build_id"] == build_id and r["from"] == from_path:
>+                return r
>+        return {}
>+
>+    def getFullReleaseTests(self):
>+        return [r for r in self.releases if r["from"] is not None]
>+
>+    def getQuickReleaseTests(self):
>+        return [r for r in self.releases if r["from"] is None]
>+
>+    def getChunk(self, chunks, thisChunk):
>+        fullTests = []
>+        quickTests = []
>+        for test in self.getFullReleaseTests():
>+            for locale in test["locales"]:
>+                fullTests.append([test["build_id"], locale, test["from"]])
>+        for test in self.getQuickReleaseTests():
>+            for locale in test["locales"]:
>+                quickTests.append([test["build_id"], locale, test["from"]])
>+        allTests = getChunk(fullTests, chunks, thisChunk)
>+        allTests.extend(getChunk(quickTests, chunks, thisChunk))
>+
>+        newConfig = UpdateVerifyConfig(self.product, self.channel,
>+                                       self.aus_server, self.to,
>+                                       self.to_build_id,
>+                                       self.to_display_version,
>+                                       self.to_app_version)
>+        for t in allTests:
>+            build_id, locale, from_path = t
>+            if from_path == "None":
>+                from_path = None
>+            r = self.getRelease(build_id, from_path)
>+            try:
>+                newConfig.addRelease(r["release"], build_id, locales=[],
>+                                     ftp_server_from=r["ftp_server_from"],
>+                                     ftp_server_to=r["ftp_server_to"],
>+                                     patch_types=r["patch_types"], from_path=from_path,
>+                                     mar_channel_IDs=r["mar_channel_IDs"],
>+                                     platform=r["platform"],
>+                                     updater_package=r["updater_package"])
>+            except UpdateVerifyError:
>+                pass
>+            newConfig.addLocaleToRelease(build_id, locale, from_path)
>+        return newConfig

This may be all we need; it could be rewritten as a mozharness script.
Attachment #8944071 - Flags: feedback?(aki) → feedback+
Comment on attachment 8944073 [details] [diff] [review]
mozharness script to create an update verify config

I think this works. And you don't have to make the above changes, though it does appear we may want to obsolete the above module in the not-too-distant future.
Attachment #8944073 - Flags: feedback?(aki) → feedback+
(In reply to Aki Sasaki [:aki] from comment #7)
> Comment on attachment 8944071 [details] [diff] [review]
> move some build/tools libraries in tree + modernize them
> 
> >diff --git a/python/mozrelease/mozrelease/__init__.py b/python/mozrelease/mozrelease/__init__.py
> >new file mode 100644
> >index 000000000000..e69de29bb2d1
> >diff --git a/python/mozrelease/mozrelease/chunking.py b/python/mozrelease/mozrelease/chunking.py
> >new file mode 100644
> >index 000000000000..c678647b8e72
> >--- /dev/null
> >+++ b/python/mozrelease/mozrelease/chunking.py
> >@@ -0,0 +1,22 @@
> >+from copy import copy
> >+
> >+
> >+class ChunkingError(Exception):
> >+    pass
> >+
> >+
> >+def getChunk(things, chunks, thisChunk):
> >+    if thisChunk > chunks:
> >+        raise ChunkingError("thisChunk (%d) is greater than total chunks (%d)" %
> >+                           (thisChunk, chunks))
> >+    possibleThings = copy(things)
> >+    nThings = len(possibleThings)
> >+    for c in range(1, chunks + 1):
> >+        n = nThings / chunks
> >+        # If our things aren't evenly divisible by the number of chunks
> >+        # we need to append one more onto some of them
> >+        if c <= (nThings % chunks):
> >+            n += 1
> >+        if c == thisChunk:
> >+            return possibleThings[0:n]
> >+        del possibleThings[0:n]
> 
> Do we need this outside of mozharness?
> We already have ChunkingMixin there:
> https://dxr.mozilla.org/mozilla-central/source/testing/mozharness/mozharness/
> base/parallel.py#12

If the UpdateVerifyConfig class continues to exist outside of mozharness, we need this (unless we remove its specialized getChunk method).
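
For reference, a quick illustration of how the helper quoted above behaves (Python 2 integer division assumed, as in the patch):

    from mozrelease.chunking import getChunk

    tests = ["56.0b1/de", "56.0b1/fr", "56.0b2/de", "56.0b2/fr", "56.0b3/de"]
    getChunk(tests, 2, 1)  # -> first 3 items (the remainder goes to the earliest chunks)
    getChunk(tests, 2, 2)  # -> remaining 2 items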

> >diff --git a/python/mozrelease/mozrelease/l10n.py b/python/mozrelease/mozrelease/l10n.py
> >new file mode 100644
> >index 000000000000..137ffc0c00fd
> >--- /dev/null
> >+++ b/python/mozrelease/mozrelease/l10n.py
> >@@ -0,0 +1,15 @@
> >+from .platforms import shippedLocales2ftp
> >+
> >+
> >+def getPlatformLocales(shipped_locales, platform):
> >+    platform_locales = []
> >+    for line in shipped_locales.splitlines():
> >+        entry = line.strip().split()
> >+        locale = entry[0]
> >+        if len(entry) > 1:
> >+            for sl_platform in entry[1:]:
> >+                if platform in shippedLocales2ftp(sl_platform):
> >+                    platform_locales.append(locale)
> >+        else:
> >+            platform_locales.append(locale)
> >+    return platform_locales
> 
> This seems like a dup of
> https://dxr.mozilla.org/mozilla-central/source/testing/mozharness/mozharness/
> mozilla/l10n/locales.py#122 , except it can't handle l10n-changesets.json ?

Looks like there's some overlap, except the latter doesn't handle platform exceptions in non-json shipped-locales files. We can probably switch to l10n-changesets.json for update verify, and get rid of this. I'll look into that.
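
If we do switch, the per-platform locale lookup could be as simple as this (a sketch, assuming l10n-changesets.json maps locale -> {"revision": ..., "platforms": [...]}):

    import json

    def platform_locales_from_changesets(changesets_path, platform):
        with open(changesets_path) as f:
            changesets = json.load(f)
        return sorted(
            locale for locale, info in changesets.items()
            if platform in info.get("platforms", [])
        )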



> >+def getReleaseInstallerPath(productName, brandName, version, platform,
> 
> In bug 1331141, we were planning on moving the beetmover manifests in-tree,
> which should describe these paths.

We have to generate candidates dir + file path for the current release, and releases dir + file path for all previous releases. Will in-tree beetmover manifests cover both of these cases, and be accessible in Python code?

> >diff --git a/python/mozrelease/mozrelease/platforms.py b/python/mozrelease/mozrelease/platforms.py
> >new file mode 100644
> >index 000000000000..099f153c8d18
> >--- /dev/null
> >+++ b/python/mozrelease/mozrelease/platforms.py
> >@@ -0,0 +1,58 @@
> 
> 
> In bug 1331141, we were planning on moving the beetmover manifests in-tree,
> which should describe these platforms as well.

It might cover some (depending on the answer to the above question), but I don't think beetmover manifests will help us figure out the update platforms (aka build targets)? I could be wrong.

> >diff --git a/python/mozrelease/mozrelease/update_verify.py b/python/mozrelease/mozrelease/update_verify.py

> >+class UpdateVerifyConfig(object):
> This may be all we need; it could be rewritten as a mozharness script.

We talked about this on IRC a bit. I'm not convinced that rewriting this standalone class into a mozharness specific one makes sense. Mozharness can already use other Python modules, and keeping this here means that non-Mozharness scripts can also use it. This might be personal preference.


(In reply to Aki Sasaki [:aki] from comment #8)
> Comment on attachment 8944073 [details] [diff] [review]
> mozharness script to create an update verify config
> 
> I think this works. And you don't have to make the above changes, though it
> does appear we may want to obsolete the above module in the not-too-distant
> future.

This might be a wider discussion we need to have. We have lots of code that is probably moving in tree in the next few months, and we need to find the right place(s) to put it. I don't see python/mozrelease as a short term home - I see it as a long term place to put general purpose Python code that supports release infrastructure.
Responses to questions/comments, but again, if the current patch is the best or wanted approach, let's go for it.

(In reply to Ben Hearsum (:bhearsum) from comment #9)
> (In reply to Aki Sasaki [:aki] from comment #7)
> > Do we need this outside of mozharness?
> > We already have ChunkingMixin there:
> > https://dxr.mozilla.org/mozilla-central/source/testing/mozharness/mozharness/
> > base/parallel.py#12
> 
> If the UpdateVerifyConfig class continues to exist outside of mozharness, we
> need this (unless we remove its specialized getChunk method).

We could possibly pass chunk information in, rather than having the chunk information generated at the UpdateVerifyConfig level.

> > >diff --git a/python/mozrelease/mozrelease/l10n.py b/python/mozrelease/mozrelease/l10n.py
> > >new file mode 100644
> > >index 000000000000..137ffc0c00fd
> > >--- /dev/null
> > >+++ b/python/mozrelease/mozrelease/l10n.py
> > >@@ -0,0 +1,15 @@
> > >+from .platforms import shippedLocales2ftp
> > >+
> > >+
> > >+def getPlatformLocales(shipped_locales, platform):
> > >+    platform_locales = []
> > >+    for line in shipped_locales.splitlines():
> > >+        entry = line.strip().split()
> > >+        locale = entry[0]
> > >+        if len(entry) > 1:
> > >+            for sl_platform in entry[1:]:
> > >+                if platform in shippedLocales2ftp(sl_platform):
> > >+                    platform_locales.append(locale)
> > >+        else:
> > >+            platform_locales.append(locale)
> > >+    return platform_locales
> > 
> > This seems like a dup of
> > https://dxr.mozilla.org/mozilla-central/source/testing/mozharness/mozharness/
> > mozilla/l10n/locales.py#122 , except it can't handle l10n-changesets.json ?
> 
> Looks like there's some overlap, except the latter doesn't handle platform
> exceptions in non-json shipped-locales files. We can probably switch to
> l10n-changesets.json for update verify, and get rid of this. I'll look into
> that.

Switching would mean we would have the same source of truth and logic around all l10n, rather than one for all things except update verify, and one for update verify. If this is going to be a long lived solution, I think standardizing is a win. If we're going for the fastest solution, dropping in the 2nd method works.

> > >+def getReleaseInstallerPath(productName, brandName, version, platform,
> > 
> > In bug 1331141, we were planning on moving the beetmover manifests in-tree,
> > which should describe these paths.
> 
> We have to generate candidates dir + file path for the current release, and
> releases dir + file path for all previous releases. Will in-tree beetmover
> manifests cover both of these cases, and be accessible in Python code?

It's Jinja, which is usable in Python. The candidates path could be moved to an in-tree taskgraph python function. And it's a good point that we have to generate paths for all previous releases, but the tools code doesn't allow for differing path formats for previous releases either; it assumes it will stay the same. If we continue along that path, the Jinja template/python function should work as well. If we start changing paths, I think both solutions have issues.

> > >diff --git a/python/mozrelease/mozrelease/platforms.py b/python/mozrelease/mozrelease/platforms.py
> > >new file mode 100644
> > >index 000000000000..099f153c8d18
> > >--- /dev/null
> > >+++ b/python/mozrelease/mozrelease/platforms.py
> > >@@ -0,0 +1,58 @@
> > 
> > 
> > In bug 1331141, we were planning on moving the beetmover manifests in-tree,
> > which should describe these platforms as well.
> 
> It might cover some (depending on the answer to the above question), but I
> don't think beetmover manifests will help us figure out the update platforms
> (aka build targets)? I could be wrong.

Quite possible.

> > >diff --git a/python/mozrelease/mozrelease/update_verify.py b/python/mozrelease/mozrelease/update_verify.py
> 
> > >+class UpdateVerifyConfig(object):
> > This may be all we need; it could be rewritten as a mozharness script.
> 
> We talked about this on IRC a bit. I'm not convinced that rewriting this
> standalone class into a mozharness specific one makes sense. Mozharness can
> already use other Python modules, and keeping this here means that
> non-Mozharness scripts can also use it. This might be personal preference.

With the above commentary, I think we can modify this class to take inputs like chunk and locale info, so we have less need for the other helper files. I think whether we want to go that route depends entirely on how long we expect this solution to last before we revisit, to justify the work we'd put in to rewrite.
Comment on attachment 8944073 [details] [diff] [review]
mozharness script to create an update verify config

This has changed enough locally that it's probably better to wait for the next version.
Attachment #8944073 - Attachment is obsolete: true
Attachment #8944073 - Flags: feedback?(nthomas)
Comment on attachment 8944071 [details] [diff] [review]
move some build/tools libraries in tree + modernize them

Updated version of this is coming, too.
Attachment #8944071 - Attachment is obsolete: true
Attachment #8944071 - Flags: feedback?(nthomas)
This is a combined version of the two original patches. There have been a few necessary tweaks to the mozrelease library, but I mainly changed some data sources in the name of expediency:
* Rely on product-details instead of ship-it to find past releases. This means we don't need to deal with a private API. Doing this means we need to guess at the "branch". This will break if we ever specify staging releases as past releases, but should otherwise be fine.

* Require last watershed to be passed in. This avoids the need to have a source of truth for watersheds. I consider this a temporary measure until such a thing exists.

This version of the patch uses Buildhub for buildids, but I'm wavering on using that now. There are gaps in the data they have, and catlee has convinced me that it's not a great idea to add it as a release automation dependency. For the short term, we can grab buildids from FTP - we just need to be sure not to clean them up in old candidates directories. In the medium/long term, Ship It is probably the right place.
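
For illustration, the product-details lookup amounts to something like this (a sketch; the endpoint is real, but the function and the exact filtering/version handling are simplified, not the patch's actual logic):

    import requests

    def past_releases(product="firefox"):
        data = requests.get(
            "https://product-details.mozilla.org/1.0/{}.json".format(product)).json()
        # Entries look roughly like "firefox-57.0.1": {"version": ..., "build_number": ...}
        return [(info["version"], info.get("build_number"))
                for info in data["releases"].values()]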
Attachment #8947602 - Flags: feedback?(nthomas)
There's a few things going on here:
* Adding a task for update verify config generation. This can run at the root of the promote graph, and should be run per-platform to avoid unnecessary blocking (e.g. win32 update verify waiting on linux config generation). I'm not sure how hacky my solution for this in the transform is. It almost feels like the kind-dependencies or release_deps transform should be handling this.
* Adjust update verify Docker image to support run-task. Mostly copy/paste from the lint Dockerfile.

I've still not been able to test one of the resulting tasks due to some issues with sparse checkout, but I think the overall shape of this patch is unlikely to change.
Attachment #8947604 - Flags: feedback?(aki)
Comment on attachment 8947604 [details] [diff] [review]
update verify config generator task graph stuff

(In reply to Ben Hearsum (:bhearsum) from comment #14)
> Created attachment 8947604 [details] [diff] [review]
> update verify config generator task graph stuff
> 
> There's a few things going on here:
> * Adding a task for update verify config generation. This can run at the
> root of the promote graph, and should be run per-platform to avoid
> unnecessarily blocking, eg: win32 update verify on linux config generation.
> I'm not sure how hacky my solution for this in the transform is. It almost
> feels like the kind-dependencies or release_deps transform should be
> handling this.

I think so. I still think we should get the dummy task kinds into per-build_platform chunks, and then release_deps could match on build_platform if set. We don't necessarily have to block on that here; we may want that in cleanup.

> * Adjust update verify Docker image to support run-task. Mostly copy/paste
> from the lint Dockerfile.
> 
> I've still not been able to test one of the resulting tasks due to some
> issues with sparse checkout, but I think the overall shape of this patch is
> unlikely to change.

>diff --git a/taskcluster/ci/release-update-verify-config/kind.yml b/taskcluster/ci/release-update-verify-config/kind.yml
>new file mode 100644
>index 000000000000..717a0f78b0bc
>--- /dev/null
>+++ b/taskcluster/ci/release-update-verify-config/kind.yml
>@@ -0,0 +1,127 @@
<snip>
>+   worker:
>+      docker-image:
>+         # TODO: why doesn't this work when we use update-verify?
>+         in-tree: "update-verify"

Did you figure this out?

>diff --git a/taskcluster/taskgraph/transforms/update_verify.py b/taskcluster/taskgraph/transforms/update_verify.py
>index 7000ca4d8d76..873a4499f8dd 100644
>--- a/taskcluster/taskgraph/transforms/update_verify.py
>+++ b/taskcluster/taskgraph/transforms/update_verify.py
>@@ -45,9 +45,16 @@ def add_command(config, tasks):
>                 "UNUSED UNUSED {} {}".format(
>                     total_chunks,
>                     this_chunk,
>                 )
>             ]
>             for thing in ("CHANNEL", "VERIFY_CONFIG", "BUILD_TOOLS_REPO"):
>                 thing = "worker.env.{}".format(thing)
>                 resolve_keyed_by(chunked, thing, thing, **config.params)
>+
>+            # Remove update verify config dependency for non-matching platforms
>+            for dep in chunked["dependencies"].keys():
>+                if "update-verify-config" in dep:
>+                    if not dep.endswith(chunked["attributes"]["build_platform"]):
>+                        del chunked["dependencies"][dep]

It would be better if we set the update-verify-config build_platform, and matched those directly, rather than assuming label structure. If that's a complex change, it can be a followup.
Attachment #8947604 - Flags: feedback?(aki) → feedback+
(In reply to Aki Sasaki [:aki] from comment #15)
> >diff --git a/taskcluster/ci/release-update-verify-config/kind.yml b/taskcluster/ci/release-update-verify-config/kind.yml
> >new file mode 100644
> >index 000000000000..717a0f78b0bc
> >--- /dev/null
> >+++ b/taskcluster/ci/release-update-verify-config/kind.yml
> >@@ -0,0 +1,127 @@
> <snip>
> >+   worker:
> >+      docker-image:
> >+         # TODO: why doesn't this work when we use update-verify?
> >+         in-tree: "update-verify"
> 
> Did you figure this out?

Oh, yeah. Gotta remove this comment.

> >diff --git a/taskcluster/taskgraph/transforms/update_verify.py b/taskcluster/taskgraph/transforms/update_verify.py
> >index 7000ca4d8d76..873a4499f8dd 100644
> >--- a/taskcluster/taskgraph/transforms/update_verify.py
> >+++ b/taskcluster/taskgraph/transforms/update_verify.py
> >@@ -45,9 +45,16 @@ def add_command(config, tasks):
> >                 "UNUSED UNUSED {} {}".format(
> >                     total_chunks,
> >                     this_chunk,
> >                 )
> >             ]
> >             for thing in ("CHANNEL", "VERIFY_CONFIG", "BUILD_TOOLS_REPO"):
> >                 thing = "worker.env.{}".format(thing)
> >                 resolve_keyed_by(chunked, thing, thing, **config.params)
> >+
> >+            # Remove update verify config dependency for non-matching platforms
> >+            for dep in chunked["dependencies"].keys():
> >+                if "update-verify-config" in dep:
> >+                    if not dep.endswith(chunked["attributes"]["build_platform"]):
> >+                        del chunked["dependencies"][dep]
> 
> It would be better if we set the update-verify-config build_platform, and
> matched those directly, rather than assuming label structure. If that's a
> complex change, it can be a followup.

I've already set build_platform there, so it should be a simple change to look at it instead of label.
As we've talked about - we shouldn't add these dependencies in cases where both the current and upstream task define build_platform, and they don't match. As far as I can tell, this doesn't affect any of the existing tasks - just the new update verify config kinds.

This was included in the 59.0b5 build 5 staging release on maple.
Attachment #8947604 - Attachment is obsolete: true
Attachment #8949091 - Flags: review?(aki)
This patch gets the update verify Docker image supporting run-task, more or less by copy/pasting the lint image while continuing to install the extra packages we need for update verify. We end up using this image for update verify config generation, too.
Attachment #8949093 - Flags: review?(aki)
Comment on attachment 8949091 [details] [diff] [review]
don't add depedencies when current task and upstream task don't have a matching build platform

>diff --git a/taskcluster/taskgraph/transforms/release_deps.py b/taskcluster/taskgraph/transforms/release_deps.py
>index 2c96e26ef664..bcac16b228e0 100644
>--- a/taskcluster/taskgraph/transforms/release_deps.py
>+++ b/taskcluster/taskgraph/transforms/release_deps.py
>@@ -54,16 +54,21 @@ def add_dependencies(config, jobs):
>                     continue
>                 # Skip old-id
>                 if 'old-id' in dep_task.label:
>                     continue
>             # We can only depend on tasks in the current or previous phases
>             dep_phase = dep_task.attributes.get('shipping_phase')
>             if dep_phase and PHASES.index(dep_phase) > PHASES.index(phase):
>                 continue
>+
>+            if dep_task.attributes.get("build_platform") and \
>+               job.get("attributes", {}).get("build_platform"):

I tend to indent this line 8 spaces in from the `if` line, but if it passes flake8, I'm good.

>+                if dep_task.attributes["build_platform"] != job["attributes"]["build_platform"]:
>+                    continue
>             # Add matching product tasks to deps
>             if _get_product(dep_task.task) == product or \
>                     dep_task.attributes.get('shipping_product') == product:
>                 dependencies[dep_task.label] = dep_task.label
> 
>         job.setdefault('dependencies', {}).update(dependencies)
> 
>         yield job
Attachment #8949091 - Flags: review?(aki) → review+
Comment on attachment 8949093 [details] [diff] [review]
make the update verify docker image support run-task

After grepping a few of the other dockerfiles, it does look like 500:500 is correct. This should hopefully avoid another cache issue.
Attachment #8949093 - Flags: review?(aki) → review+
This is to make it possible to do proper dependencies on the soon-to-be-added update verify config kinds (which will also be split into primary and secondary kinds). taskgraph-diff shows the `kind` changing in a few spots on the task (`.kind`, `.attributes.kind`, and `.task.tags.kind`).
Attachment #8949107 - Flags: review?(aki)
Comment on attachment 8949107 [details] [diff] [review]
split update verify primary/secondary tasks into multiple kinds

I think you may have an out-of-date target_tasks.py (the TODOs at the bottom).
Attachment #8949107 - Flags: review?(aki) → review+
Whiteboard: [keep-open][leave-open]
Pushed by bhearsum@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/3d939862ff91
don't add depedencies when current task and upstream task don't have a matching build platform. r=aki
https://hg.mozilla.org/integration/mozilla-inbound/rev/644169ace322
make the update verify docker image support run-task. r=aki
https://hg.mozilla.org/integration/mozilla-inbound/rev/4a0a54d663e0
split update verify primary/secondary tasks into multiple kinds. r=aki
Attachment #8949834 - Flags: review?(gps)
Attachment #8949091 - Flags: checked-in+
Attachment #8949093 - Flags: checked-in+
Attachment #8949107 - Flags: checked-in+
New in this version:
* Fix some issues generating mac paths
* Get rid of buildhub support; restore info file support
* Add internal_pypi.py so we can pull mozrelease package
* Fix issue where removed locales would still generate tests, causing failures
* Add support for devedition
* Set to display version correctly

Not requesting review yet because I haven't had a full test of this - just wanted to update the bug with the latest. Feel free to have a look if you want.
Attachment #8947602 - Attachment is obsolete: true
Attachment #8947602 - Flags: feedback?(nthomas)
Attached patch add update verify config tasks (obsolete) — Splinter Review
Mostly the same as the previous iteration - just adds support for new args, and fixes some small issues in existing ones.
(In reply to Pulsebot from comment #23)
> https://hg.mozilla.org/integration/mozilla-inbound/rev/4a0a54d663e0
> split update verify primary/secondary tasks into multiple kinds. r=aki

If you merge this up to 59.0 could you please include the changes from bug 1425571 at
https://hg.mozilla.org/integration/autoland/rev/6f856b4b8e27#l31.1. Thanks!
(In reply to Nick Thomas [:nthomas] (UTC+13) from comment #28)
> (In reply to Pulsebot from comment #23)
> > https://hg.mozilla.org/integration/mozilla-inbound/rev/4a0a54d663e0
> > split update verify primary/secondary tasks into multiple kinds. r=aki
> 
> If you merge this up to 59.0 could you please include the changes from bug
> 1425571 at
> https://hg.mozilla.org/integration/autoland/rev/6f856b4b8e27#l31.1. Thanks!

I think I'll let this ride with 60.0 (unless we're planning to uplift all of the no-bbb work), but noted.
This is a small change that allows us to download an update verify config at runtime instead of trying to use one from the repo.
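
In rough terms, the runtime fetch boils down to something like this (a sketch only; it assumes the upstream config-generation task's artifact URL is handed to the job via an environment variable, and the variable and file names here are assumptions):

    import os
    import requests

    def fetch_verify_config(dest="update-verify.cfg"):
        # Assumes TASKCLUSTER_VERIFY_CONFIG holds the full artifact URL produced by
        # the upstream update-verify-config task.
        url = os.environ["TASKCLUSTER_VERIFY_CONFIG"]
        with open(dest, "w") as f:
            f.write(requests.get(url).text)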
Attachment #8951666 - Flags: review?(nthomas)
I'm pretty sure this is fully working now. I've successfully run all-green betas, deveditions, and RCs with this patch:
https://tools.taskcluster.net/groups/LX8VlEhhRv6biywHRLL7_g
https://tools.taskcluster.net/groups/TQhDOqa6T0O9QQeJ7azUTw
https://tools.taskcluster.net/groups/C_A3RyNsTmK__E2SEnvuow

If you look hard enough you'll see some odd output in some of the mac update verify jobs, which I think is related to my cross platform work. I've filed that as bug 1438906 as I don't think it blocks anything here.
Attachment #8950762 - Attachment is obsolete: true
Attachment #8951668 - Flags: review?(nthomas)
Attached patch final scheduling bits (obsolete) — Splinter Review
New in this version:
* Kill VERIFY_CONFIG in update verify env vars (we use TASKCLUSTER_VERIFY_CONFIG now)
* Always use aurora-localtest as CHANNEL for devedition - no need for by-project
* Add more staging repos to by-projects. I've only added birch to the secondary ones, since that's the only place we run RCs.
* Misc. small fixes and cleanup
Attachment #8951669 - Flags: review?(aki)
Attachment #8950763 - Attachment is obsolete: true
Comment on attachment 8951669 [details] [diff] [review]
final scheduling bits

Thank you! I see some potential followups. The only thing I'd want to block landing is the old-style notifications, which should hopefully be easy to switch out.

>diff --git a/taskcluster/ci/release-secondary-update-verify/kind.yml b/taskcluster/ci/release-secondary-update-verify-config/kind.yml
>similarity index 54%
>copy from taskcluster/ci/release-secondary-update-verify/kind.yml
>copy to taskcluster/ci/release-secondary-update-verify-config/kind.yml
>index becca3c64e92..d243da69d256 100644
>--- a/taskcluster/ci/release-secondary-update-verify/kind.yml
>+++ b/taskcluster/ci/release-secondary-update-verify-config/kind.yml
>@@ -1,169 +1,159 @@
> # This Source Code Form is subject to the terms of the Mozilla Public
> # License, v. 2.0. If a copy of the MPL was not distributed with this
> # file, You can obtain one at http://mozilla.org/MPL/2.0/.
> 
> loader: taskgraph.loader.transform:loader
> 
>-kind-dependencies:
>-   - post-balrog-dummy
>-   - post-beetmover-dummy
>-   - release-updates-builder

Do we really have no deps here? If we ever combine the build and promote graphs, we'll want to make sure this happens after the appropriate build tasks. If there are no appropriate build tasks, this is correct :)

>+      last-watershed:
>+         by-project:
>+            birch: "57.0"
>+            mozilla-release: "57.0"
>+            default: null

Hm, maybe [someday] we can bump this automatically from balrog data?

>    notifications:

We probably want to delete this section, as well as the indexes, now that Nick's work in bug 1425571 has landed.

Involves adding a transform: https://hg.mozilla.org/mozilla-central/rev/6f856b4b8e27
and removing notify/routes/index: https://hg.mozilla.org/mozilla-central/rev/22fa2b14913a

>diff --git a/taskcluster/ci/release-secondary-update-verify/kind.yml b/taskcluster/ci/release-secondary-update-verify/kind.yml
>index becca3c64e92..b61a39121567 100644
>--- a/taskcluster/ci/release-secondary-update-verify/kind.yml
>+++ b/taskcluster/ci/release-secondary-update-verify/kind.yml
<snip>
>-         CHANNEL:
>-            by-project:
>-               mozilla-release: "beta-localtest"
>-               default: "default"
>+         CHANNEL: "beta-localtest"

I could see this as by-project in the future for, say, try, but we don't have to address that now.

>    notifications:

Same here.

>diff --git a/taskcluster/ci/release-update-verify-config/kind.yml b/taskcluster/ci/release-update-verify-config/kind.yml
>new file mode 100644
>index 000000000000..9ccea854e1b2
>--- /dev/null
>+++ b/taskcluster/ci/release-update-verify-config/kind.yml
<snip>
>+      last-watershed:
>+         by-project:
>+            # TODO: add esr here when setting up mozilla-esr60
>+            # let's put mozilla-esr52 in this comment as well, in case
>+            # somebody is grepping the tree for things they need to do.
>+            birch: "57.0"
>+            jamun: "56.0b10"
>+            maple: "56.0b10"
>+            mozilla-beta: "56.0b10"
>+            mozilla-release: "57.0"
>+            default: null

Here too... I think automated bumping could be a good feature.

>+   notifications:

Here too.

>+         mar-channel-id-override:
>+            by-project:
>+               maple: "'^\\d+\\.\\d+(\\.\\d+)?$$,firefox-mozilla-beta,firefox-mozilla-release'"
>+               mozilla-beta: "'^\\d+\\.\\d+(\\.\\d+)?$$,firefox-mozilla-beta,firefox-mozilla-release'"

The various regexes appear complex and non-intuitive. Do we have any docs about them?
That can be a non-blocking followup if we don't.

>+         include-version: "'^((?!58\\.0b1$)\\d+\\.\\d+(b\\d+)?)$'"

I was wondering if we could somehow write a mapping:

    include-version-alias: beta

where `beta` maps to a regex constant in-tree.
The devedition 58.0b1 one could map to a `devedition` maybe. Maybe?
That would probably make the transform more complex, but possibly make the config yaml more readable and less intimidating to edit. And we might notice typos faster... I might not easily notice if the above `include-version` had a typo compared to the other entries.

If we agree this is a good idea, I'm also ok with this being a followup.
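
To make the idea concrete, the mapping could live in the transform, roughly like this (entirely hypothetical names; nothing like this exists in-tree yet):

    INCLUDE_VERSION_REGEXES = {
        "beta": r"^(\d+\.\d+(b\d+)?)$",
        "release": r"^\d+\.\d+(\.\d+)?$",
    }

    def resolve_include_version(task):
        alias = task.pop("include-version-alias", None)
        if alias:
            task["include-version"] = INCLUDE_VERSION_REGEXES[alias]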

>diff --git a/taskcluster/ci/release-update-verify/kind.yml b/taskcluster/ci/release-update-verify/kind.yml
>index 15889da58718..c242a349f04b 100644
>--- a/taskcluster/ci/release-update-verify/kind.yml
>+++ b/taskcluster/ci/release-update-verify/kind.yml
>    notifications:

Here too.
Attachment #8951669 - Flags: review?(aki) → review+
(In reply to Aki Sasaki [:aki] from comment #33)
> >diff --git a/taskcluster/ci/release-secondary-update-verify/kind.yml b/taskcluster/ci/release-secondary-update-verify-config/kind.yml
> >similarity index 54%
> >copy from taskcluster/ci/release-secondary-update-verify/kind.yml
> >copy to taskcluster/ci/release-secondary-update-verify-config/kind.yml
> >index becca3c64e92..d243da69d256 100644
> >--- a/taskcluster/ci/release-secondary-update-verify/kind.yml
> >+++ b/taskcluster/ci/release-secondary-update-verify-config/kind.yml
> >@@ -1,169 +1,159 @@
> > # This Source Code Form is subject to the terms of the Mozilla Public
> > # License, v. 2.0. If a copy of the MPL was not distributed with this
> > # file, You can obtain one at http://mozilla.org/MPL/2.0/.
> > 
> > loader: taskgraph.loader.transform:loader
> > 
> >-kind-dependencies:
> >-   - post-balrog-dummy
> >-   - post-beetmover-dummy
> >-   - release-updates-builder
> 
> Do we really have no deps here? If we ever combine the build and promote
> graphs, we'll want to make sure this happens after the appropriate build
> tasks. If there are no appropriate build tasks, this is correct :)

We have no dependencies, that's right. The only information we need from the current release is really basic (version, build number, product - that sort of thing), and it comes in via parameters. The rest of the information we gather is old release info.

> >+      last-watershed:
> >+         by-project:
> >+            birch: "57.0"
> >+            mozilla-release: "57.0"
> >+            default: null
> 
> Hm, maybe [someday] we can bump this automatically from balrog data?

This will definitely move elsewhere in the future. Might be Balrog, or maybe Ship It (https://bugzilla.mozilla.org/show_bug.cgi?id=1311001), or maybe something else.

> >    notifications:
> 
> We probably want to delete this section, as well as the indexes, now that
> Nick's work in bug 1425571 has landed.
> 
> Involves adding a transform:
> https://hg.mozilla.org/mozilla-central/rev/6f856b4b8e27
> and removing notify/routes/index:
> https://hg.mozilla.org/mozilla-central/rev/22fa2b14913a

Ack, I think Nick mentioned this to me, and I totally spaced on it. Will do.

> >diff --git a/taskcluster/ci/release-secondary-update-verify/kind.yml b/taskcluster/ci/release-secondary-update-verify/kind.yml
> >index becca3c64e92..b61a39121567 100644
> >--- a/taskcluster/ci/release-secondary-update-verify/kind.yml
> >+++ b/taskcluster/ci/release-secondary-update-verify/kind.yml
> <snip>
> >-         CHANNEL:
> >-            by-project:
> >-               mozilla-release: "beta-localtest"
> >-               default: "default"
> >+         CHANNEL: "beta-localtest"
> 
> I could see this as by-project in the future for, say, try, but we don't
> have to address that now.

Good point. I could see it going either way there, depending on how update verify works at the time, and what we do about a Balrog for try.

> >diff --git a/taskcluster/ci/release-update-verify-config/kind.yml b/taskcluster/ci/release-update-verify-config/kind.yml
> >new file mode 100644
> >index 000000000000..9ccea854e1b2
> >--- /dev/null
> >+++ b/taskcluster/ci/release-update-verify-config/kind.yml
> <snip>
> >+      last-watershed:
> >+         by-project:
> >+            # TODO: add esr here when setting up mozilla-esr60
> >+            # let's put mozilla-esr52 in this comment as well, in case
> >+            # somebody is grepping the tree for things they need to do.
> >+            birch: "57.0"
> >+            jamun: "56.0b10"
> >+            maple: "56.0b10"
> >+            mozilla-beta: "56.0b10"
> >+            mozilla-release: "57.0"
> >+            default: null
> 
> Here too... I think automated bumping could be a good feature.

What kind of automated bumping do you mean? Watersheds are generally something we only adjust once a year or so.

> >+         mar-channel-id-override:
> >+            by-project:
> >+               maple: "'^\\d+\\.\\d+(\\.\\d+)?$$,firefox-mozilla-beta,firefox-mozilla-release'"
> >+               mozilla-beta: "'^\\d+\\.\\d+(\\.\\d+)?$$,firefox-mozilla-beta,firefox-mozilla-release'"
> 
> The various regexes appear complex and non-intuitive. Do we have any docs
> about them?
> That can be a non-blocking followup if we don't.

No docs - they're mostly extracted from existing configs. I can write some comments to make things a bit easier to read.
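
For example, the version-matching part of mar-channel-id-override could be documented roughly like this (my reading of the pattern; the comma-separated values after it are the extra MAR channel IDs to accept, and I'm assuming the doubled "$$" in the YAML is just an escaped end-of-string anchor):

    import re

    # Matches plain release-style version numbers, e.g. "59.0" or "59.0.1".
    VERSION_PART = re.compile(r"""
        ^\d+          # major
        \.\d+         # .minor
        (\.\d+)?      # optional .patch
        $
    """, re.VERBOSE)

    assert VERSION_PART.match("59.0")
    assert VERSION_PART.match("59.0.1")
    assert not VERSION_PART.match("59.0b3")   # betas don't match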

> >+         include-version: "'^((?!58\\.0b1$)\\d+\\.\\d+(b\\d+)?)$'"
> 
> I was wondering if we could somehow write a mapping:
> 
>     include-version-alias: beta
> 
> where `beta` maps to a regex constant in-tree.
> The devedition 58.0b1 one could map to a `devedition` maybe. Maybe?
> That would probably make the transform more complex, but possibly make the
> config yaml more readable and less intimidating to edit. And we might notice
> typos faster... I might not easily notice if the above `include-version` had
> a typo compared to the other entries.
> 
> If we agree this is a good idea, I'm also ok with this being a followup.

I like this idea - it might reduce some of the trouble I had around getting the backslashes correct (going from yaml -> python -> json can make things awfully confusing). Since I'm attaching a new patch for notifications anyways, I'll take a crack at this too.
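
A quick illustration of why that round trip gets confusing (the YAML line is a hypothetical entry in the style of the ones above, and this uses PyYAML):

    import json
    import yaml

    # In double-quoted YAML every backslash has to be written twice.
    raw = r'''include-version: "'^\\d+\\.\\d+(b\\d+)?$'"'''
    loaded = yaml.safe_load(raw)["include-version"]
    print(loaded)              # '^\d+\.\d+(b\d+)?$'  - single backslashes in Python
    print(json.dumps(loaded))  # "'^\\d+\\.\\d+(b\\d+)?$'" - doubled again in the JSON text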
Attachment #8951666 - Flags: review?(nthomas) → review+
Comment on attachment 8951668 [details] [diff] [review]
mozharness update verify config creator

Review of attachment 8951668 [details] [diff] [review]:
-----------------------------------------------------------------

Overall I like what you've done here. I tested generating the linux32 config for 59.0b10 and compared against the copy in tools, and it was almost the same. Your code includes the 58.0 release which was missing before (wasn't aware of that bug!), then a diff from the triangular checks moving down one, so that's all good. If you haven't already checked some other cases it might be worth doing so.

Almost all my comments are possible enhancements, with the exception of a helper function for HTTP requests. With a 5 minute run time on TC and lots of requests to make, it's probably worth having some retries in case a server is having a bad day.

::: python/mozrelease/mozrelease/paths.py
@@ +51,5 @@
> +            return '/'.join([p.strip('/') for p in [
> +                platform, locale, '%s %s.dmg' % (brandName, version)]])
> +        elif platform.startswith('win'):
> +            filename = '%s.zip' % productName
> +            instname = '%s.exe' % productName

Nit - filename & instname don't seem to be used.

@@ +56,5 @@
> +            prefix = []
> +            prefix.extend([platform, locale])
> +            return '/'.join(
> +                [p.strip('/') for p in
> +                 prefix + ['%s Setup %s.exe' % (brandName, version)]]

Nit - I see this is code copied from tools, but would be nicer if windows didn't use prefix, similar to mac & linux.

@@ +70,5 @@
> +            return '/'.join(
> +                [p.strip('/') for p in
> +                 prefix + [filename]]
> +            )
> +        elif platform == 'linux':

This might be old B2G code ? Not sure how to get here if productName is 'fennec' and platform doesn't start with 'android'.

::: testing/mozharness/scripts/release/update-verify-config-creator.py
@@ +216,5 @@
> +            "{}/1.0/{}.json".format(
> +                self.config["product_details_server"],
> +                self.config["stage_product"],
> +            ),
> +        ).json()["releases"]

Did you consider a helper function to make all the requests, incorporating a timeout on requests.get() and retries ? We may have something already you could also import into mozrelease. Or is this handled by a whole-job retry which is configured at the task level ?
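
Something like this is the kind of helper I mean - just a sketch, with made-up names and defaults:

    import time
    import requests

    def get_with_retries(url, attempts=5, timeout=60, backoff=2):
        # requests.get() with a timeout, and exponential backoff between
        # attempts; re-raises the last error once we're out of retries.
        for attempt in range(1, attempts + 1):
            try:
                response = requests.get(url, timeout=timeout)
                response.raise_for_status()
                return response
            except requests.RequestException:
                if attempt == attempts:
                    raise
                time.sleep(backoff ** attempt)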

@@ +242,5 @@
> +                self.log("Skipping release that doesn't match product name: %s" % release_name,
> +                         level=INFO)
> +                continue
> +            if MozillaVersion(version) < MozillaVersion(self.config["last_watershed"]):
> +                self.log("Skipping release that's behind the last watershed: %s" % release_name,

Nit-ish - I think it's a bit misleading to get log messages like this for a beta config:
14:24:06     INFO - Skipping release that's behind the last watershed: firefox-18.0
14:24:06     INFO - Skipping release that's behind the last watershed: firefox-24.3.0esr

What if the include matching was done first, then product and version checks ? I got the same beta config out when I gave it a go. I also added the reversed(sorted(..., key=LooseVersion)) ordering you use when creating the config, and that helps the log readability too.
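
A rough sketch of that ordering (the key names for the product-details data are assumptions, and I've used LooseVersion for the watershed check as well, just to keep the sketch self-contained):

    import re
    from distutils.version import LooseVersion

    def ordered_candidates(releases, config):
        # Apply the include-version regex first, then the product and watershed
        # checks, and only sort (newest first) what survives - so the log reads
        # top-down without a "skipping" line for every ancient release.
        candidates = {}
        for release_name, info in releases.items():
            version = info["version"]
            if not re.match(config["include_version"], version):
                continue
            if not release_name.startswith(config["product"]):
                continue
            if LooseVersion(version) < LooseVersion(config["last_watershed"]):
                continue
            candidates[release_name] = version
        for name in sorted(candidates,
                           key=lambda n: LooseVersion(candidates[n]),
                           reverse=True):
            yield name, candidates[name]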

@@ +309,5 @@
> +                    tag,
> +                    self.config["app_name"],
> +                ),
> +            )
> +            app_version = requests.get(app_version_url).text.strip()

Nit - This is the first of two requests for app_version_url, and the latter one has error handling, so let's zap this one.
Attachment #8951668 - Flags: review?(nthomas) → feedback+
Comment on attachment 8949834 [details]
upgrade update verify docker image to Ubuntu 17.10

Moving review to the shared queue, since gps is out. I'm not 100% sure this is a build config review, but worth a try...
Attachment #8949834 - Flags: review?(gps) → review?(core-build-config-reviews)
Comment on attachment 8949834 [details]
upgrade update verify docker image to Ubuntu 17.10

Gregory Szorc [:gps] has approved the revision.

https://phabricator.services.mozilla.com/D571
Attachment #8949834 - Flags: review+
Attachment #8949834 - Flags: review?(core-build-config-reviews)
Pushed by bhearsum@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/a7a34d672b95
upgrade update verify docker image to Ubuntu 17.10. r=gps
Attachment #8949834 - Flags: checked-in+
Attachment #8951669 - Attachment is obsolete: true
Attachment #8953016 - Flags: review?(aki)
(In reply to Nick Thomas [:nthomas] (UTC+13) from comment #35)
> Comment on attachment 8951668 [details] [diff] [review]
> mozharness update verify config creator
> 
> Review of attachment 8951668 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Overall I like what you've done here. I tested generating the linux32 config
> for 59.0b10 and compared against the copy in tools, and it was almost the
> same. Your code includes the 58.0 release which was missing before (wasn't
> aware of that bug!), then a diff from the triangular checks moving down one,
> so that's all good. If you haven't already checked some other cases it might
> be worth doing so.

I did a lot of checking of various configs from different branches and platforms in early development, and the latest set of staging releases is doing even more - I think we're in pretty good shape here?

> ::: testing/mozharness/scripts/release/update-verify-config-creator.py
> @@ +216,5 @@
> > +            "{}/1.0/{}.json".format(
> > +                self.config["product_details_server"],
> > +                self.config["stage_product"],
> > +            ),
> > +        ).json()["releases"]
> 
> Did you consider a helper function to make all the requests,
> incorporating a timeout on requests.get() and retries ? We may have
> something already you could also import into mozrelease. Or is this handled
> by a whole-job retry which is configured at the task level ?

Good idea - it made me realize that we can just use Mozharness' _download_file, which handles this.
New in this version:
* Remove dead code from mozrelease/
* Use Mozharness' self._retry_download instead of requests
* Improvements to release processing ordering
Attachment #8951668 - Attachment is obsolete: true
Attachment #8953019 - Flags: review?(nthomas)
Attachment #8953016 - Flags: review?(aki) → review+
Comment on attachment 8953019 [details] [diff] [review]
mozharness update verify config creator with comments addressed

Interdiff looks good, thanks for the extra fixes.
Attachment #8953019 - Flags: review?(nthomas) → review+
Pushed by bhearsum@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/fd83ed92fe25
mozharness script to create update verify configs without relying on patcher configs. r=nthomas
https://hg.mozilla.org/integration/mozilla-inbound/rev/09fd35f1024f
switch to in-tree mozharness update verify config creator. r=aki
Attachment #8953016 - Flags: checked-in+
Attachment #8953019 - Flags: checked-in+
Whiteboard: [keep-open][leave-open]
See Also: → 1348840