Closed Bug 1476372 Opened 3 years ago Closed 2 years ago

Use a 'fetch task' to get the chromium builds for raptor


(Testing :: Raptor, enhancement, P1)

Version 3


(firefox67 fixed)



(Reporter: rwood, Assigned: sparky)


(Blocks 1 open bug)



(2 files, 1 obsolete file)

Currently, to get raptor running on Google Chrome (Chromium), we download a specific hard-coded Chromium revision from Google's Chromium build page, i.e. [1], for each platform (win7/win10/linux64/osx).

The revision number (575640) in that URL came from getting the latest revision at the time, i.e. [2].

Instead of downloading a specific chromium revision, use a 'fetch task' that will, say, grab the latest chromium build (for all platforms) once per week and save the build as a taskcluster artifact. Then have raptor grab the chromium zip from the fetch task instead of the chromium build page directly.



In raptor, downloading chromium is done inside the mozharness 'install_chrome' step, here:
While I don't think this is technically a blocker, there will definitely be some more prior art by the time it's done. So might as well wait.
Depends on: 1468812
Blocks: 1473078
Blocks: 1486893
:rwood, in looking at the code here:

it appears that we are downloading fresh copies of google chrome each time?  Is this true, or do we have a hardcoded version?  Would the work here be to just create a fetch task that downloads chrome in a daily crontab job so we only download it once?  That would depend on fetch tasks working on windows/mac, not just linux.

Assuming we are testing a dynamic version of Google Chrome, we still need to surface the version number we are testing with; while this doesn't have to be "in-tree", it should be discoverable somewhere.
forgot to ni? rwood
Flags: needinfo?(rwood)
Hey, the link you provided is out of date, try this one:

And you'll see how the chromium revision is specified. Each platform has its own rev number, btw. The revision number is received and dumped out by the control server: if you open a raptor tp6 chrome job and search for 'webext_results', you'll see at the bottom the tag 'browser': u'Chrome/69.0.3495.0'. That's just the log from when the control server received results from the raptor web extension. Yes, you're right, that needs to be added somewhere in the perfherder data so it is surfaced up.
Flags: needinfo?(rwood)
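For reference, pulling the version number out of that control-server tag would be a one-liner; a sketch (this helper is illustrative, not existing raptor code):

```python
# Sketch: extract the Chrome version from the 'browser' tag that the raptor
# control server logs (e.g. u'Chrome/69.0.3495.0'), so it could later be
# surfaced in perfherder data. The function name is an invention for
# illustration, not part of the raptor codebase.
def chrome_version(browser_tag):
    name, _, version = browser_tag.partition("/")
    if name != "Chrome" or not version:
        raise ValueError("unexpected browser tag: %s" % browser_tag)
    return version
```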
Also - this is how you get the latest revision of chromium - to get the hardcoded rev I just used this link on that day:

So the fetch task could grab that for the platform and then build the URL with that rev to download chromium.
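A sketch of what that could look like, assuming the public chromium-browser-snapshots bucket layout (the platform directory names and archive names below are assumptions, not verified for every platform):

```python
# Hypothetical fetch-task core: look up the latest Chromium revision for a
# platform via its LAST_CHANGE file, then build the snapshot download URL
# from that revision. The bucket base and platform mapping are assumptions
# based on the public chromium-browser-snapshots layout.
import urllib.request

SNAPSHOT_BASE = "https://commondatastorage.googleapis.com/chromium-browser-snapshots"

# Assumed mapping: platform name -> (bucket directory, archive name).
PLATFORMS = {
    "linux64": ("Linux_x64", "chrome-linux.zip"),
    "win64": ("Win_x64", "chrome-win.zip"),
    "mac": ("Mac", "chrome-mac.zip"),
}

def last_change_url(platform):
    """URL of the text file holding the latest snapshot revision."""
    directory, _ = PLATFORMS[platform]
    return "%s/%s/LAST_CHANGE" % (SNAPSHOT_BASE, directory)

def snapshot_url(platform, revision):
    """Download URL for a given platform/revision pair."""
    directory, archive = PLATFORMS[platform]
    return "%s/%s/%s/%s" % (SNAPSHOT_BASE, directory, revision, archive)

def latest_snapshot_url(platform):
    """Resolve LAST_CHANGE over HTTP, then build the snapshot URL."""
    with urllib.request.urlopen(last_change_url(platform)) as resp:
        revision = resp.read().decode("ascii").strip()
    return snapshot_url(platform, revision)
```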

Is there a simple way to make a fetch task like [1] but have a dynamic url instead of a static one? I can't seem to find an example of this if there is.

It's possible to make a dynamic url with the static-url type (by setting sha to null and size to 0), but we would have to add a transform to change the url, which would make the fetch kind transforms less general. Another possible option would be to implement a dynamic-url type for the fetch kind. However, either of these options would still leave us having to perform HTTP requests during taskgraph creation, which would probably slow down the decision task quite a bit.

Instead, I am thinking of making a new task in a new kind (or maybe the toolchain kind?) for this, having it run with cron daily/weekly, and using it to create a chromium artifact for each OS. This would be similar to how it's done for grcov.

Does anyone have any thoughts, or suggestions?

:sparky, can we use the LAST_CHANGE as a static url, parse the data and then call a dynamic url?  see comment 5 for the LAST_CHANGE
Yes, but we would have to do these requests for the data during the decision task. Another option would be to change the fetch-content script to handle dynamic urls; that way we would perform the requests outside the decision task. I think we may be able to add a general method to fetch-content for this.
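A rough sketch of what such a dynamic-url handler in fetch-content could look like (the spec keys and function names are invented for illustration; the real fetch-content script only supports static URLs):

```python
# Hypothetical dynamic-url support in fetch-content: resolve the latest
# revision at task run time, not during the decision task, then build the
# concrete download URL. The spec format here is an assumption.
import urllib.request

def _http_text(url):
    """Fetch a small text resource (e.g. a LAST_CHANGE file)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("ascii").strip()

def resolve_dynamic_url(spec, fetch_text=_http_text):
    """spec = {'revision-url': <text file holding the revision>,
               'url-template': <format string containing {revision}>}"""
    revision = fetch_text(spec["revision-url"])
    return spec["url-template"].format(revision=revision)
```

The `fetch_text` parameter is only there to make the resolver easy to exercise without network access.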
The hashes are kind of the whole point of fetch tasks (i.e to improve security around downloading arbitrary blobs from the internet). If we don't verify hashes then we leave ourselves open to mitm attacks. So for that reason I think downloading dynamic URLs is explicitly something we should avoid. Why do we need dynamic urls for this case?
Oh I see, you want to update chrome automatically. Hm, yeah a fetch task wouldn't be the greatest for that. Though, if we use a non-deterministic version of chrome, then we might get into situations where we'll see failures that aren't tied to a change in the tree. I don't think anything like what you're trying to do here exists in the tree at the moment.
Oh ok, so maybe making a toolchain-like kind would be the simplest for us, we would be able to use existing tooling to make artifacts public as well.

And as :ahal points out, using a non-deterministic chrome version could cause some problems for us. We could instead add these hardcoded-revision chrome downloads into fetch tasks and modify them when we are sure a newer version can work for us. :rwood, what do you think?
Flags: needinfo?(rwood)
Non-deterministic builds are a fact of life; I would rather update the builds automatically and have them failing for a day or two than gate on humans updating the revision each time.
Yeah I agree. Maybe we could have a way to also 'manually override' the revision in case we get a series of bad Chromium builds and want the option to manually revert back to a stable one?

Also, does the fetch task *have to* run every time a dependent task runs? Or can it be scheduled to only update Chromium, say, once per week?
Flags: needinfo?(rwood)
Cool, sounds good. So I'll make it dynamic with a manual revision override.

I'm not entirely sure how fetch-kind tasks are rescheduled, :ahal would you know? Could we set a weekly expiry on the artifact to force a new one to be made?

If we make a toolchain-like fetch task, we should be able to use cron to run it weekly.
Flags: needinfo?(ahal)
Fetch tasks only work with static urls (and I'm fairly sure gps would r- any attempt to make them do otherwise). If you *need* to use fetch tasks and want to update them automatically, you'd need some kind of bot that runs outside of the repo and actually commits the update periodically (a la wpt sync bot).

Toolchain tasks might give you a bit more flexibility to do the kind of things we want to do here, but even those are designed to be deterministic (afaik they all pin the version of the thing that they are building). I'm less familiar with these so it might be worth asking :ted and/or :gps for their opinions here.

(In reply to Joel Maher ( :jmaher ) (UTC-4) from comment #12)
> non deterministic builds are a fact of life

While this is true, we spend an *enormous* amount of effort to get rid of non-determinism, so I don't really like this line of reasoning.

But I do agree that these particular tasks don't need to live up to the same standard as our regular test tasks, so adding this non-determinism seems fine in this case (let's at least make sure they run at tier-2 though).
Flags: needinfo?(ahal)
Gps, we're running performance benchmarks against Chromium (for comparison with Firefox). Currently we use a hardcoded url to download Chromium, but want to try and always run against the latest version so we're comparing apples to apples.

Do you have any thoughts on how this use case might work with fetches or toolchains?
Flags: needinfo?(gps)
Re-reading the comment thread, I think there's some confusion as to what fetch tasks actually are. All they do is:

1. Download a static asset from somewhere on the internet
2. Check the hash/gpg signature to make sure we got what we expect
3. Upload the asset to s3 so tasks can access it without hitting the external network

There is no self-updating mechanism, nor is there any way to "run them periodically" to get the next version. All updates involve manually changing the URL and re-computing the hash. Fetch tasks basically just replace what we previously used tooltool for.
ok, so this means that we should use a toolchain task for this where we have a script that downloads, confirms, and uploads the resource.  I assume we can have tasks across revisions/branches depending on the output of a toolchain task?
(In reply to Joel Maher ( :jmaher ) (UTC-4) from comment #19)
> ok, so this means that we should use a toolchain task for this where we have
> a script that downloads, confirms, and uploads the resource.  I assume we
> can have tasks across revisions/branches depending on the output of a
> toolchain task?

Yes we can. With grcov for example, it makes this artifact [1] in task [2] which is then used in tasks like [3].


(In reply to Joel Maher ( :jmaher ) (UTC-4) from comment #17)
> another thought is we can get binaries from:
> doing a wget on those urls downloads a full browser, similar to what was
> above- we could figure out a way to get the version number.

With that method, we would be using this URL to get the latest revision number, but I haven't found a method for getting a version number:

I am wondering if we should consider finding a way to verify the download with a SHA. Does anyone know where we could find this value?
(In reply to Andrew Halberstadt [:ahal] from comment #16)
> Do you have any thoughts on how this use case might work with fetches or
> toolchains?

This is an unsolved problem in toolchain land too: we want to eventually have tasks against "nightly" or "trunk" versions of Rust, Clang, GCC, etc. But how this jibes with our desired model of deterministic-over-time (e.g. if you retrigger a task, how do you ensure it is executing using the exact same bits?) is TBD.

If you resolve the dynamic bit at decision task time, things are deterministic in TC land. But actually looking up what version of something you are using is hard. And there's no easy way to convert that failure into say a Try push (e.g. you'd need to copy the resolved revision somewhere).

The solution I'm leaning towards is to have an in-tree file contain a pointer to the "latest" version of a thing to download. There would be an out-of-band process that polls for new versions of things and updates the in-tree file when a new version is encountered. Taskgraph or scheduled tasks would simply read this file to determine what to fetch.

I /think/ you could automate the updating of said files using Taskcluster hooks/actions. You would need a special account, etc. But that's something we could do. I seem to recall posing questions about how all this would work to Dustin somewhere. Can't remember where though or what he said (if anything).
Flags: needinfo?(gps)
I've made a toolchain task we can use to get the latest chromium revision which I am integrating into raptor at the moment:

To reschedule this task, can we make something to update files with Taskcluster hooks/actions as :gps mentions or 'a la wpt sync bot' as :ahal mentions in comment 15?
Since this hits a URL for the 'latest', there is no need to change the url; instead, just fire the job off regularly. We have cronjobs ( ) that can schedule this, or jobs that run on the nightly scheduler ( ). Keep in mind the nightly scheduler typically runs 2 times/day (I guess there are two nights in a day).

Once we get tasks that can depend on this from the cron/nightly job on m-c, then we could start running this.  We are only planning to run chrome jobs on m-c and try, so the volume will be low, they can remain tier-2 as well to avoid required backouts, etc.
This patch adds fetch tasks for windows, linux, and mac chromium builds, which are also scheduled through cron. Some changes to fetch-related tooling were needed to prevent extracting the fetched builds on mac raptor chrome tasks.

Bug 1476372 - Have Raptor use fetched chromium builds. r?rwood,ahal

This patch enables raptor to use the latest fetched chromium builds.
Here's a try run using these chromium tasks in all raptor tasks which use chrome:
Blocks: 1495903
Assignee: nobody → gmierz2
Sorry this didn't show up in our phabricator list of unreviewed patches since it was already r+'d by someone. I've redirected the review to glandium since none of us are really familiar with Chromium builds and aren't able to properly review.
Priority: -- → P1

:glandium, I plan to add a 'DYNAMIC_FETCH_SCHEMA' to here and proceed from there to move this work into a 'fetch'-kind task.

Before doing this though, and all the other modifications that would be needed for this to work, I want to make sure that this is what you are suggesting (by 'adding a new fetch type' from your review comment)? Or should we be making a new task kind for these dynamic fetches?

Or, were you suggesting something completely different to these two options?

Flags: needinfo?(mh+mozilla)

I was thinking about changing 'fetch' in FETCH_SCHEMA such that it accepts types and data other than 'static-url'. You can use Any() for that.


    'fetch': Any({
        ...
        'type': 'static-url',
        ...
    }, {
        'type': 'chrome-download',
        ...
    })
Flags: needinfo?(mh+mozilla)
No longer blocks: 1495903

Here's a try run of raptor chrome using the new patch:

And this is the one with the fetch docker-image and Fetch-URL tasks:

The failures in the OSX tasks are caused by the new chromium build, it is being looked into in this bug:

If we want to, for the time being, we could get around this by passing a '--revision' option to the '' script to set the build to an older version which didn't have this error.

Attachment #9047064 - Attachment description: Bug 1476372 - Add fetch tasks for raptor chromium builds. r?rwood,ahal,glandium → Bug 1476372 - Add fetch tasks for raptor chromium builds. r?rwood,ahal,glandium,tomprince
Attachment #9025122 - Attachment is obsolete: true
Blocks: 1533008

The failures in the try push mentioned in comment 33 are related to the latest mac revision and a bug was filed as a follow-up to this (bug 1533008).

Pushed by
Move Raptor chrome task definitions to separate file. r=glandium
Add fetch tasks for raptor chromium builds. r=rwood,glandium,tomprince
Closed: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla67
Depends on: 1535057
See Also: → 1528731