Add a test for 'mach browsertime setup'
Categories: Testing :: Raptor, enhancement, P2
Tracking: Not tracked
People: Reporter: rwood, Assigned: tarek
References: Blocks 1 open bug
Attachments: 1 obsolete file
As suggested by :davehunt, it would be good to have mach browsertime --setup running in CI so that we can catch browsertime setup issues faster.
Reporter
Comment 1•5 years ago
Preliminary start to see if I can get something running:
Reporter
Comment 2•5 years ago

Reporter
Comment 3•5 years ago

Reporter
Comment 4•5 years ago

Reporter
Comment 5•5 years ago

Reporter
Comment 6•5 years ago

Reporter
Comment 7•5 years ago
So this is much more complicated than I had hoped. In production we don't run tests from source, so we don't have the full central repo available on the worker (at least not on the type of worker used for Raptor tests). Therefore we can't run mach browsertime --setup like a dev would on their local machine.
Even so, if we used a different worker to get the full repo, or if we packaged up /tools/browsertime into a downloadable package like other test packages, there's another issue: the mach browsertime script (/tools/browsertime/mach_commands.py) downloads the browsertime source and supporting tools directly from GitHub, and these downloads are platform-specific. Since we can't access GitHub directly in production CI, we would need a separate fetch task for each, and even then I'm not sure we could run via mach.
So this is probably not impossible, but it would be quite a time-consuming project, so I am closing this bug. If we have the time/resources to look into this further in the future, we can always re-open it.
Assignee
Comment 8•4 years ago
I'll give it a shot
Reporter
Comment 9•4 years ago
Hey Tarek, setting this to P2 for now (as I'm unsure whether you're actively working on it at the moment).
Updated•4 years ago

Comment 10•4 years ago
(In reply to Robert Wood [:rwood] from comment #7)
> So this is much more complicated than I had hoped. In production we don't run tests from source, so we don't have the full central repo available on the worker (at least the type of worker used for raptor tests). Therefore we can't run mach browsertime --setup like a dev would on their local machine.

This is true of the test kind -- see https://bugzilla.mozilla.org/show_bug.cgi?id=1432287 and the trail of woe surrounding it -- but not true of other kinds, including the source-test kind.

> Even so, if we used a different worker etc. to get the full repo, or if we packaged up /tools/browsertime into a package to be downloaded onto the workers, like other test packages, there's another issue. The mach browsertime script (/tools/browsertime/mach_commands.py) downloads directly from github (browsertime source and supporting tools). These are specific to the platform etc. Since we can't access github directly in CI production then we would need to use multiple fetch tasks for each. Even then I'm not sure if we could run via mach.

It's not true that we can't access GitHub (or anything else) directly; many tasks do clones, fetches, etc. (It is true that we generally block Firefox from accessing the outside internet in tests.) However, we generally try to mirror in our own infra for speed and general "be nice on the internet" reasons.

> So this is probably not impossible however would be quite a time-consuming project so I am closing this bug. If in the future we have time/resources to look into this further we can always re-open this.

I agree that it's more time-consuming than it looks at first, although I generally think running across three platforms is the expensive part. It's the same set of reasons that mach bootstrap is basically untested.
Assignee
Updated•4 years ago

Assignee
Comment 11•4 years ago
To test in CI how we run locally, I can tweak our workers' software, make sure we have the right setup, and run the unit test I have created.
However, for the end user, I am now convinced that we're trying too hard. I would like to propose that, for each platform, we just check that all the binaries we need are present and, if not, print out a helper on how to install them manually.
Nick?
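A minimal sketch of the per-platform binary check proposed here, assuming the tools in question are ffmpeg and ImageMagick (mentioned later in comment 13); the tool names, URLs, and function names are illustrative assumptions, not the actual implementation:

```python
import shutil

# Hypothetical required tools and install pointers, for illustration only.
REQUIRED_TOOLS = {
    "ffmpeg": "https://ffmpeg.org/download.html",
    "convert": "https://imagemagick.org/script/download.php",  # ImageMagick
}

def missing_binaries(required=REQUIRED_TOOLS):
    """Return {name: install_url} for each required tool not found on PATH."""
    return {name: url for name, url in required.items()
            if shutil.which(name) is None}

def print_setup_helper():
    """Print manual-install instructions for anything that is missing."""
    missing = missing_binaries()
    if not missing:
        print("All required binaries found.")
        return
    for name, url in missing.items():
        print(f"Missing '{name}'. Please install it manually: {url}")
```

This keeps the setup path trivial to maintain: no per-platform download logic, just a PATH lookup and a help message when something is absent.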
Assignee
Comment 12•4 years ago
This minimal test runs:
- mach browsertime --setup
- mach browsertime --check
and complains if either of these commands fails.
The test will be triggered in CI when mach_commands.py or package.json are modified.
Comment 13•4 years ago
(In reply to Tarek Ziadé (:tarek) from comment #11)
> To test in the CI how we run locally, I can tweak our workers software and make sure we have the right set up and run the unit test I have created.
> However for the end user, I am now convinced that we're trying too hard. I would like to propose that for each platform, we just check that we have all the binaries we need around, and if not, print out a helper on how to install stuff manually.
> Nick?

I don't feel particularly strongly. Obviously I prefer to automate what can be automated, but failing automation is worse than jumping through hoops manually. There's no reasonable way for consumers to install Python packages into mach-managed virtualenvs, so that needs to stay. I'd be quite pleased if we could simplify the visual metrics pieces entirely, 'cuz neither ffmpeg nor ImageMagick is easy to distribute. Maybe we can find Python equivalents that work reasonably?
In any case, the approach I implemented hasn't worked well, so I'm happy to simplify and try to be less fancy.
Assignee
Comment 14•4 years ago
Moving this work to ./mach perftest, which has good test coverage. ./mach browsertime will likely die in late H2.
Updated•4 years ago