Closed Bug 1595142 Opened 5 years ago Closed 4 years ago

Add a test for 'mach browsertime setup'

Categories

(Testing :: Raptor, enhancement, P2)

Version 3
enhancement

Tracking

(Not tracked)

RESOLVED DUPLICATE of bug 1623321

People

(Reporter: rwood, Assigned: tarek)

References

(Blocks 1 open bug)

Details

Attachments

(1 obsolete file)

As suggested by :davehunt, it would be good to have mach browsertime --setup running in CI so that we can catch browsertime setup issues faster.

So this is much more complicated than I had hoped. In production we don't run tests from source, so we don't have the full central repo available on the worker (at least the type of worker used for raptor tests). Therefore we can't run mach browsertime --setup like a dev would on their local machine.

Even so, if we used a different worker etc. to get the full repo, or if we packaged up /tools/browsertime into a package to be downloaded onto the workers like other test packages, there's another issue. The mach browsertime script (/tools/browsertime/mach_commands.py) downloads directly from GitHub (browsertime source and supporting tools). These are specific to the platform etc. Since we can't access GitHub directly in production CI, we would need to use multiple fetch tasks for each. Even then I'm not sure if we could run via mach.

So this is probably not impossible; however, it would be quite a time-consuming project, so I am closing this bug. If in the future we have time/resources to look into this further we can always re-open this.

Status: ASSIGNED → RESOLVED
Closed: 5 years ago
Resolution: --- → WONTFIX

I'll give it a shot

Assignee: rwood → tarek
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---

Hey Tarek, setting this to P2 for now (as I'm unsure if you're actively working on this at the moment).

Priority: P1 → P2

(In reply to Robert Wood [:rwood] from comment #7)

> So this is much more complicated than I had hoped. In production we don't run tests from source, so we don't have the full central repo available on the worker (at least the type of worker used for raptor tests). Therefore we can't run mach browsertime --setup like a dev would on their local machine.

This is true of the test kind -- see https://bugzilla.mozilla.org/show_bug.cgi?id=1432287 and the trail of woe surrounding it -- but not true of other kinds, including the source-test kind.

> Even so, if we used a different worker etc. to get the full repo, or if we packaged up /tools/browsertime into a package to be downloaded onto the workers like other test packages, there's another issue. The mach browsertime script (/tools/browsertime/mach_commands.py) downloads directly from GitHub (browsertime source and supporting tools). These are specific to the platform etc. Since we can't access GitHub directly in production CI, we would need to use multiple fetch tasks for each. Even then I'm not sure if we could run via mach.

It's not true that we can't access GitHub (or anything else) directly; many tasks do clones, fetches, etc. (It is true that we generally block Firefox from accessing the outside internet in tests.) However, we generally try to mirror in our own infra for speed and general "be nice on the internet" reasons.

> So this is probably not impossible; however, it would be quite a time-consuming project, so I am closing this bug. If in the future we have time/resources to look into this further we can always re-open this.

I agree that it's more time-consuming than it looks at first, although I generally think running across three platforms is the expensive part. It's the same set of reasons that mach bootstrap is basically untested.

Type: task → enhancement
Depends on: 1608030

To test in CI how we run locally, I can tweak our workers' software, make sure we have the right setup, and run the unit test I have created.

However, for the end user, I am now convinced that we're trying too hard. I would like to propose that for each platform, we just check that we have all the binaries we need around, and if not, print out helper text on how to install things manually.

Nick?

Flags: needinfo?(nalexander)
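The per-platform binary check proposed above could look something like the following. This is only a sketch: the binary names, the help text, and the function names are assumptions for illustration, not the actual patch.

```python
# Hypothetical sketch of the proposed check: verify required external
# binaries are on PATH, and print install hints for anything missing.
import shutil
import sys

# Tools browsertime's setup relies on per the discussion above; the
# exact list and hints would differ per platform (assumed names).
REQUIRED_BINARIES = {
    "node": "Install Node.js from https://nodejs.org/",
    "ffmpeg": "Install ffmpeg via your platform's package manager",
    "convert": "Install ImageMagick via your platform's package manager",
}

def check_binaries():
    """Return (binary, hint) pairs for every required binary missing on PATH."""
    return [
        (name, hint)
        for name, hint in REQUIRED_BINARIES.items()
        if shutil.which(name) is None
    ]

if __name__ == "__main__":
    missing = check_binaries()
    for name, hint in missing:
        print(f"Missing '{name}': {hint}", file=sys.stderr)
    sys.exit(1 if missing else 0)
```

This matches the "be less fancy" direction: no automated installation, just a clear report of what the user still needs to install by hand.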

This minimal test runs:

  • mach browsertime --setup
  • mach browsertime --check

and complains if either of these commands fails.
The test will be triggered in CI if mach_commands.py or package.json
are modified.
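The shape of such a minimal test could be sketched as below. This is an assumption based on the comment above, not the attached patch; the command lines and helper name are illustrative only.

```python
# Sketch of a minimal smoke test: run the two mach browsertime commands
# and report the first one that exits non-zero.
import subprocess
import sys

# Assumed invocations; the real test would run mach from the repo root.
COMMANDS = [
    [sys.executable, "mach", "browsertime", "--setup"],
    [sys.executable, "mach", "browsertime", "--check"],
]

def run_browsertime_checks(commands=COMMANDS):
    """Run each command in order; return (command, stderr) for the first
    failure, or None if every command succeeds."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return cmd, result.stderr
    return None
```

Injecting the command list (rather than hard-coding mach) keeps the helper testable on machines where mach is unavailable.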

(In reply to Tarek Ziadé (:tarek) from comment #11)

> To test in CI how we run locally, I can tweak our workers' software, make sure we have the right setup, and run the unit test I have created.
>
> However, for the end user, I am now convinced that we're trying too hard. I would like to propose that for each platform, we just check that we have all the binaries we need around, and if not, print out helper text on how to install things manually.
>
> Nick?

I don't feel particularly strongly. Obviously I'd prefer to automate what can be automated, but failing automation is worse than jumping through hoops manually. There's no reasonable way for consumers to install Python packages into mach-managed virtualenvs, so that needs to stay. I'd be quite pleased if we could simplify the visual metrics pieces entirely, 'cuz neither ffmpeg nor ImageMagick is easy to distribute. Maybe we can find Python equivalents that work reasonably?

In any case, the approach I implemented hasn't worked well, so I'm happy to simplify and try to be less fancy.

Flags: needinfo?(nalexander)

Moving work to ./mach perftest, with good test coverage there. ./mach browsertime will likely die in late H2.

Status: REOPENED → RESOLVED
Closed: 5 years ago4 years ago
Resolution: --- → DUPLICATE
Attachment #9119661 - Attachment is obsolete: true