Open Bug 1257518 Opened 9 years ago Updated 2 years ago

Find a substitute for Linux 32 JavaScript test suites.


(Testing :: General, defect)



(Not tracked)


(Reporter: nbp, Unassigned)


(Blocks 1 open bug)


We currently have multiple concerns about keeping Linux 32-bit tests for the JS engine.  Using cross-compiled (x64-to-x86) builds should work fine for the JS engine.

The major concern is keeping a way to run ARM tests via the ARM simulator (= SM(arm)).
In terms of resource load, I don't think linux32 shell builds cost us enough to worry about. They just build the shell and run the shell tests, which is nothing in comparison to the linux32 browser tests.

The linux32 shell builds are already cross-compiling on x64.

We do still need to figure out what set of shell builds we want to end up with once the linux32 browser builds + tests are gone. Warning: despite the title of this bug, I'm only going to propose *adding* builds.

From what I can tell, we seem to currently be running a single arm-sim build on linux32. On linux64, we are running 1 opt build and 4 debug builds (warnaserrs, which is a plain build now, plus compacting and rootanalysis, and an arm64 simulator build). On win32, we have 3 builds (1 opt + 2 debug). There are more builds available on try, eg win64 shell builds.

I see no reason to turn off the 32-bit arm-sim build. It's cross-compiled on x64, so it doesn't require anything that isn't already used for other things.

We currently have 32 bit testing via the arm-sim build and the win32 builds. I could imagine adding in an additional plain linux32 build, but only if it's helpful in diagnosing whether arm-sim failures are due to ARM vs 32 bits. Given that we've gone this far without having it, my inclination would be to not add it unless arm developers ask for it.

It does seem like it might be useful to add in at least one win64 shell build, since we're now shipping win64.

I'll probably also want to tweak what runs on each of these builds. I like having a fast-fail build mixed in to catch problems soon after a push, and right now even the quickest build is running too much. The options are to run basic tests (check-style etc.), jsapi-tests, jstests, and/or jit-tests, and within jstests and jit-tests we can choose what subset of possible jitflags to iterate over, and even beyond that we have some slow test skipping mechanisms (eg the "skip tests that are slow running the compacting GC zeal mode"). It's a bit of a juggling game.
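As an illustration of that juggling game, picking a fast-fail subset could be sketched as a simple greedy budget. The suite names below are the real suites mentioned above, but the durations, the budget, and the selection logic are purely hypothetical — this is not how the automation actually chooses what to run:

```python
# Hypothetical sketch: pick the cheapest suites that fit a fast-fail time
# budget. Durations and the budget are made-up illustrative numbers.

FAST_FAIL_BUDGET = 300  # seconds; an assumed budget, not a real setting

# (suite, jitflags variant, estimated seconds) -- illustrative only
CANDIDATES = [
    ("check-style", None, 30),
    ("jsapi-tests", None, 60),
    ("jit-tests", "baseline", 240),
    ("jit-tests", "ion", 600),
    ("jstests", "default", 900),
]

def pick_fast_fail(candidates, budget):
    """Greedily take the cheapest suites until the time budget is spent."""
    chosen, total = [], 0
    for suite, flags, secs in sorted(candidates, key=lambda c: c[2]):
        if total + secs <= budget:
            chosen.append((suite, flags))
            total += secs
    return chosen, total

chosen, total = pick_fast_fail(CANDIDATES, FAST_FAIL_BUDGET)
print(chosen, total)
```

With these made-up numbers only check-style and jsapi-tests fit the budget, which mirrors the point above: even the quickest build is running too much, so something has to be cut.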

Finally, I should note that I'm trying to migrate as many of these builds as possible over to taskcluster. Currently, I have all the linux builds except for arm-sim ported and working (and I *think* the arm-sim breakage is a trivial fix). I haven't tried using the Windows taskcluster stuff yet. And there are some scheduling changes I'd like to make before switching over for real. But the future of these builds is definitely on taskcluster.

nbp, was there anything else you were worried about?
(In reply to Steve Fink [:sfink:, :s:] from comment #1)
> nbp, was there anything else you were worried about?

I do not think so.  My only additional concern was that keeping the x86 tests would be useful for distinguishing ARM issues from 32-bit x86 issues, as you mentioned, but I am not yet convinced that this is mandatory. (ni? Lars)

I agree with Steve that, in the process, we could enhance our test suite so that we can split our jit-tests and have them finish in a reasonable time frame (not finishing after the mochitests, for example).
Flags: needinfo?(lhansen)
I usually build on Mac, and I've been able to build 32-bit x86 shells just fine, so linux32 going away is not a problem for me, I think.  (I do test on ARM HW from time to time, but I assume those 32-bit builds will continue to work.)
Flags: needinfo?(lhansen)
Checking in on this bug: have we changed the tests we run and the builds we do such that this is still a bug, or maybe there are more issues?
(In reply to Joel Maher ( :jmaher) (UTC-5) from comment #4)
> Checking in on this bug: have we changed the tests we run and the builds we
> do such that this is still a bug, or maybe there are more issues?

I am hoping to have someone go through our full set of tests and figure out what's running where, which would make it easier to see where we are and what gaps and redundancies we have. Perhaps fill in something like a spreadsheet.

They'll also go through the individual tests to sort them out too.
:sfink- we now have some tests on android arm64, and the ability to run more.  There will be work to get tests running on windows arm64 in Q1 next year.  What is remaining here and how can I help get us to resolution?
Flags: needinfo?(sphink)
(In reply to Joel Maher ( :jmaher ) (UTC-4) from comment #6)
> :sfink- we now have some tests on android arm64, and the ability to run
> more.  There will be work to get tests running on windows arm64 in Q1 next
> year.  What is remaining here and how can I help get us to resolution?

I'm not sure if that changes anything here -- arm64 != arm, so in terms of 32-bit coverage it doesn't change anything.

The spreadsheet *was* fully filled out, and provided an accurate snapshot of what we had at one point in time, but it hasn't kept up with reality. Now that everything is on taskcluster, it should be vastly easier to automate if we wanted to.

This bug could probably just be closed at this point. I haven't heard complaints about our 32-bit coverage -- though I'm not sure if such complaints would be routed to me; you'd be as likely to hear about them as I would at this point.

I could imagine opening a new bug about automated maintenance of a report on our test coverage (similar to the above spreadsheet), and I'd be happy to help generate/extract the necessary bits of metadata to populate such a report (e.g. bit width, which tests are run or skipped in a particular task, etc.). Right now some of that is not visible in the task structure; we could either move it out of js/src/devtools/automation/variants/* into the task definitions, or produce structured output describing it, or whatever.
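The metadata extraction described here could be sketched roughly as follows. The variant schema in this dict is hypothetical — the real files under js/src/devtools/automation/variants/* have their own format — so this only illustrates the shape of a coverage report, not the actual tooling:

```python
# Hypothetical sketch of a coverage report built from variant-style
# metadata. The "bits"/"suites" schema is assumed, not the real variant
# file format under js/src/devtools/automation/variants/*.

import json

VARIANTS = {
    "plain": {"bits": 64, "suites": ["jstests", "jit-tests", "jsapi-tests"]},
    "arm-sim": {"bits": 32, "suites": ["jit-tests"]},
    "compacting": {"bits": 64, "suites": ["jstests", "jit-tests"]},
}

def coverage_report(variants):
    """Group variant names by bit width so 32-bit coverage gaps stand out."""
    report = {}
    for name, meta in variants.items():
        report.setdefault(meta["bits"], []).append(name)
    return {bits: sorted(names) for bits, names in report.items()}

print(json.dumps(coverage_report(VARIANTS), indent=2, sort_keys=True))
```

Something in this spirit, run against the real task definitions, would keep the spreadsheet from drifting out of date the way the manual one did.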

But only if you think it's worthwhile; I think you own questions of test coverage?
Flags: needinfo?(sphink)

I am looking into this again now that some other projects have completed. I believe the spreadsheet is interesting and probably not too far out of date.

Looking at that spreadsheet, I see some upcoming lost coverage when Fennec is turned off in v69 and stays on ESR 68 (rows 19 and 20):
test-android-4.3-arm7-api-16/opt-jsreftest test/reftest.yml android-4.3-arm7 32 opt browser - default - default
test-android-4.3-arm7-api-16/debug-jsreftest test/reftest.yml android-4.3-arm7 32 debug browser - default - default

and if we were to have linux32 follow the same path (rows 2, 3, and 30):
test-linux32/opt-jsreftest-e10s test/reftest.yml linux32 32 opt browser - default none default
test-linux32/debug-jsreftest-e10s test/reftest.yml linux32 32 debug browser - default none default
spidermonkey-sm-arm-sim/debug spidermonkey/kind.yml linux32 (arm32 simulator) 32 debug shell default default - 1

:sfink, if we were not to have these 5 test configs in the near future, what concerns would you have? Are there any must-haves? Are there other people you can think of who could provide additional insight?
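For illustration, the spreadsheet-style rows quoted above could be parsed into structured records like this. The column meanings are inferred from context, and the platform field is simplified (the real arm-sim row includes a parenthesized "(arm32 simulator)" note), so treat this as a hypothetical sketch rather than the actual spreadsheet schema:

```python
# Hypothetical parser for the whitespace-separated spreadsheet rows above;
# column names and the simplified rows are my assumptions for illustration.

ROWS = [
    "test-android-4.3-arm7-api-16/opt-jsreftest test/reftest.yml android-4.3-arm7 32 opt browser",
    "test-linux32/opt-jsreftest-e10s test/reftest.yml linux32 32 opt browser",
    "spidermonkey-sm-arm-sim/debug spidermonkey/kind.yml linux32 32 debug shell",
]

def parse_row(row):
    """Split one row into named fields (first six columns only)."""
    name, yml, platform, bits, build_type, harness = row.split()[:6]
    return {
        "name": name,
        "yml": yml,
        "platform": platform,
        "bits": int(bits),
        "build_type": build_type,
        "harness": harness,
    }

records = [parse_row(r) for r in ROWS]
# e.g. list every 32-bit shell config among these rows
shell32 = [r["name"] for r in records if r["bits"] == 32 and r["harness"] == "shell"]
print(shell32)
```

With records in this shape, answering "what 32-bit coverage remains?" becomes a simple filter instead of reading rows by eye.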

Flags: needinfo?(sphink)

Little nudge here: this might be outdated.

Clearing stale needinfos in my triage components, please re-flag if still needed.

Flags: needinfo?(sphink)
Severity: normal → S3