Status: NEW → RESOLVED
Last Resolved: 2 years ago
Resolution: --- → INCOMPLETE
Status: RESOLVED → REOPENED
Resolution: INCOMPLETE → ---
This test is prone to false negatives because it tests that one event happens within a fairly small window of another event. On heavily loaded systems this is likely to fail from time to time. We actually have one more of these tests, in did-timeout.js, though no errors are reported against that. In this particular case it should be possible to instead test that at least some specific amount of time has passed but to allow a greater amount of time to have passed.
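The fragile pattern being described can be sketched as follows. The names and tolerance value here are hypothetical; `EPSILON_MS` plays the role of the harness's `$ATOMICS_MAX_TIME_EPSILON`, and the durations stand in for measured wall-clock times:

```javascript
// Sketch (hypothetical names and values) of the fragile pattern: requiring
// the measured duration to land in a narrow two-sided window around the
// expected duration.
const EXPECTED_MS = 1000;
const EPSILON_MS = 100;

function withinWindow(dtMs) {
  // The upper bound is the problem: it fails whenever the OS schedules the
  // waiting thread late, which is exactly what happens on a loaded machine.
  return dtMs >= EXPECTED_MS - EPSILON_MS && dtMs <= EXPECTED_MS + EPSILON_MS;
}

console.log(withinWindow(1050)); // lightly loaded machine: passes
console.log(withinWindow(1700)); // heavily loaded machine: spurious failure
```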
Assignee: nobody → lhansen
Status: REOPENED → ASSIGNED
Andre, do we have a process in place for fixing test262 bugs / what is that process? Do we fix locally and upstream, do we report upstream and wait to pull later, or what?
I think we're currently in the process of changing the way we pull/push test262 updates (bug 1374290). :sfink and Leo Balter should know more about this. For this bug we could also create our own atomicsHelper.js file with a higher $ATOMICS_MAX_TIME_EPSILON value.
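A local atomicsHelper.js override along those lines might look like the sketch below. This assumes the harness exposes the tolerance as a global named `$ATOMICS_MAX_TIME_EPSILON` (as the comment above implies); the concrete values and the helper function are hypothetical:

```javascript
// Hypothetical local override: widen the timing tolerance used by
// Atomics timing tests. The upstream default is assumed to be smaller.
var $ATOMICS_MAX_TIME_EPSILON = 500;

// Illustrative helper showing how a test might consume the tolerance.
function withinEpsilon(actualMs, expectedMs) {
  return Math.abs(actualMs - expectedMs) <= $ATOMICS_MAX_TIME_EPSILON;
}

console.log(withinEpsilon(1400, 1000)); // tolerated with the wider epsilon
console.log(withinEpsilon(2000, 1000)); // still out of range
```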
To help with prioritization: in my work to get SpiderMonkey testing onto Taskcluster, this failed on windows32-debug in 3 out of 10 runs, i.e. a ~30% failure rate (not a great sample size to draw big conclusions from).
I have a tentative fix, it just needs to be tested & reviewed.
I'll talk to Steven and Leo about landing and upstreaming, but I'd like to get some eyes on this first. Basically, the patch makes the test more robust (and no less correct) by demanding that the helper thread's sleep time is at least the expected 1000ms minus the epsilon, while allowing it to be arbitrarily longer. This is appropriate for a heavily loaded system in any case; wait() only guarantees a minimum wait time (to the extent it really guarantees anything) and places no upper limit on how long the wait might be. A try run is ongoing, though it won't be conclusive unless it fails (in which case the patch is wrong): https://treeherder.mozilla.org/#/jobs?repo=try&revision=a987d83794b5cc9ec8ff0c1ee6b9229f03d480a7
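The one-sided check the patch switches to can be sketched as follows (names and the epsilon value are hypothetical stand-ins for the actual test's):

```javascript
// Sketch of the robust, one-sided check: the wait must last at least the
// requested time minus the epsilon, but may be arbitrarily longer on a
// loaded system.
const REQUESTED_MS = 1000;
const EPSILON_MS = 100; // stand-in for $ATOMICS_MAX_TIME_EPSILON

function isAcceptableWait(dtMs) {
  // Lower bound only: wait() guarantees (at best) a minimum wait time,
  // never a maximum, so any longer duration is still a pass.
  return dtMs >= REQUESTED_MS - EPSILON_MS;
}

console.log(isAcceptableWait(950));  // within epsilon of the minimum: true
console.log(isAcceptableWait(5000)); // much longer is fine on a loaded box: true
console.log(isAcceptableWait(100));  // a genuine spurious early wakeup: false
```

A spurious wakeup still fails the test, so the assertion the test exists for is preserved; only the false failures from late scheduling go away.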
Attachment #8899342 - Flags: review?(andrebargull)
Well, at least the test passed with that run, not that it truly is indicative of anything. (no-spurious-wakeup runs as part of jsreftest-1, "J1" here)
Comment on attachment 8899342 [details] [diff] [review]
bug1361526-more-robust-spurious-wake-test.patch

Review of attachment 8899342 [details] [diff] [review]:
-----------------------------------------------------------------

The change looks reasonable to me. A similar change should probably be applied to Atomics/wait/did-timeout.js as well, right?

(I've changed it to f+ instead of r+, just to make sure there's no confusion: we cannot change our local copy of no-spurious-wakeup.js, but instead need to create a PR upstream and then check out test262 from upstream after the PR is merged.)
Attachment #8899342 - Flags: review?(andrebargull) → feedback+
Upstreamed at https://github.com/tc39/test262/pull/1212 along with a test for did-timeout.js.
Merged upstream, so it should show up in our repos once we pull.
:lth, this is still failing quite often; the windows2012-32 platform is failing on spidermonkey-plain jobs. Here is a recent log: https://treeherder.mozilla.org/logviewer.html#?repo=autoland&job_id=134789819&lineNumber=57353

Related text from it:

TEST-PASS | test262\built-ins\Atomics\wait\no-spurious-wakeup.js | (args: "--baseline-eager") [1.4 s]
TEST-PASS | test262\built-ins\Atomics\wake\nonshared-int-views.js | (args: "--ion-eager --ion-offthread-compile=off") [0.7 s]
TEST-PASS | test262\built-ins\Atomics\wait\no-spurious-wakeup.js | (args: "") [1.6 s]
## test262\built-ins\Atomics\wait\no-spurious-wakeup.js: rc = 3, run time = 1.647
uncaught exception: Test262Error: Expected SameValue(«false», «true») to be true
TEST-UNEXPECTED-FAIL | test262\built-ins\Atomics\wait\no-spurious-wakeup.js | (args: "--ion-eager --ion-offthread-compile=off") [1.6 s]
TEST-PASS | test262\built-ins\Atomics\wait\was-woken.js | (args: "--ion-eager --ion-offthread-compile=off") [1.6 s]
TEST-PASS | test262\built-ins\Atomics\wake\shared-nonint-views.js | (args: "--baseline-eager") [1.3 s]
TEST-PASS | test262\built-ins\Atomics\wake\shared-nonint-views.js | (args: "--ion-eager --ion-offthread-compile=off") [1.3 s]
TEST-PASS | test262\built-ins\Atomics\wake\wake-all-on-loc.js | (args: "") [1.3 s]

Can you take another look at this bug? Quite possibly this is a new failure, not related to the one you fixed already.
Whiteboard: [stockwell needswork]
It looks like test262 has not been pulled to our repos since ... May 18, maybe (comment 9, comment 11). The lag here is a persistent pain point, but it isn't really my department. Let's ask some people whose department it may be.
Pushed by firstname.lastname@example.org: https://hg.mozilla.org/integration/mozilla-inbound/rev/0d2943666412 merge from upstream test262 to kill intermittents. r=me
As there was no information forthcoming about a possible merge date, I have merged the two test cases from upstream test262 myself.
You can update test262 by running test262-update.py from js/src/tests and committing the result. I don't know if we have any sort of policy on when it's ok to update, but it seems like if it doesn't break anything, we can update anytime it's useful.