STEPS TO REPRODUCE:
1) Compile from revision 049f8d7499d7 or any later revision (e.g. 723b25eb3dd8 shows the problem).
2) mach wpt testing/web-platform/tests/html/dom/documents/dom-tree-accessors/Document.body.html

EXPECTED RESULTS: The test runs.

ACTUAL RESULTS: I see output like so:

 0:02.61 INFO Using 1 client processes
 0:02.80 INFO Starting http server on web-platform.test:8000
 0:02.80 INFO Starting http server on web-platform.test:8001
 0:02.81 INFO Starting http server on web-platform.test:8443

and then nothing. This happens on both Linux64 and Mac.

I did a bisect, because this used to work before, and it came out with:

The first bad revision is:
changeset:   398399:049f8d7499d7
user:        James Graham <email@example.com>
date:        Wed Jan 03 16:24:44 2018 +0000
summary:     Bug 1429043 - Update web-platform-tests to revision 4de5305adf3d33badc23952672bcf28168fea37e, a=testonly

On IRC, Hiro said that things work for him, so it's possible that something about my setup is causing this. Although again, it's happening on two different systems...

This is currently blocking me from finishing up some patches, because I can't run the relevant tests. :(

James, any idea what might be going on here? Anything I can log to help debug this?
I can't reproduce this on OSX or Linux :/ bkelley reported problems on Windows if /etc/hosts didn't contain the various web-platform.test hosts. That makes some sense, as foolip changed the server entry point in one of the changes in that import, but I tried deleting those entries on the aforementioned platforms and it didn't reproduce then either. You could try specifying them and seeing if it makes a difference, but it explicitly shouldn't be necessary. I guess running with --mach-log-level=debug might produce something useful, or if it's hanging, either strace-type output or the python stack from gdb might provide a clue (i.e. the one from py-bt for the various processes; I have no idea how to get that if you have only lldb).
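For reference, the /etc/hosts entries wpt expects look roughly like the following (this list is from memory of the wpt setup docs of that era; the exact set of subdomains has varied across imports, so check the current output of wpt's hosts-file helper rather than trusting this verbatim):

```
127.0.0.1	web-platform.test
127.0.0.1	www.web-platform.test
127.0.0.1	www1.web-platform.test
127.0.0.1	www2.web-platform.test
127.0.0.1	xn--n8j6ds53lwwkrqhv28a.web-platform.test
127.0.0.1	xn--lve-6lad.web-platform.test
```

The punycode names exist so that tests can exercise internationalized domain handling against the same local server.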
> You could try specifying them

Which ones, exactly?

Passing --log-mach-level=debug doesn't give me any more output. I don't seem to have a "py-bt" in my gdb ("GNU gdb (GDB) Fedora 8.0-24.fc26")...

I'll attach the "working" and "broken" straces, then go poke around a bit, I guess.
So one thing I've found so far is that https://searchfox.org/mozilla-central/rev/38bddf549db0f26d4d7575cf695958ff703a3ed7/testing/web-platform/tests/tools/wptrunner/wptrunner/testrunner.py#318 is apparently not being reached...
So some more data. In https://searchfox.org/mozilla-central/rev/38bddf549db0f26d4d7575cf695958ff703a3ed7/testing/web-platform/tests/tools/wptrunner/wptrunner/environment.py#228 we used to have self.config["host"] set to "127.0.0.1". In the new setup it's set to "web-platform.test". Now my ISP has a lying DNS server that happily resolves "web-platform.test":

% dig web-platform.test
....
web-platform.test.	0	IN	A	188.8.131.52

which is not helpful here. If I map "web-platform.test" to "127.0.0.1" in /etc/hosts, then things work fine.

It's not clear to me what the point of test_servers there is.
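A minimal sketch of the diagnostic above, for anyone hitting a similar hang: check whether the local resolver is lying about the reserved .test TLD. The function name and default argument here are my own, not part of wptrunner:

```python
import socket

def lying_resolver_address(host="web-platform.test"):
    """Return the non-loopback address a resolver hands back for `host`,
    or None if the name is unresolvable or points at loopback.

    An honest resolver should return NXDOMAIN for the reserved .test
    TLD; a "lying" ISP resolver returns a real address, which makes
    wptrunner connect to the wrong machine unless /etc/hosts maps the
    name to 127.0.0.1.
    """
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        return None  # NXDOMAIN: the resolver is behaving honestly
    return None if addr.startswith("127.") else addr
```

If this returns an address, adding the web-platform.test names to /etc/hosts is the workaround until the runner stops depending on external resolution.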
James, do you think you'll have time to deal with this, or should I?
Right, sorry, I should have said. I put a patch upstream for review: https://github.com/w3c/web-platform-tests/pull/9258. So far I have been moderately unsuccessful at getting anyone to actually review it. I could make it against mozilla-central instead, where it might be easier to get review and you could verify it fixes your issue.
I pushed to try at https://treeherder.mozilla.org/#/jobs?repo=try&revision=d217f09dd395bcbe23c2534775d0dfa22edbcaf6 bz: are you able to pull that tree and verify whether it fixes the problem for you?
Sorry for the lag... The patch linked in comment 9 fixes things for me, thank you! s/repopnsing/responding/ in the commit message, presumably?
Comment on attachment 8948378 [details] Bug 1433361 - Switch external_host to host_ip, https://reviewboard.mozilla.org/r/217846/#review224296
Attachment #8948378 - Flags: review?(mjzffr) → review+
James, is there something holding up this landing, or can I just autoland this?
The holdup is that something else landed upstream which generates merge conflicts. That then caused a whole pile of regressions, and once they were fixed I forgot to rebase the upstream version of this patch. So probably better not to land this; I'll rebase and land the upstream version right now.