I think running tests should be as pleasant an experience as possible. The current wall of text developers see when invoking the harness does not a pleasant experience make. The mach formatter is slightly better than the TBPL one, but imo it doesn't go nearly far enough. When running locally, I think test harnesses should have the following log-related goals:

1. Remove all irrelevant clutter when streaming live. Imo this is everything except SUITE-START and TEST-START/END (though TEST-START/END should be on the same line).
2. Ability to view the full TBPL (or mach) formatted log after the run has finished. The structured log should be saved, and we should create a new mach command that can format it post-run to whatever the developer wants (we could potentially create some cool html formatters that would fit in well with this). This is very similar to the "./mach build && ./mach resource-usage" model.
3. Ability to dump parts of the structured log between any two points in time. E.g. if a test fails, we should be able to dump just the log output that occurred between its TEST-START and TEST-END.
4. This is kind of unrelated, but for things like |mach test| we need to be able to combine multiple logs into a single one. This will have to be done by the mach command itself instead of the harness.

I think the first step is to create a minimalistic formatter that the tests can run while streaming live. I'm partial to something like:

    Running <SUITE>
    path/to/test1... PASS
    path/to/test2... FAIL
        test output
        test output
        test output
    path/to/test3...

Next, the test harnesses (or mach) would set things up to stream this new formatter to stdout, and stream the raw log to a known file location. Finally, the new |mach dump-log| command (or whatever it's called) would read the log from this known file and print it to stdout or another file in whatever format the developer specifies. Subsequent test runs would overwrite this known log file.
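To make the idea concrete, here's a rough sketch of what the minimal streaming formatter could look like. It's not the real mozlog formatter API; it just assumes the structured log is one JSON object per line with "action", "test", "status", and "message" keys, which is roughly the shape mozlog's raw logs have:

```python
import json

def format_minimal(stream):
    """Collapse a structured log (one JSON object per line) into the
    proposed minimal output: one line per test, with buffered log
    output shown only when a test fails."""
    lines = []
    buffered = []  # log messages seen since the last test_start
    for raw in stream:
        data = json.loads(raw)
        action = data.get("action")
        if action == "suite_start":
            lines.append("Running %s" % data.get("source", "<SUITE>"))
        elif action == "test_start":
            buffered = []
        elif action == "log":
            buffered.append(data.get("message", ""))
        elif action == "test_end":
            status = data.get("status", "?")
            lines.append("%s... %s" % (data["test"], status))
            if status != "PASS":
                lines.extend("    " + msg for msg in buffered)
    return "\n".join(lines)
```

A real implementation would plug into mozlog's formatter machinery rather than re-parsing JSON, but the buffering trick (hold log output per test, emit it only on failure) is the core of it.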
I think there are a lot of ways to improve the current logging situation when running locally, and this is just one approach. So feedback and alternative ideas are much appreciated!
I kept meaning to file a bug on part of this: I have a hard time interpreting test results when running them locally because of the wall of text. Your proposal there sounds sane, although if you're running a lot of tests the failure output is likely to be lost in scrollback. If we could print a consistent summary at the end with the number of passed and failed tests, and the failing test names, that would be a big improvement; and if we then added "run ./mach display-test-log to see detailed test logs", that would be amazing. (We could make that run a webserver and display the results in-browser, and share the html log generating code with treeherder!)
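The end-of-run summary is also easy to derive from the saved structured log. A sketch, again assuming JSON-lines input with "action"/"test"/"status" keys rather than any real mozlog API:

```python
import json
from collections import Counter

def summarize(stream):
    """Walk a structured log (one JSON object per line) and build the
    end-of-run summary: per-status counts plus failing test names."""
    counts = Counter()
    failures = []
    for raw in stream:
        data = json.loads(raw)
        if data.get("action") != "test_end":
            continue
        status = data.get("status", "?")
        counts[status] += 1
        if status not in ("PASS", "OK", "SKIP"):
            failures.append(data["test"])
    summary = ", ".join("%d %s" % (n, s) for s, n in sorted(counts.items()))
    if failures:
        summary += "\nFailing tests:\n" + "\n".join("  " + t for t in failures)
    return summary
```

Because this reads from the saved log rather than harness state, the same code could back both the live summary and a post-run |mach dump-log|-style command.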
Yeah, I agree that we should have a summary with failing test names at the end. It would be nice to do that in addition to failure output in the live stream, though. That way if a developer happens to notice a failure, they can start debugging it immediately without waiting for the run to finish. In an html version, we can put anchors on the failure output and link to it from the summary (like treeherder does currently). Another thing to consider is that we should even be able to inspect partial logs, i.e. if there's a timeout, crash or Ctrl-C.
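Handling partial logs mostly means tolerating a half-written last line. A minimal sketch of a tolerant reader (hypothetical helper, not part of mozlog):

```python
import json

def iter_entries(stream):
    """Yield parsed entries from a possibly-truncated structured log
    (e.g. after a crash or Ctrl-C), skipping any half-written line
    so we can still summarize everything that completed."""
    for raw in stream:
        try:
            yield json.loads(raw)
        except ValueError:
            continue  # truncated or corrupt line; keep what we have
```

Any formatter or summary command built on top of an iterator like this would work on partial logs for free.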
I agree that the current format isn't ideal. It's worth noting that some harnesses run tests in parallel, so putting things all on one line doesn't work (with simple line-based output) without lying about the order in which things happened, and it might prevent seeing which test is in progress in the case of a hang or crash or whatever. This also makes the "dump between any two points in time" feature less useful (but not useless!), of course. Note that many of the other tools you talk about already exist (but aren't wired into mach). In particular, you might find https://dxr.mozilla.org/mozilla-central/source/testing/mozbase/mozlog/mozlog/structured/scripts/__init__.py interesting. ted: what's wrong with the summary that we currently have (in the mach formatter)?
Good point about the parallel harnesses. I guess in that case it's probably best to ignore TEST-START and just print on TEST-END.
BTW, if you want more minimal output there is always --log-unittest.
A progress bar would be nice for terse output. If I just want to know that all the tests passed, and I trust that I'm running the tests I think I am, I don't really need to see each path. xpcshell is the only harness I'm aware of that runs things in parallel, and it always retries failing tests sequentially. There's still something canonical about a "full" log: it contains (without interfering with) the output coming from various C++ and JS sources that might not be the test harness, and it's probably most useful to someone working backwards from a test failure to the error that caused it. That's a somewhat different use from determining whether and which tests failed, but as this bug suggests, we need to provide easy access to it when requested.
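For the terse case, the progress bar itself is simple; the interesting part is just hooking it to test_end events. A purely illustrative sketch (the `\r` rewrite assumes a plain terminal, and the hook point into the harness is hypothetical):

```python
def progress_bar(done, total, width=30):
    """Render a single-line progress bar meant to be rewritten in
    place with '\r'. A formatter's test_end handler would call this
    after each test completes and write the result to stdout."""
    filled = int(width * done / total) if total else width
    bar = "=" * filled + " " * (width - filled)
    return "\r[%s] %d/%d" % (bar, done, total)
```

With a structured log underneath, the bar could coexist with a running list of failing test names, since failures are distinguishable events rather than lines to grep for.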
wpt also runs tests in parallel in some configurations (e.g. the one that servo uses). So one approach here is to write a web-based log viewer that can also accept streaming input. This could be shared between treeherder and local test runs so that you got one UI for both cases (in the glorious future when we can get real time information about jobs running in automation that would be even more useful). The command line would just show a progress bar and maybe a list of failing tests at the end. But I don't know if people want to load a web page just to watch the progress of a test run.
Personally I like seeing the test paths with a PASS/FAIL; I find it reassuring that things are doing what they're supposed to. But I guess that's the good thing about structured logs: everyone can run with their preferred format :). It strikes me that it might be useful if mozlog could get the default formatter via an environment variable, so devs could set it once and forget about it. Progress bars and web viewers both sound like great ideas. Another nicety I'd like, when running |mach test| or |mach mochitest|, is a list of all the suites at the bottom, with the currently running suite highlighted (the same as how the build highlights the current build step).
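The environment-variable idea could be as simple as a lookup in front of whatever formatter registry mozlog ends up with. Everything here is made up for illustration: the variable name MOZLOG_FORMAT, the registry, and the formatter names are all placeholders, not existing mozlog behavior:

```python
import os

# Hypothetical registry of formatter factories; mozlog would supply
# the real mapping from names to formatter classes.
FORMATTERS = {
    "minimal": lambda: "minimal-formatter",
    "tbpl": lambda: "tbpl-formatter",
    "mach": lambda: "mach-formatter",
}

def default_formatter(env=os.environ):
    """Pick the default formatter from a MOZLOG_FORMAT environment
    variable (made-up name), falling back to the mach formatter for
    unset or unknown values."""
    name = env.get("MOZLOG_FORMAT", "mach")
    return FORMATTERS.get(name, FORMATTERS["mach"])()
```

Harness command-line flags would still override this, so the env var only changes the default, not what's possible per run.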