Closed Bug 530953 Opened 15 years ago Closed 14 years ago

TM: make trace-test.py and jstests.py more consistent with each other

Categories

(Core :: JavaScript Engine, defect)

Platform: x86 Linux
Priority: Not set
Severity: normal

Tracking


RESOLVED DUPLICATE of bug 638219

People

(Reporter: n.nethercote, Assigned: dmandelin)

Details

trace-test.py and jstests.py are both really useful. They're also very similar, but have different options. It would be good to make the options more consistent with each other.

-------------------------------------
common
-------------------------------------
  -h, --help            show this help message and exit
  -s, --show-cmd        show js shell command run
  -o, --show-output     show output from js shell
  --no-progress         hide progress bar
  -g, --debug           run test in debugger [usage msg differs]
  --tinderbox           Tinderbox-parseable output format

These are all fine.

-------------------------------------
same/similar name, not quite the same meaning
-------------------------------------
  --valgrind-all        Run all tests with valgrind, if valgrind is in $PATH.
  --valgrind            Enable the |valgrind| flag, if valgrind is in $PATH.

  --valgrind            run tests in valgrind

Not sure what to do with this one... maybe in trace-test.py the --valgrind behaviour should be the default, and --valgrind-all should be renamed --valgrind? (I thought trace-test.py was running Valgrind on some tests by default; is that not the case?)

  -x EXCLUDE, --exclude=EXCLUDE
                        exclude given test dir or path
  -x EXCLUDE_FILE, --exclude-file=EXCLUDE_FILE
                        exclude tests from the given file

Hmm, not sure. Making the option name the same makes sense, but they have slightly different meanings.

-------------------------------------
trace-test.py only
-------------------------------------
  -f, --show-failed-cmd show command lines of failed tests

This should be added to jstests.py; it's really useful. It requires renaming -f in jstests.py.

  --no-slow             do not run tests marked as slow

Fine.
-------------------------------------
file ones
-------------------------------------
  -w FILE, --write-failures=FILE
                        Write a list of failed tests to [FILE]
  -r FILE, --read-tests=FILE
                        Run test files listed in [FILE]
  -R FILE, --retest=FILE
                        Retest using test list file [FILE]
  -O OUTPUT_FILE, --output-file=OUTPUT_FILE
                        write command output to the given file
  -f TEST_FILE, --file=TEST_FILE
                        get tests from the given file
  -m MANIFEST, --manifest=MANIFEST
                        select manifest file

I'm not sure of the difference between all of these because I'm not sure of the differences between the test selection mechanisms. -f in jstests.py should be renamed, though, as above.

-------------------------------------
jstests.py only
-------------------------------------
  -j WORKER_COUNT, --worker-count=WORKER_COUNT
                        number of worker threads to run tests on (default 2)
  -t TIMEOUT, --timeout=TIMEOUT
                        set test timeout in seconds
  -d, --exclude-random  exclude tests marked random
  --run-skipped         run skipped tests
  --run-only-skipped    run only skipped tests
  --args=SHELL_ARGS     extra args to pass to the JS shell
  --valgrind-args=VALGRIND_ARGS
                        extra args to pass to valgrind
  -c, --check-manifest  check for test files not listed in the manifest

All fine.
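One way the common flags could be kept in sync is a shared helper that both scripts import. This is a hypothetical sketch only, not the actual harness code; the function name and module layout are illustrative (the real scripts each build their own OptionParser):

```python
# Hypothetical sketch: a shared option set that trace-test.py and
# jstests.py could both import so the common flags never drift apart.
# Names here are illustrative, not taken from the actual harnesses.
from optparse import OptionParser

def add_common_options(op):
    """Register the flags both harnesses share on an OptionParser."""
    op.add_option('-s', '--show-cmd', dest='show_cmd',
                  action='store_true', help='show js shell command run')
    op.add_option('-o', '--show-output', dest='show_output',
                  action='store_true', help='show output from js shell')
    op.add_option('--no-progress', dest='hide_progress',
                  action='store_true', help='hide progress bar')
    op.add_option('-g', '--debug', dest='debug',
                  action='store_true', help='run test in debugger')
    op.add_option('--tinderbox', dest='tinderbox',
                  action='store_true', help='Tinderbox-parseable output format')
    return op

# Each script would then add only its own script-specific options on top.
op = add_common_options(OptionParser(usage='%prog [options]'))
options, args = op.parse_args(['-s', '--no-progress'])
print(options.show_cmd, options.hide_progress, options.debug)  # True True None
```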
Another thing: you can run trace-test.py from within js/src/, but jstests.py has to be run from within js/src/tests/. It'd be great if they could both be run from within js/src/.
Yet another thing: the output formats are different. For trace-test.py it's like this:

  [n_run | n_failed | n_passed]

and for jstests.py it's like this:

  [n_run | n_failed | n_skipped]

This is a bit confusing (I just ran across it running the two scripts in tandem).
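A shared progress-label helper would make the counters agree too. Again a hypothetical sketch (function name, field order, and exact formatting are illustrative, not the scripts' real code):

```python
# Hypothetical sketch: one progress label used by both harnesses, so
# the third field always means the same thing in both scripts.
def progress_label(n_run, n_failed, n_passed):
    """Format the shared progress counters as [run | failed | passed]."""
    return '[%d | %d | %d]' % (n_run, n_failed, n_passed)

print(progress_label(100, 3, 97))  # [100 | 3 | 97]
```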
Status: ASSIGNED → RESOLVED
Closed: 14 years ago
Resolution: --- → DUPLICATE