Open Bug 1157743 Opened 5 years ago Updated 4 years ago
Allow skipping of the file-change detection when running tests (repeated xpcshell test runs are too slow)
I'm trying to write and debug a single xpcshell test (one in browser/components/loop). Currently I'm spending about 18 seconds waiting for each test run: roughly 15 seconds go to disk access, which I assume is checking for build config changes, and only 2-3 seconds are the actual test. I have an SSD with plenty of free space and I'm running Mac OS X 10.9 on a MacBook Pro. Can we have an option, specified when running tests, to skip the build system checks so that we can avoid the unnecessary disk checking and just run the test?
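A skip switch along these lines could be as simple as an environment-variable guard around the manifest-processing step. This is a minimal sketch only; the variable name MOZ_SKIP_MANIFEST_CHECK and the function names are hypothetical, not an actual mach or mozbuild API:

```python
import os

def maybe_process_manifests(process_fn):
    # Hypothetical guard: skip install-manifest processing when the
    # developer explicitly opts out via an environment variable.
    if os.environ.get("MOZ_SKIP_MANIFEST_CHECK") == "1":
        return "skipped"
    # Otherwise run the (expensive) manifest-processing callable.
    return process_fn()

print(maybe_process_manifests(lambda: "processed"))
```

The trade-off with any such flag is that developers who forget they set it will run tests against stale files, so it would need to be loudly opt-in per invocation.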
Detailed steps to reproduce would be appreciated. Consider also throwing in the output of `python -m cProfile -s tottime mach ...`.
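For context, this is plain cProfile usage (not mach-specific): `-s tottime` sorts the report by total time spent inside each function, which surfaces the CPU-bound hot spots first. A self-contained illustration, with a deliberately busy function standing in for mach:

```python
import cProfile
import io
import pstats

def busy_loop():
    # Deliberately CPU-bound work so this function tops the tottime report.
    return sum(i * i for i in range(200000))

profiler = cProfile.Profile()
profiler.enable()
busy_loop()
profiler.disable()

buf = io.StringIO()
# Sort by time spent in each function itself, mirroring `-s tottime`.
pstats.Stats(profiler, stream=buf).sort_stats("tottime").print_stats(3)
print("busy_loop" in buf.getvalue())
```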
STR:
1) Run ./mach xpcshell-test browser/components/loop/test/xpcshell/test_looprooms.js
2) Alter browser/components/loop/test/xpcshell/test_looprooms.js
3) Run the test from step 1 again.

Today it's taking about 5 seconds for this to appear:

From _tests: Kept 35652 existing; Added/updated 0; Removed 0 files and 0 directories.

The other day it was taking 10-15 seconds. A git gc has happened since then, so that might have helped - though I'm not sure why (I've also had a restart or two). The test itself completes in about 2 seconds. I'm attaching the profile output as well.
The patch I just uploaded adds some hacky forensics logging for manifest processing. You can get timing by running something like the following from the objdir:

$ time _virtualenv/bin/python -m mozbuild.action.process_install_manifest _tests _build_manifests/install/tests
manifest load: 0.0848879814148
resolve time: 0.688506603241
registry pop: 1.12121701241
dir creation: 0.0288000106812
walking: 0.514410972595
install: 0.639070987701
remove files: 0.0183751583099
remove dirs: 0.0011830329895
total copy: 1.20184016228
total copy: 1.44569587708
total: 2.6518008709
From _tests: Kept 35676 existing; Added/updated 0; Removed 0 files and 0 directories.

real 0m2.746s
user 0m2.195s
sys 0m0.547s

Playing around some more, this code is CPU bound (on my machine - a MBP with an SSD). Throwing more I/O threads at the mix doesn't increase performance; in fact, using futures.ThreadPoolExecutor slows down execution by 50+%!

There have been other bugs discussing the overhead of install manifest processing. The current code is pretty well optimized (at least for POSIX systems - we may want a special code path for Windows). I reckon the best way to make this faster is to split the test manifests and use multiple processes for processing the files.

For reference, the _tests manifest is ~8x larger than the next largest manifest (dist/include). With 4621 entries, dist/include's manifest does no-op processing in 0.2s, significantly faster than the 2.7s for _tests. Unfortunately, subdirectories of _tests aren't equally sized, so "sharding" the manifests could be painful. Writing an algorithm that intelligently fans work out to multiple cores and processes everything as a tree would be ideal; unfortunately, Python isn't a great language for writing such code. Anyway, the largest _tests sub-directory is 17-18k files, so even with sharding we're bounded by that chunk. Even so, wall time should drop by >50%. That might just be fast enough.
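The multi-process idea above could be sketched as follows. This is an illustrative sketch only, not mozbuild code: process_chunk here just counts entries where the real code would stat/copy each (src, dest) pair, and round-robin sharding is one simple way to cope with the unequal subdirectory sizes mentioned above:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for real manifest processing: the actual code would
    # stat/copy each (src, dest) pair; here we just count entries.
    return len(chunk)

def shard(entries, n):
    # Round-robin sharding: spreads entries evenly across n chunks
    # even when the source subdirectories are unequally sized.
    return [entries[i::n] for i in range(n)]

if __name__ == "__main__":
    # Fake manifest of 1000 (src, dest) entries.
    entries = [("src%d" % i, "dest%d" % i) for i in range(1000)]
    chunks = shard(entries, os.cpu_count() or 4)
    # Processes sidestep the GIL, unlike the ThreadPoolExecutor
    # experiment above, which made CPU-bound processing slower.
    with ProcessPoolExecutor() as pool:
        processed = sum(pool.map(process_chunk, chunks))
    print(processed)  # 1000
```

The catch, as noted above, is that the speedup is bounded by the largest chunk, so naive per-subdirectory sharding would still leave one 17-18k-file shard on the critical path.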
/r/7879 - Bug XXXXXX - Add futures 2.2.0
/r/7881 - Bug XXXXXXX - Add futures to virtualenv
/r/7883 - print times when processing install manifests

Pull down these commits:

hg pull -r 5200bfbbe11568a0f40cd718799b7a7a4ae86a61 https://reviewboard-hg.mozilla.org/gecko/