Closed Bug 1091505 Opened 5 years ago Closed 5 years ago
Run subconfigures in parallel
No description provided.
On automation, this brings Windows configure time on a clobber from 5:30 down to 3:10. Sadly, because make needs to run under intl/icu/host before configuring intl/icu/target, intl/icu/host needs to be configured independently. Fortunately, that's not configured for normal Windows builds anyway. Also, having multiple subconfigures share the same cache file is dangerously racy. Fortunately, not many do: in fact, only js/src and $_subconfigure_subdir do, so force the latter (only used for the LDAP SDK on comm-central) not to configure in parallel.
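The idea can be sketched roughly as follows. Note this is an illustration, not the actual patch: `run_subconfigure`, the directory list, and the placeholder `sh -c 'exit 0'` command all stand in for the real per-directory configure step in build/subconfigure.py.

```python
import subprocess
from multiprocessing import cpu_count, get_context


def run_subconfigure(srcdir):
    # Placeholder for the real "run configure in objdir/srcdir" step;
    # the parallelism comes from these subprocesses, not from Python.
    proc = subprocess.run(["sh", "-c", "exit 0"])
    return srcdir, proc.returncode


def configure_all(subconfigures):
    # Explicit "fork" start method, as on Linux; don't start more
    # workers than there are directories to configure.
    pool = get_context("fork").Pool(min(len(subconfigures), cpu_count()))
    ret = 0
    for srcdir, code in pool.imap_unordered(run_subconfigure, subconfigures):
        ret = max(ret, code)
    pool.close()
    pool.join()
    return ret


status = configure_all(["intl/icu/target", "nsprpub", "js/src"])
print("overall configure status:", status)
```

A directory that shares a config.cache with another would simply be kept out of the pooled list and run serially, which is what the comment above describes for the ldap sdk case.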
Attachment #8514207 - Flags: review?(gps)
Comment on attachment 8514207 [details] [diff] [review]
Run subconfigures in parallel

Review of attachment 8514207 [details] [diff] [review]:
-----------------------------------------------------------------

This gets r+ iff it doesn't break BSD. I have a feeling it does and you'll need to make use of Pool optional.

::: build/subconfigure.py
@@ +6,5 @@
>  # files and subsequently restore their timestamp if they haven't changed.
>
>  import argparse
>  import errno
> +from multiprocessing import Pool, cpu_count

Unless something changed in the last ~18 months, importing Pool will fail on BSDs. You should double check this or ask Landry to confirm.

@@ +367,5 @@
> +
> +    ret = 0
> +    # One would think using a ThreadPool would be faster, considering
> +    # everything happens in subprocesses anyways, but no, it's actually
> +    # slower on Windows. (20s difference overall!)

That difference is surprising. Then again, everything around concurrency in Python is mostly a joke, sadly. Did you try this with mozprocess? I'd be very interested if it performs any better than a thread pool. Then again, I'm pretty sure it is using threads in the background to do I/O polling. Maybe it is smarter than whatever you tried? I dunno.
Attachment #8514207 - Flags: review?(gps) → review+
Landry, would this break you?

(In reply to Gregory Szorc [:gps] from comment #2)
> Did you try this with mozprocess?

I didn't, but then, I thought mozprocess was essentially wrapping subprocess, not managing running multiple subprocesses in parallel.
Right, mozprocess is a subprocess wrapper. But it fires a callback whenever there is a line of output, so there are multiple threads involved doing the polling, IIRC. You can make a poor man's mozprocess-based pool easily enough. You may want to do that if Pool won't work. I wouldn't worry about capping the max worker count - I bet we could operate at 2x or 4x CPU count during configure - it isn't very CPU intensive.
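As a rough illustration of such a cheap pool (plain stdlib threads around subprocess, not mozprocess itself; the directory names and the placeholder command are made up):

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor


def run_configure(srcdir):
    # Each thread just blocks on its own subprocess; the driver isn't
    # CPU bound, so oversubscribing threads is cheap.
    proc = subprocess.run(["sh", "-c", "exit 0"])  # placeholder command
    return srcdir, proc.returncode


dirs = ["intl/icu/target", "nsprpub", "js/src"]
# 4x CPU count, per the suggestion that configure isn't very CPU intensive.
with ThreadPoolExecutor(max_workers=4 * (os.cpu_count() or 1)) as pool:
    results = dict(pool.map(run_configure, dirs))
```

Since the workers only wait on subprocesses, a thread pool sidesteps the BSD multiprocessing import problem entirely, at the cost of the Windows slowdown mentioned in the review comment above.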
In fact, simply importing multiprocessing fails on OpenBSD. But using mozprocess without using many more threads would require more rework than I'm ready to put in right now, so let's just create a dummy Pool for OpenBSD.
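A minimal sketch of such a fallback (the class and the subset of methods it exposes are illustrative, not the exact patch):

```python
class DummyPool(object):
    """Sequential stand-in exposing just the Pool methods the script uses."""

    def __init__(self, processes):
        pass

    def imap_unordered(self, func, iterable):
        # Run the jobs one at a time, lazily, in the calling process.
        return (func(item) for item in iterable)

    def close(self):
        pass

    def join(self):
        pass


try:
    from multiprocessing import Pool, cpu_count
except ImportError:
    # e.g. OpenBSD at the time: fall back to the sequential dummy.
    Pool = DummyPool
    cpu_count = lambda: 1
```

The rest of the script can then use `Pool` unconditionally; platforms where the import fails just lose the parallelism.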
Attachment #8516335 - Flags: review?(gps)
Attachment #8514207 - Attachment is obsolete: true
Comment on attachment 8516335 [details] [diff] [review]
Run subconfigures in parallel

Review of attachment 8516335 [details] [diff] [review]:
-----------------------------------------------------------------

WFM
Attachment #8516335 - Flags: review?(gps) → review+
A bit late to the party, but multiprocessing got fixed a while ago. Thanks for thinking about it and providing a fallback in case the import fails, though; this is much more portable to other "exotic OSes" that might have it broken too.

$ python2.7
Python 2.7.8 (default, Oct 6 2014, 13:51:42)
[GCC 4.2.1 20070719 ] on openbsd5
Type "help", "copyright", "credits" or "license" for more information.
>>> from multiprocessing import Pool, cpu_count
>>>

As for make/icu, I still have https://bugzilla.mozilla.org/show_bug.cgi?id=1064665, which is a bit of a problem for me. I don't know if that issue could be vaguely related.
Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla36
With this fix I've got this error on the Mer target:

 0:20.45 checking for posix_fadvise... yes
 0:20.52 checking for posix_fallocate... yes
 0:20.61 updating cache ./config.cache
 0:20.62 creating ./config.status
 0:21.19 Traceback (most recent call last):
 0:21.19   File "/mozilla-central/build/subconfigure.py", line 423, in <module>
 0:21.19     sys.exit(main(sys.argv[1:]))
 0:21.19   File "/mozilla-central/build/subconfigure.py", line 407, in main
 0:21.19     return subconfigure(args)
 0:21.19   File "/mozilla-central/build/subconfigure.py", line 392, in subconfigure
 0:21.19     pool = Pool(min(len(subconfigures), cpu_count()))
 0:21.19   File "/usr/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
 0:21.19     return Pool(processes, initializer, initargs, maxtasksperchild)
 0:21.19   File "/usr/lib/python2.7/multiprocessing/pool.py", line 138, in __init__
 0:21.19     self._setup_queues()
 0:21.19   File "/usr/lib/python2.7/multiprocessing/pool.py", line 233, in _setup_queues
 0:21.19     self._inqueue = SimpleQueue()
 0:21.19   File "/usr/lib/python2.7/multiprocessing/queues.py", line 352, in __init__
 0:21.19     self._rlock = Lock()
 0:21.19   File "/usr/lib/python2.7/multiprocessing/synchronize.py", line 147, in __init__
 0:21.19     SemLock.__init__(self, SEMAPHORE, 1, 1)
 0:21.19   File "/usr/lib/python2.7/multiprocessing/synchronize.py", line 75, in __init__
 0:21.19     sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
 0:21.19 OSError: [Errno 2] No such file or directory
 0:21.20 *** Fix above errors and then restart with\
 0:21.20     "/usr/bin/gmake -f client.mk build"
 0:21.20 gmake: *** [configure] Error 1
 0:21.20 gmake: *** [/objdir/Makefile] Error 2
 0:21.20 gmake: *** [build] Error 2
 0:21.23 0 compiler warnings present.

Build failed, exit
Great, so now we have another platform (Mer) that doesn't support locking in multiprocessing. I bet if you search hard enough, you'll find an issue on the Python issue tracker or the Mer project tracker that documents this. Until we find that, I'm not sure what the best workaround is.
I bet the environment in which that runs doesn't have /dev/shm mounted.
(In reply to Mike Hommey [:glandium] from comment #12)
> I bet the environment in which that runs doesn't have /dev/shm mounted.

The environment is scratchbox2, which has an empty /dev directory. I've mounted the host /dev directory with -obind, but it did not help.
(In reply to Oleg Romashin (:romaxa) from comment #13)
> (In reply to Mike Hommey [:glandium] from comment #12)
> > I bet the environment in which that runs doesn't have /dev/shm mounted.
>
> environment is scratchbox2, which has empty /dev directory
> I've mounted with -obind host /dev directory but it did not help

-obind doesn't mount recursively; /dev/shm is a separate mount. This should be fixed by bug 1094624 anyway.
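A quick diagnostic for whether an environment supports the primitive the traceback dies in (POSIX semaphores, which on Linux are backed by /dev/shm) is to create a multiprocessing lock directly. This probe is an illustration, not part of the patch:

```python
def multiprocessing_locks_work():
    # multiprocessing.Lock() is built on the same SemLock that
    # SimpleQueue needs, so this reproduces the failure cheaply.
    try:
        import multiprocessing
        multiprocessing.Lock()
        return True
    except (ImportError, OSError):
        return False


print("multiprocessing locks available:", multiprocessing_locks_work())
```

In a chroot or scratchbox2 environment without a working /dev/shm, this would print False with the same OSError path as the build log above.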