set max number of files from within mach prior to exec'ing python
Categories
(Firefox Build System :: General, enhancement, P3)
Tracking
(firefox78 fixed)
Tracking | Status
---|---
firefox78 | fixed
People
(Reporter: froydnj, Assigned: froydnj)
Details
Attachments
(1 file)
This is the same sort of fix we applied to automation to make configure run faster there, and making the fix in mach
means that local builds (well, at least configure) get faster for developers as well.
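Why a huge descriptor limit hurts: when Python spawns a subprocess with `close_fds=True` and no fast close-range mechanism is available, it may visit every descriptor up to the soft RLIMIT_NOFILE before exec. A minimal Python sketch of that relationship (the `fd_close_workload` helper is hypothetical, for illustration only):

```python
import resource

def fd_close_workload():
    """Return the number of descriptors a naive close-all loop would visit
    before exec: the soft RLIMIT_NOFILE. (Hypothetical helper for
    illustration; not part of mach.)"""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return soft

# With a typical distribution default of 1024 the close loop runs ~1K
# times per spawn; with a 1048576 (1M) soft limit it runs ~1M times.
print(fd_close_workload())
```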
Assignee
Comment 1 • 4 years ago
Something like this hack seems to work for me; tcampbell (who was noticing massive differences in configure speed on local builds) reports success with it as well:
--- a/mach
+++ b/mach
@@ -104,6 +104,18 @@ py2commands="
 "
 run_py() {
+    # Set a reasonable limit on the number of open files.
+    # The number of open files affects the performance of closing file descriptors
+    # before forking (as Python's `subprocess.Popen(..., close_fds=True)` does).
+    max=8192
+    soft=`ulimit -n` || soft=$max
+    hard=`ulimit -Hn` || hard=$max
+
+    if test "$hard" -gt "$max"; then
+        ulimit -Sn "$max" || true
+        ulimit -Hn "$max" || true
+    fi
+
     # Try to run a specific Python interpreter. Fall back to the system
     # default Python if the specific interpreter couldn't be found.
     py_executable="$1"
I am not completely convinced this handles all the corner cases. I sort of modeled it after bug 1616790, but now that I look at that one, I'm not sure that bug actually solves anything in automation: won't everything continue to use the hard limit of 1M open files with that patch, which isn't what we want?
Updated • 4 years ago
Comment 2 • 4 years ago
Perfherder shows bug 1616790 didn't break anything.
https://treeherder.mozilla.org/perf.html#/graphs?highlightAlerts=1&series=autoland,1582196,1,2&series=autoland,1618746,1,2&series=autoland,1691782,1,2&timerange=2592000
Touching the hard limit is not what matters. In fact, it's better to keep the hard limit high because Firefox does increase its own limit, and touching the hard limit in mach would prevent Firefox from doing it under `mach run`.
What matters is the soft limit, which just needs to be downgraded to a "normal" value.
With all that being said, mach is mainly Python; why not copy the code from run-task into mach mostly verbatim rather than trying to do it in shell?
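The approach suggested above — lowering only the soft limit in Python and leaving the hard limit alone — could be sketched roughly like this (the `lower_soft_fd_limit` helper and the 8192 cap are illustrative assumptions, not the actual run-task code):

```python
import resource

# Cap chosen to match the shell hack in comment 1 (assumption).
MAX_OPEN_FILES = 8192

def lower_soft_fd_limit(cap=MAX_OPEN_FILES):
    """Lower only the soft RLIMIT_NOFILE, leaving the hard limit alone so
    child processes (e.g. Firefox under `mach run`) can raise it again.
    Hypothetical sketch of the run-task-style fix, not the actual code."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft > cap:
        resource.setrlimit(resource.RLIMIT_NOFILE, (cap, hard))

lower_soft_fd_limit()
```

Leaving the hard limit untouched is the key design point: the soft limit is what `sysconf(_SC_OPEN_MAX)` reports, while the hard limit remains available for children to raise their own soft limit back up.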
Assignee
Comment 3 • 4 years ago
(In reply to Mike Hommey [:glandium] from comment #2)
> Touching the hard limit is not what matters. In fact, it's better to keep the hard limit high because Firefox does increase its own limit, and touching the hard limit in mach would prevent Firefox from doing it under `mach run`. What matters is the soft limit, which just needs to be downgraded to a "normal" value.

I verified that `sysconf(_SC_OPEN_MAX)`, which is what Python uses for its file descriptor closing, does indeed return the soft limit, which is what I wasn't sure about.
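That verification can be reproduced directly: on Linux/glibc, `sysconf(_SC_OPEN_MAX)` reads the current soft RLIMIT_NOFILE, so lowering the soft limit is immediately visible to Python. A small sketch (the 512 value is arbitrary, and the original limit is restored afterwards):

```python
import os
import resource

# sysconf(_SC_OPEN_MAX) is what CPython's fd-closing fallback consults;
# on Linux/glibc it reflects the *soft* RLIMIT_NOFILE, not the hard one.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
assert os.sysconf("SC_OPEN_MAX") == soft

# Lower the soft limit and confirm sysconf tracks it immediately.
# (512 is an arbitrary illustrative value; the limit is then restored.)
lowered = min(512, soft)
resource.setrlimit(resource.RLIMIT_NOFILE, (lowered, hard))
assert os.sysconf("SC_OPEN_MAX") == lowered
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```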
Assignee
Comment 4 • 4 years ago
We do this to avoid unnecessarily penalizing subprocess invocations from
within mach (and all child processes) on some Linux setups.
Updated • 4 years ago
Pushed by nfroyd@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/76c9b2477a52
explicitly set the soft ulimit for open files from mach; r=glandium
Comment 6 • 4 years ago
Backed out changeset 76c9b2477a52 (bug 1619339) for Gecko Decision task failure. CLOSED TREE
Log:
https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=301208446&repo=autoland&lineNumber=51
Push with Failures:
https://treeherder.mozilla.org/#/jobs?repo=autoland&group_state=expanded&revision=76c9b2477a5253b699e56101aeed9a0d2e1db784
Backout:
https://hg.mozilla.org/integration/autoland/rev/960dbad748a2263d34b6517d2ded221170750da1
Assignee
Updated • 4 years ago
Pushed by nfroyd@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/462e54ecb423
explicitly set the soft ulimit for open files from mach; r=glandium
Comment 8 • 4 years ago
bugherder