Closed Bug 1619339 Opened 4 years ago Closed 4 years ago

set max number of files from within mach prior to exec'ing python

Categories

(Firefox Build System :: General, enhancement, P3)


Tracking

(firefox78 fixed)

RESOLVED FIXED
mozilla78

People

(Reporter: froydnj, Assigned: froydnj)

Details

Attachments

(1 file)

This is the same sort of fix we applied to automation to make configure run faster there, and making the fix in mach means that local builds (well, at least configure) get faster for developers as well.
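For context, the cost being avoided can be sketched in a few lines of Python (a sketch assuming a Unix-ish system with `true` on PATH; on Pythons or platforms without a fast fd-listing mechanism such as /proc/self/fd, the pre-exec descriptor-closing loop walks all the way up to the soft limit):

```python
import resource
import subprocess

# Python's fd-closing logic before exec is bounded by the *soft*
# RLIMIT_NOFILE (the value sysconf(_SC_OPEN_MAX) reports); on setups
# where that limit is huge (e.g. 1M+), every close_fds=True fork can
# pay for iterating over the whole range.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft/hard RLIMIT_NOFILE:", soft, hard)

# close_fds=True has been the default since Python 3.2, so every
# subprocess spawned by mach is affected.
subprocess.run(["true"], close_fds=True, check=True)
```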

Something like this hack seems to work for me; tcampbell (who was noticing massive differences in configure speed on local builds) reports success with it as well:

--- a/mach
+++ b/mach
@@ -104,6 +104,18 @@ py2commands="
 "
 
 run_py() {
+    # Set a reasonable limit to the number of open files.
+    # The number of open files affects the performance of closing file descriptors
+    # before forking (as Python's `subprocess.Popen(..., close_fds=True)` does).
+    max=8192
+    soft=$(ulimit -n) || soft=$max
+    hard=$(ulimit -Hn) || hard=$max
+
+    if test "$hard" -gt "$max"; then
+        ulimit -Sn "$max" || true
+        ulimit -Hn "$max" || true
+    fi
+
     # Try to run a specific Python interpreter. Fall back to the system
     # default Python if the specific interpreter couldn't be found.
     py_executable="$1"

I am not completely convinced this handles all the corner cases. I sort of modeled it after bug 1616790, but now that I look at that one, I'm not sure that bug actually solves anything in automation: won't everything continue to use the hard limit of 1M open files with that patch, which isn't what we want?

Flags: needinfo?(mh+mozilla)
Priority: -- → P3

Perfherder shows bug 1616790 didn't break anything.
https://treeherder.mozilla.org/perf.html#/graphs?highlightAlerts=1&series=autoland,1582196,1,2&series=autoland,1618746,1,2&series=autoland,1691782,1,2&timerange=2592000

Touching the hard limit is not what matters. In fact, it's better to keep the hard limit high because Firefox does increase its own limit, and touching the hard limit in mach would prevent Firefox from doing it under mach run.

What matters is the soft limit, which just needs to be downgraded to a "normal" value.

With all that said, mach is mainly Python; why not copy the code from run-task into mach mostly verbatim rather than trying to do it in shell?
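A minimal Python sketch of that suggestion (the `lower_soft_fd_limit` name and the 8192 ceiling here are illustrative, not what run-task or mach actually hard-codes):

```python
import resource

def lower_soft_fd_limit(ceiling=8192):
    """Lower only the soft RLIMIT_NOFILE, leaving the hard limit alone."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft > ceiling:
        # A child process can always raise its soft limit back up to
        # the untouched hard limit, so Firefox under `mach run` can
        # still increase its own limit.
        resource.setrlimit(resource.RLIMIT_NOFILE, (ceiling, hard))

lower_soft_fd_limit()
```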

Flags: needinfo?(mh+mozilla)

(In reply to Mike Hommey [:glandium] from comment #2)

> Touching the hard limit is not what matters. In fact, it's better to keep the hard limit high because Firefox does increase its own limit, and touching the hard limit in mach would prevent Firefox from doing it under mach run.
>
> What matters is the soft limit, which just needs to be downgraded to a "normal" value.

I verified that sysconf(_SC_OPEN_MAX), which is what Python uses for its file descriptor closing, does indeed return the soft limit, which is what I wasn't sure about.
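That check can be reproduced in a few lines (a sketch assuming glibc-style behavior, where `sysconf(_SC_OPEN_MAX)` is backed by `getrlimit(RLIMIT_NOFILE)`):

```python
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# sysconf(_SC_OPEN_MAX) reflects the *soft* limit...
assert os.sysconf("SC_OPEN_MAX") == soft

# ...and tracks it when the soft limit is lowered:
new_soft = min(soft, 4096)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
assert os.sysconf("SC_OPEN_MAX") == new_soft

# Restore the original soft limit (raising it back up to the
# unchanged hard limit is always permitted).
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```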

We do this to avoid unnecessarily penalizing subprocess invocations from
within mach (and all child processes) on some Linux setups.

Assignee: nobody → nfroyd
Status: NEW → ASSIGNED
Pushed by nfroyd@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/76c9b2477a52
explicitly set the soft ulimit for open files from mach; r=glandium
Flags: needinfo?(nfroyd)
Pushed by nfroyd@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/462e54ecb423
explicitly set the soft ulimit for open files from mach; r=glandium
Status: ASSIGNED → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla78
