Evaluate/consider using shm_open instead of our current shared memory on Mac
Categories: Core :: IPC, enhancement, P3
People: Reporter: jld, Assigned: jld
Attachments: 1 file, 1 obsolete file (patch, 20.43 KB)
Updated•7 years ago
Comment 1•7 years ago (Assignee)
Comment 2•7 years ago (Assignee)
Comment 3•6 years ago (Assignee)
Comment 4•6 years ago (Jeff Muizelaar [:jrmuizel])
Comment 5•6 years ago (Assignee)
(In reply to Jeff Muizelaar [:jrmuizel] from comment #4)
Is this more likely to run into file descriptor exhaustion than the existing
approach?
It will use file descriptors instead of not using them, so strictly speaking the answer is “yes”, but generally we close file descriptors after setting up mappings (ipc::Shmem does this, and CrossProcess{Semaphore,Mutex} expose the ability to do so, which graphics uses). Also, macOS appears not to set a hard limit on RLIMIT_NOFILE, so we can always raise the descriptor limit as much as we want (which is not the case on many Linux distributions; e.g., Ubuntu sets a hard limit of 4k by default).
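(For illustration only: a minimal sketch, assuming POSIX shm on macOS, of the "map, then close the descriptor" pattern described above. The segment name and error handling are hypothetical, not Gecko's actual ipc::Shmem code.)

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
  const char* name = "/example-shm-sketch";  // hypothetical segment name
  const size_t size = 1 << 20;               // 1 MiB

  int fd = shm_open(name, O_RDWR | O_CREAT | O_EXCL, 0600);
  if (fd < 0) { perror("shm_open"); return 1; }
  // In real IPC the fd would first be duplicated/sent to the peer process;
  // the name itself is only needed long enough to do that.
  shm_unlink(name);
  if (ftruncate(fd, size) != 0) { perror("ftruncate"); close(fd); return 1; }

  void* addr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (addr == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

  // Once the mapping exists, the descriptor is no longer needed to use the
  // memory, so closing it keeps the fd count (RLIMIT_NOFILE) low.
  close(fd);

  static_cast<char*>(addr)[0] = 42;  // the mapping stays valid after close()
  munmap(addr, size);
  return 0;
}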
Comment 6•6 years ago (Assignee)
Rebased, and fixed to work with the content sandbox's early init (maybe not the ideal way, but it works). The RDD and probably Flash sandboxes are broken but should be relatively easy to fix.
Comment 7•6 years ago (Assignee)
It looks like bug 1474793 is going to just use base::SharedMemory directly instead of blocking on this (see bug 1474793 comment #45).
Comment 8•6 years ago (Jeff Muizelaar [:jrmuizel])
FWIW, Mojo has switched to using mach on macOS for IPC because of file handle exhaustion: https://docs.google.com/document/d/1nEUEAP3pq6T9WgEucy2j02QwEHi0PaFdpaQcT8ooH9E/edit#
Comment 9•6 years ago (Assignee)
(In reply to Jeff Muizelaar [:jrmuizel] from comment #8)
FWIW, Mojo has switched to using mach on macOS for IPC because of file handle exhaustion: https://docs.google.com/document/d/1nEUEAP3pq6T9WgEucy2j02QwEHi0PaFdpaQcT8ooH9E/edit#
That's interesting. The comment about fd limits in the background section is a little off: for setrlimit/getrlimit, the default soft limit is 256 but there's no hard limit, so it can be increased arbitrarily (and we do, and they do), as I mentioned in comment #5.
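(A minimal sketch of that getrlimit/setrlimit idiom, assuming macOS semantics; RaiseFdLimit is a hypothetical helper, not Gecko code. The OPEN_MAX clamp reflects the macOS behavior that setrlimit() rejects RLIM_INFINITY for RLIMIT_NOFILE.)

#include <sys/resource.h>
#include <limits.h>
#include <algorithm>
#include <cstdio>

static bool RaiseFdLimit(rlim_t wanted) {
  struct rlimit rl;
  if (getrlimit(RLIMIT_NOFILE, &rl) != 0) return false;
  // The default soft limit (rl.rlim_cur) is commonly 256; rl.rlim_max is the
  // ceiling we may raise it to, typically RLIM_INFINITY on macOS.
  rlim_t target = std::min(wanted, rl.rlim_max);
#ifdef OPEN_MAX
  target = std::min(target, static_cast<rlim_t>(OPEN_MAX));
#endif
  if (target <= rl.rlim_cur) return true;  // already high enough
  rl.rlim_cur = target;
  return setrlimit(RLIMIT_NOFILE, &rl) == 0;
}

int main() {
  printf("raised to 4096: %s\n", RaiseFdLimit(4096) ? "ok" : "failed");
  return 0;
}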
But the part about a system-wide limit led me into sysctl, where, on 10.13:
kern.maxfiles: 49152
kern.maxfilesperproc: 24576
So that's potentially a problem. And https://crbug.com/714614 reports a value of 12288 for maxfiles (maybe an older OS version?), which is worse.
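(For reference, a small sketch of reading those ceilings at runtime via sysctlbyname(); the exact values vary by OS version and configuration.)

#include <sys/sysctl.h>
#include <cstdio>

static long QueryKernLimit(const char* name) {
  int value = 0;
  size_t len = sizeof(value);
  if (sysctlbyname(name, &value, &len, nullptr, 0) != 0) return -1;
  return value;
}

int main() {
  printf("kern.maxfiles        = %ld\n", QueryKernLimit("kern.maxfiles"));
  printf("kern.maxfilesperproc = %ld\n", QueryKernLimit("kern.maxfilesperproc"));
  return 0;
}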
We might need to worry about how many fds we're using for IPC channels, too: Chromium's new IPC can multiplex many independent channels over one socket, but ours has a lot of baggage about single-threadedness and message ordering, so various things create their own channels, and empirically (open about:memory and search for ipc-channels) each content process currently has about 10-12 endpoints as a result. But that's a separate problem.
We're definitely going to need some way to transport Mach ports (but probably not the current implementation long-term), and Mach shm has some advantages over POSIX just from an API point of view anyway, so this bug might end up being WONTFIX.
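(For context on the Mach alternative, a rough sketch of the kernel API involved: allocate a region and wrap it in a memory-entry port, which can be sent to another task in a Mach message instead of a file descriptor. Illustrative only; not the code this bug or bug 1474793 would actually land.)

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <cstdio>

int main() {
  const mach_vm_size_t size = 1 << 20;  // 1 MiB
  mach_vm_address_t addr = 0;

  kern_return_t kr =
      mach_vm_allocate(mach_task_self(), &addr, size, VM_FLAGS_ANYWHERE);
  if (kr != KERN_SUCCESS) { fprintf(stderr, "allocate: %d\n", kr); return 1; }

  // Create a send right naming the region; this port, not an fd, is what
  // would be transferred to the other process and mapped with mach_vm_map().
  memory_object_size_t entrySize = size;
  mach_port_t entry = MACH_PORT_NULL;
  kr = mach_make_memory_entry_64(mach_task_self(), &entrySize, addr,
                                 VM_PROT_READ | VM_PROT_WRITE, &entry,
                                 MACH_PORT_NULL);
  if (kr != KERN_SUCCESS) { fprintf(stderr, "make_entry: %d\n", kr); return 1; }

  printf("memory entry port: 0x%x (size %llu)\n", entry,
         (unsigned long long)entrySize);
  return 0;
}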
Updated•2 years ago