Implement backpressure for IndexedDB so eviltraps content will OOM crash or slow its content process rather than the parent process
Categories
(Core :: Storage: IndexedDB, enhancement, P3)
People
(Reporter: asuth, Unassigned)
References
(Blocks 1 open bug)
Details
There's a wide variety of fuzzer and eviltraps bugs on file where buggy or nefarious content code generates a lot of garbage data, which can cause problems for both the content process and the parent process. Because IndexedDB eagerly sends requests to the parent process, and those requests are currently routed through PBackground, ill-behaved content can cause problems for the parent process when those problems should ideally be confined to the content process.
We've definitely discussed this problem before but I couldn't find an existing bug, so I'm filing this to track the IndexedDB case. It likely makes sense for us to have additional variations of this bug, and potentially even a meta bug.
In general, it seems like a viable approach might be:
- Move to using endpoints for each open database so that IPC data will be routed directly to the servicing IndexedDB TaskQueue for the database, minimizing the spam on the PBackground thread (or its successor).
- Talk with the IPC team about being able to create a backpressure mechanism that can operate on a top-level protocol basis.
- If the IPC team doesn't like the idea, implement our own backpressure mechanism. A sketch:
- We assign a resource cost to each outstanding request we send. We set a per-agent (thread) budget which should be large enough that normal bursty use should not be affected. We maintain a running tally of our "outstanding spend" and we reduce it each time a request is marked as completed by the parent. I think this ends up fairly straightforward for IDB.
- The main complexity is that our queue potentially needs to cover all requests issued from all globals on that thread in order to make sure that someone can't just use a bunch of iframes to sidestep the mitigation.
- Each time we go to send a new request, we check that request's cost against our outstanding spend and our budget. If our existing outstanding spend is nonzero and the new request would put us over the budget, we put the request in a queue. As requests complete and space frees up in our budget, we send the queued requests.
- This is the simplest approach: we just let memory usage terminate the content process and don't create additional complexity for ourselves by trying to throttle the content code. However, we could also do things like trip the slow-script dialog and suspend execution for the global (while spinning a nested event loop so the process remains responsive) to try and give the parent process room to breathe.
- Show the IPC team our implementation and have them realize they can systematically implement something much nicer!
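The budget-and-queue sketch above might look roughly like the following. This is purely illustrative C++; RequestBudget and every other name here are hypothetical, not actual Gecko identifiers, and a real implementation would need to be per-agent (per-thread) and integrate with the actual IPC send path:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <functional>

// Hypothetical per-agent request budget. Requests that fit within the budget
// are sent immediately; requests that would exceed it are queued and sent as
// earlier requests complete.
class RequestBudget {
 public:
  explicit RequestBudget(uint64_t aBudget) : mBudget(aBudget) {}

  // Send the request now if it fits; otherwise queue it. A request is always
  // sent when nothing is outstanding, so a single oversized request can't
  // deadlock the queue.
  void MaybeSend(uint64_t aCost, std::function<void()> aSend) {
    if (mOutstanding > 0 && mOutstanding + aCost > mBudget) {
      mQueue.push_back({aCost, std::move(aSend)});
      return;
    }
    mOutstanding += aCost;
    aSend();
  }

  // Called when the parent marks a request as completed. Reduces the
  // outstanding spend and drains any queued requests that now fit.
  void OnRequestComplete(uint64_t aCost) {
    mOutstanding -= aCost;
    while (!mQueue.empty()) {
      auto& front = mQueue.front();
      if (mOutstanding > 0 && mOutstanding + front.mCost > mBudget) {
        break;
      }
      mOutstanding += front.mCost;
      front.mSend();
      mQueue.pop_front();
    }
  }

  uint64_t Outstanding() const { return mOutstanding; }
  size_t QueueLength() const { return mQueue.size(); }

 private:
  struct QueuedRequest {
    uint64_t mCost;
    std::function<void()> mSend;
  };

  uint64_t mBudget;
  uint64_t mOutstanding = 0;
  std::deque<QueuedRequest> mQueue;
};
```

Queued sends here execute inline from OnRequestComplete for simplicity; the real thing would presumably dispatch them back through the normal send machinery.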
Notable constants in this space:
- kTooLargeStream = 1024 * 1024 impacts whether we serialize a stream in its entirety or connect it to a pipe (which does have backpressure).
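The effect of that constant can be sketched as a simple size threshold. The enum, helper, and the exact comparison direction below are assumptions for illustration; only the constant's name and value come from the actual code:

```cpp
#include <cstdint>

// Mirrors the constant described above.
constexpr uint64_t kTooLargeStream = 1024 * 1024;

enum class StreamStrategy { SerializeInline, ConnectPipe };

// Small streams are serialized in their entirety into the IPC message;
// streams at or over the threshold are connected to a pipe, which does
// have backpressure. (Whether the boundary case goes to the pipe is an
// assumption here.)
StreamStrategy ChooseStreamStrategy(uint64_t aStreamSize) {
  return aStreamSize >= kTooLargeStream ? StreamStrategy::ConnectPipe
                                        : StreamStrategy::SerializeInline;
}
```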