The idle thread limit is currently 1, see . So when we allocate threads under concurrency, they very quickly die because of this ridiculously low limit, adding thread-creation overhead and stack re-allocations.
I was trying to reproduce the fast.com issue from bug 1503704 with my local backtrack-instrumented build (only an opt build at hand). I could see 40 stream transport threads being created during a session in which I ran a single fast.com test. With an uninstrumented build (apparently faster, causing a faster drop-off of idle threads) the count could go up to 140 threads across two fast.com tests.
I changed the limit to 5 idle threads in the bt build and ran again. With two fast.com tests, the total number of concurrent stream transport threads peaked at only 7 during the whole browser session.
So I think this is worth trying.
Note that we kill idle threads (or the one lonely thread left) after 30 seconds anyway, so memory consumption will rise only for those 30 seconds at most. I'm also thinking about adding an option for a relative timeout calculation: max idle time = configured idle time / number of currently idle threads. With 5 idle threads, the first idle thread would go away after 6 seconds (30 / 5), the next one 1.5 s later (when idle for 7.5 s = 30 / 4), another one 2.5 s after that (when idle for 10 s = 30 / 3), etc. This also plays nicely with a fluctuating load: say there are 5 idle threads; before we start killing them, the user interacts and starts loads, we begin posting to the STS, and the idle timeout lengthens again as the number of idle threads goes down.
This is also somewhat a prerequisite for bug 1527712, which uses the STS for I/O by posting events to that thread. A single test run that loads a lot of local chrome content spawns 400+ threads! I confirmed that with a 5-idle-thread limit it goes down to only 8.