http://robothaus.org/mozilla/bugs/too-much-video/

The above link shows a page with 150 videos, each with a unique URL. On my OS X machine, Firefox appears to load about 200 of the videos and then stops completely. So, opening more than one tab will attempt to load 300 videos. Closing the first tab (with 150 properly loaded videos) will *eventually* let the second tab load, but only after something like GC runs (it's hard to tell exactly what happens).

On Firefox 14 this is much more evident: it completely disallows new connections to the server, regardless of the file requested. In Nightly, the problem appears to eventually clear up.
There's a hard limit on the number of decoding threads that can be active at any one time; it's currently set to 25. See bug 691096 for why this limit was added.
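For illustration, here's a minimal standalone sketch of how a cap like that can be enforced. The class and method names are invented for this example (the real limit lives in Gecko's media code, not in a class like this): a decoder must be granted a slot before spinning up its decode thread, and must return the slot when the thread shuts down.

```cpp
#include <mutex>

// Hypothetical sketch of a decode-thread cap; names are invented for
// illustration and do not match the real Gecko implementation.
class DecodeThreadLimiter {
public:
  explicit DecodeThreadLimiter(int aMax) : mMax(aMax) {}

  // Returns true if a decode thread slot was granted. A caller that gets
  // false must stay in an idle/queued state and retry later.
  bool TryAcquire() {
    std::lock_guard<std::mutex> lock(mMutex);
    if (mActive >= mMax) {
      return false;  // at the cap: no new decode threads allowed
    }
    ++mActive;
    return true;
  }

  // Called when a decode thread shuts down, freeing its slot.
  void Release() {
    std::lock_guard<std::mutex> lock(mMutex);
    --mActive;
  }

  int Active() {
    std::lock_guard<std::mutex> lock(mMutex);
    return mActive;
  }

private:
  std::mutex mMutex;
  const int mMax;
  int mActive = 0;
};
```

With a limit of 25, the 26th acquisition fails until some earlier decoder releases its slot; a stuck element that never releases starves everything behind it, which is essentially the symptom this bug describes.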
I was chatting with Bobby about this bug, so I'll grab it. We discussed the decode thread limit; what seems to be going on is that elements that should release their decode thread once they've finished decoding and gone idle aren't doing so after completing their (pre)load. This only starts to happen after 100-150 elements have loaded--the exact number depends on how fast they load.
(In reply to Bobby Richter [:bobby] from comment #0)
> The above link will show a page with 150 videos with unique urls.

Unrelated to the rest of this bug, but the URIs aren't guaranteed to be unique because Date.now()'s resolution is 1ms, and it's possible to create multiple elements in less than a millisecond--for instance, in my (slow) debug build I'm seeing 20-30 elements end up with the same URI.
(In reply to Matthew Gregan [:kinetik] from comment #3)
> Unrelated to the rest of this bug, but the URIs aren't guaranteed to be
> unique because Date.now()'s resolution is 1ms, and it's possible to create
> multiple elements in less than a millisecond--for instance, in my (slow)
> debug build I'm seeing 20-30 elements end up with the same URI.

Good catch. I've fixed it up here: http://robothaus.org/mozilla/bugs/too-much-video/index2.html
As the testcase pages load, the media loads are being suspended by the element (from FirstFrameLoaded) and by the media cache (for throttling, as the cache runs out of space). Normally, a load suspended by both the element and the cache would need to be resumed the same way (i.e. via calls to Resume() from the element and cache), but if the cache resumes a load via CacheClientSeek (e.g. due to resuming at a different offset due to evicted blocks), it will recreate the channel even when mSuspendCount > 0, which results in the channel being suspended from its OnStartRequest if mSuspendCount is still > 0.

Once enough of these loads are resurrected by the cache, the per-server HTTP connection limit is reached and no further loads from that server will ever complete.

I think the right thing to do is to avoid recreating the channel in CacheClientSeek if mSuspendCount > 0, and set the MediaResource up so that a subsequent resume will recreate the channel as necessary.
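A toy standalone model of that proposed fix, assuming the suspend/seek interaction described above (the class, fields, and helper names here are simplified stand-ins for Gecko's ChannelMediaResource, not the actual patch): a cache-initiated seek that arrives while the element still holds a suspend records the wanted offset instead of recreating the channel, and the final Resume() reopens the channel at that offset.

```cpp
#include <cstdint>

// Toy model of the suspend/seek interaction; names and structure are
// simplified stand-ins for ChannelMediaResource, invented for illustration.
class ToyMediaResource {
public:
  void Suspend() { ++mSuspendCount; }

  void Resume() {
    if (mSuspendCount > 0 && --mSuspendCount == 0 && mSeekPending) {
      // Last suspender is gone: now it's safe to recreate the channel at
      // the offset the cache asked for while we were suspended.
      mSeekPending = false;
      RecreateChannelAt(mPendingOffset);
    }
  }

  // Called by the media cache when it wants the load restarted at aOffset
  // (e.g. because the blocks at the old offset were evicted).
  void CacheClientSeek(int64_t aOffset) {
    if (mSuspendCount > 0) {
      // Still suspended: don't resurrect the channel (and tie up one of the
      // per-server HTTP connections); defer the seek until Resume().
      mSeekPending = true;
      mPendingOffset = aOffset;
      return;
    }
    RecreateChannelAt(aOffset);
  }

  bool ChannelOpen() const { return mChannelOpen; }
  int64_t Offset() const { return mOffset; }

private:
  void RecreateChannelAt(int64_t aOffset) {
    mOffset = aOffset;
    mChannelOpen = true;  // stands in for opening a real HTTP channel
  }

  uint32_t mSuspendCount = 0;
  bool mSeekPending = false;
  bool mChannelOpen = false;
  int64_t mPendingOffset = 0;
  int64_t mOffset = 0;
};
```

The key property is that a suspended resource never holds an open channel, so a pile of idle preloaded elements can no longer exhaust the per-server connection limit.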
Created attachment 645656 [details] [diff] [review]
patch v0

This patch allows me to open 10 copies of the testcase (1000 elements) and have no pending HTTP connections after each tab's load event has fired. I could create more, but each new tab takes longer to load due to scalability issues in the media cache's block management code when dealing with thousands of elements. Mochitests pass locally, let's see how it goes on Try: https://tbpl.mozilla.org/?tree=Try&rev=c12fbcfeb529
Comment on attachment 645656 [details] [diff] [review]
patch v0

Review of attachment 645656 [details] [diff] [review]:
-----------------------------------------------------------------

Good catch!