Closed Bug 1347174 Opened 7 years ago Closed 7 years ago

Media is loading normally and then Firefox launches repeated GET calls

Categories

(Core :: Audio/Video: Playback, defect, P1)

51 Branch
defect

Tracking


RESOLVED FIXED
mozilla55
Tracking Status
firefox54 --- wontfix
firefox55 --- fixed

People

(Reporter: abid.mounir, Assigned: jwwang, NeedInfo)

References

(Blocks 1 open bug)

Details

Attachments

(2 files)

Attached file screenshot and log
User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36

Steps to reproduce:

- Go to a test webpage with an HTML5 video tag.
- Start playing a large file (2.2 GB in my example) and let it play.
- The file must be served with range requests (206 Partial Content).


Actual results:

Suddenly, Firefox stops the current GET request (at ~480 of 500 MB) and starts making repeated GET calls. On the server side, all Apache processes are busy and the whole platform becomes unavailable.


Expected results:

Firefox should not panic like this. It is a real production problem: a single viewing makes all the other sites on the server inaccessible.
Component: Untriaged → Networking
Product: Firefox → Core
Is it possible to provide a testpage to reproduce the issue? (on a testing server eg)
Flags: needinfo?(abid.mounir)
Hello, 

Sorry for the delay; you can check this URL http://92.90.207.41/index.html to reproduce the issue.
The important thing is not to seek in the video; let it play until the GET request has downloaded almost 490 MB.


Thanks.
Flags: needinfo?(abid.mounir)
Dragana, can you take a look at this? I've got another email that talks about multiple GETs for images; who knows what layer it starts at.
Assignee: nobody → dd.mozilla
Whiteboard: [necko-active]
The transactions are closed with 0x805d0021 - NS_ERROR_PARSED_DATA_CACHED. The code closing them is outside of Necko. I see this error code in the DOM code.

I noticed for example:

we ask for range 572358656-
more than 790000 bytes are transferred and read from the network; they are partially stored in our streams.
then we ask for 572686336-, which is an advance of only about 320000 bytes. The end listener probably never got the remaining ~450000 bytes, but they were already transferred. Can we make use of them somehow, so that we do not need to transfer them again? I really do not know why a transaction is closed with NS_ERROR_PARSED_DATA_CACHED; maybe we cannot make use of the rest of the data, but it is a waste.
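The waste described above can be quantified with simple byte arithmetic (a sketch using the numbers from this comment; variable names are illustrative):

```python
# Byte arithmetic for the overlap described above (numbers from the comment).
first_range_start = 572358656   # first request: bytes 572358656-
transferred = 790000            # roughly how much was read from the network
second_range_start = 572686336  # next request starts here

advance = second_range_start - first_range_start   # bytes actually consumed
wasted = transferred - advance                     # bytes transferred twice
print(advance, wasted)  # prints: 327680 462320
```

These match the rough figures in the comment: an advance of about 320000 bytes, with roughly 450000 already-transferred bytes discarded.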
Assignee: dd.mozilla → nobody
Component: Networking → DOM
Sounds more like an audio/video issue...
Component: DOM → Audio/Video
Component: Audio/Video → Audio/Video: Playback
Whiteboard: [necko-active]
(In reply to ABID Mounir from comment #2)
> Hello, 
> 
> Sorry for the delay; you can check this URL http://92.90.207.41/index.html
> to reproduce the issue.
> The important thing is not to seek in the video; let it play until the GET
> request has downloaded almost 490 MB.
> 
> 
> Thanks.

I can't open the URL http://92.90.207.41/index.html. Can you check it?
Flags: needinfo?(abid.mounir)
Hello,

the server was down. It's ok now.

http://92.90.207.41/index.html
Flags: needinfo?(abid.mounir)
Hi ABID,
Can you open about:config and set "media.cache_resume_threshold" to 10 and "media.cache_readahead_limit" to 30 to see if that fixes the issue for you? Thanks!
Assignee: nobody → jwwang
Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true
Flags: needinfo?(abid.mounir)
See Also: → 1085186
Hello,

the issue seems to be fixed with these parameters.
I even tried with a longer video (3 h). The browser played the whole file with normal behaviour.

So it's ok for me.
Flags: needinfo?(abid.mounir)
This is an issue caused by effectively unlimited pre-download ("media.cache_readahead_limit" is set to 999999 on desktop).

Since the pre-download size is (much) larger than the cache size, the cache will eventually need to discard the blocks least likely to be used in the near future, which happen to be the most recently downloaded blocks. It will then try to download the discarded blocks again to fulfil the pre-download requirement (which is far larger than the cache size). This becomes a vicious cycle where we discard and re-download the same data over and over, making repeated GET calls that overwhelm the HTTP server.
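The cycle described above can be illustrated with a minimal, self-contained simulation (all block counts and limits are hypothetical; this is not Gecko's MediaCache code): once the readahead target exceeds the cache size, every pass over the readahead window re-fetches blocks that were just evicted.

```python
# Minimal sketch of the cache-thrashing cycle described above.
# All numbers are hypothetical; Gecko's MediaCache is far more involved.

CACHE_BLOCKS = 4        # the cache can hold 4 blocks
READAHEAD_BLOCKS = 8    # but we try to keep 8 blocks of readahead

def thrash(rounds=3):
    cache = set()       # block indices currently cached
    playback_pos = 0    # block the decoder is reading from
    gets = []           # blocks requested from the server, in order

    for _ in range(rounds):
        # Try to fill the readahead window [playback_pos, playback_pos + limit).
        for blk in range(playback_pos, playback_pos + READAHEAD_BLOCKS):
            if blk in cache:
                continue
            gets.append(blk)          # one more GET for this block
            cache.add(blk)
            if len(cache) > CACHE_BLOCKS:
                # Cache full: evict a block. With readahead > cache size the
                # victim is a recently downloaded block we still "need", so
                # it will be requested again on the next pass.
                cache.remove(max(cache))
    return gets

requests = thrash()
print(requests)
```

The first pass fetches each block once; every later pass re-fetches the same tail blocks, mirroring the repeated GET calls seen on the server.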

I will change the pref values as described in comment 8 to mitigate this issue.
Priority: -- → P1
Per bug 1085186 comment 7, a better solution is to limit the pre-download only when the download speed is faster than the rate required to play back in real time. However, if the consumption rate is larger than the download rate, playback will have to stop for buffering sooner or later. So I think it is OK to default "media.cache_readahead_limit" to a small but reasonable value instead of a very large one.

We can open a new bug to optimize the values. The basic idea is to increase the value on slow network connections, or when the MDSM enters buffering more than a certain number of times. Thoughts?
Attachment #8863965 - Flags: review?(cpearce)
I'd feel better if we had a higher cache_resume_threshold value... 10 seconds seems a little close to the line for me. How about media.cache_resume_threshold=20 and media.cache_readahead_limit=40?

I seem to only get a behaviour change with the download of MP4 video with these prefs changed; with WebM the entire resource is downloaded as fast as possible.

Can we simply use (downloadRate < fileSize/fileDuration) as the test for whether we should not throttle? Or the media element readyState?
(In reply to Chris Pearce (:cpearce) from comment #13)
> I seem to only get a behaviour change with the download of MP4 video with
> these prefs changed; with WebM the entire resource is downloaded as fast as
> possible.

Do you have a WebM link that demonstrates the issue?
How about making media.cache_resume_threshold=30 and media.cache_readahead_limit=60 in this bug, and implementing no throttling when (downloadRate < fileSize/fileDuration) in a new bug?
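The no-throttle test proposed here can be sketched as follows (a hypothetical helper, not actual MediaCache code): fileSize/fileDuration approximates the average bitrate needed for real-time playback, so throttling only makes sense when the network is at least that fast.

```python
def should_throttle(download_rate, file_size, file_duration):
    """Sketch of the test proposed above (hypothetical helper).

    download_rate: measured bytes/second from the network.
    file_size:     total resource size in bytes.
    file_duration: media duration in seconds.

    fileSize / fileDuration approximates the average bytes/second needed
    for real-time playback. If the network cannot keep up with that,
    throttling would only guarantee a buffering stall later, so we let
    the download run unthrottled.
    """
    required_rate = file_size / file_duration
    return download_rate >= required_rate

# A 2.2 GB, 2-hour video needs ~300 KB/s on average: throttle on a
# 1 MB/s link, but not on a 100 KB/s link.
```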
Yes, that's fine. Do you want to fix the problem with WebMs not honouring the pref change in this bug? Otherwise, we'll have a behaviour difference between WebM and MP4.
Checking...
(In reply to Chris Pearce (:cpearce) from comment #15)
> https://people-mozilla.org/~cpearce/video/zapatou.webm

Making media.cache_resume_threshold=5 media.cache_readahead_limit=10 will stop downloading when half of the duration is buffered. I think there is something wrong in the playbackRate/downloadRate calculation in MediaCache. I will open a new bug for that.
Hi Chris,
The pref values are changed as suggested by comment 16. Can you review the patch again? Thanks!
Flags: needinfo?(cpearce)
(In reply to JW Wang [:jwwang] [:jw_wang] from comment #19)
> (In reply to Chris Pearce (:cpearce) from comment #15)
> > https://people-mozilla.org/~cpearce/video/zapatou.webm
> 
> Making media.cache_resume_threshold=5 media.cache_readahead_limit=10 will
> stop downloading when half of the duration is buffered. I think there is
> something wrong in the playbackRate/downloadRate calculation in MediaCache.
> I will open a new bug for that.

OK. I see the problem.

[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593679 count=1
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593680 count=1
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593681 count=1
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593682 count=1
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593683 count=1
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593684 count=1
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 3593685 count=295
[MediaPDecoder #2]: I/MediaCache Stream 0x7fa0d71ed8b0 Read at 7285888 count=1048576

The read position jumps from 3593685 to 7285888 dramatically.

This is because WebMBufferedState needs to read the end of the stream from time to time to estimate the buffered ranges. Reading bytes at the end causes MediaCache to think the readahead bytes are below the readahead limit and that it needs to download more data.

Call stack:
#0  mozilla::MediaCacheStream::Read
#1  0x00007fa10364c214 in mozilla::MediaCacheStream::ReadAt
#2  0x00007fa1036a9901 in mozilla::ChannelMediaResource::MediaReadAt
#3  0x00007fa1038119e7 in mozilla::WebMBufferedState::UpdateIndex
#4  0x00007fa103811ca1 in mozilla::WebMDemuxer::EnsureUpToDateIndex
#5  0x00007fa103813606 in mozilla::WebMDemuxer::GetBuffered
#6  0x00007fa103813e23 in mozilla::WebMTrackDemuxer::GetBuffered
#7  0x00007fa1036d58c4 in mozilla::MediaFormatReader::DemuxerProxy::Wrapper::UpdateBuffered
#8  0x00007fa1036bc659 in operator() (__closure=<optimized out>)
#9  mozilla::detail::ProxyFunctionRunnable<mozilla::MediaFormatReader::DemuxerProxy::NotifyDataArrived()::<lambda()>, mozilla::MozPromise<bool, mozilla::MediaResult, true> >::Run(void) (

Hi Jya,
Do you have any idea to fix the issue per https://bugzilla.mozilla.org/show_bug.cgi?id=1347174&mark=13-15#c13?
Flags: needinfo?(jyavenard)
See Also: → 1361964
Bug 1361964 opened to track the issue in comment 22.
Comment on attachment 8863965 [details]
Bug 1347174 - per comment 10, limit the readahead size to prevent cache thrashing.

https://reviewboard.mozilla.org/r/135696/#review139124

r+, but we should follow up on the issue with WebM, otherwise we'll not get the full benefit of this change.
Attachment #8863965 - Flags: review?(cpearce) → review+
Thanks! Bug 1361964 opened to follow up on the issue with WebM.
Flags: needinfo?(cpearce)
Pushed by jwwang@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/c612b7357d7b
per comment 10, limit the readahead size to prevent cache thrashing. r=cpearce
(In reply to JW Wang [:jwwang] [:jw_wang] from comment #22)

> Hi Jya,
> Do you have any idea to fix the issue per
> https://bugzilla.mozilla.org/show_bug.cgi?id=1347174&mark=13-15#c13?

This is kinetik code, he would know more than me first hand.
Flags: needinfo?(jyavenard) → needinfo?(kinetik)
(In reply to JW Wang [:jwwang] [:jw_wang] from comment #22)
> This is because WebMBufferedState needs to read the end of the stream from
> time to time to estimate the buffered ranges. Reading bytes at the end
> causes MediaCache to think the readahead bytes are below the readahead
> limit and that it needs to download more data.

It probably needs a way to read from the MediaCache without triggering that behaviour, then.

The UpdateIndex() stuff has other issues too (e.g. bug 1234727) so I'm not really a fan of it.  WebMBufferedState was originally designed to be fed data via a side-channel from the MediaCache as it was downloading so it could inspect it without affecting MediaCache state.
Flags: needinfo?(kinetik)
(In reply to Matthew Gregan [:kinetik] from comment #28)
> WebMBufferedState was originally designed to be fed
> data via a side-channel from the MediaCache as it was downloading so it
> could inspect it without affecting MediaCache state.

Bug 1361964 calls MediaCacheStream::ReadFromCache() instead, which does not change the state of MediaCache the way MediaCacheStream::Read() does.
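The difference between the two read paths can be sketched with a toy stream class (the class and its bodies are hypothetical; only the Read/ReadFromCache names come from the comments above): one call advances the stream position that drives readahead accounting, the other only peeks at cached bytes.

```python
class ToyCacheStream:
    """Toy model of the two read paths discussed above (not Gecko code)."""

    def __init__(self, cached_bytes):
        self._data = cached_bytes   # bytes already in the cache
        self._stream_pos = 0        # position used for readahead accounting

    def read(self, count):
        # Like MediaCacheStream::Read: advances the stream position, so a
        # read near the end of the resource makes the readahead window
        # look empty and triggers more downloading.
        out = self._data[self._stream_pos:self._stream_pos + count]
        self._stream_pos += len(out)
        return out

    def read_from_cache(self, offset, count):
        # Like MediaCacheStream::ReadFromCache: serves cached bytes at an
        # explicit offset and leaves the stream position alone, so index
        # probes at the end of the file do not perturb readahead state.
        return self._data[offset:offset + count]

s = ToyCacheStream(b"0123456789")
s.read(4)                       # advances the position to 4
tail = s.read_from_cache(8, 2)  # peeks at the end without moving it
```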
Depends on: 1361964
https://hg.mozilla.org/mozilla-central/rev/c612b7357d7b
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla55
Comment on attachment 8863965 [details]
Bug 1347174 - per comment 10, limit the readahead size to prevent cache thrashing.

Approval Request Comment
[Feature/Bug causing the regression]:unknown
[User impact if declined]: More data will be downloaded than necessary, wasting the user's bandwidth during media playback. The server might also be overwhelmed by repeated GET calls from the client.
[Is this code covered by automated tests?]:No.
[Has the fix been verified in Nightly?]:Yes.
[Needs manual test from QE? If yes, steps to reproduce]: Yes. See comment 0 and comment 2.
[List of other uplifts needed for the feature/fix]: None.
[Is the change risky?]: Low.
[Why is the change risky/not risky?]: Just some small changes to the pref values.
[String changes made/needed]: None.
Attachment #8863965 - Flags: approval-mozilla-beta?
JW,
Does bug 1361964 need to be uplifted as well? Do we need to add a new test case for this bug, or is the test case for bug 1361964 enough?
Flags: needinfo?(jwwang)
Depends on: 1362852
Uplift for bug 1361964 has already been requested. We just need one test case that covers all the formats we support, like MP4, WebM, etc.
Flags: needinfo?(jwwang)
No longer depends on: 1362852
Comment on attachment 8863965 [details]
Bug 1347174 - per comment 10, limit the readahead size to prevent cache thrashing.

Looks like this is not a new regression in 54, and I'm not sure whether changing the readahead size will cause other issues. I would like it to bake more on Nightly, so this won't be fixed in 54. Beta54-.
Attachment #8863965 - Flags: approval-mozilla-beta? → approval-mozilla-beta-
I can't be sure, but this or some other recent media-related change has broken video buffering on mp4upload and thevideos/
(In reply to Danial Horton from comment #35)
> I can't be sure, but this or some other recent media-related change has
> broken video buffering on mp4upload and thevideos/

I'd expect so. With this change we'll stop buffering after the amount downloaded gets 60 seconds ahead of the playback position, and resume again when it's 30 seconds ahead.
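The suspend/resume behaviour described here is a simple hysteresis on the buffered-ahead time. A sketch with the two pref values as plain parameters (hypothetical helper, not the actual MediaCache implementation):

```python
def next_download_state(currently_downloading, buffered_ahead_seconds,
                        resume_threshold=30, readahead_limit=60):
    """Hysteresis sketch of the suspend/resume behaviour described above.

    buffered_ahead_seconds: how far the downloaded data is ahead of the
    playback position. The defaults mirror media.cache_resume_threshold
    (30) and media.cache_readahead_limit (60); this is a hypothetical
    helper, not Gecko code.
    """
    if currently_downloading and buffered_ahead_seconds >= readahead_limit:
        return False  # far enough ahead: suspend the download
    if not currently_downloading and buffered_ahead_seconds <= resume_threshold:
        return True   # the buffer has drained: resume the download
    return currently_downloading  # otherwise keep the current state
```

The gap between the two thresholds prevents rapid suspend/resume flapping around a single cut-off value.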
it needs a pref to switch it off.

there are still many video sites which serve content at download rates too low to watch at decent quality in realtime.
(In reply to Danial Horton from comment #37)
> it needs a pref to switch it off.
> 
> there are still many video sites which serve content at download rates too
> low to watch at decent quality in realtime.

You can go to about:config and change both "media.cache_resume_threshold" and "media.cache_readahead_limit" to 999999.

Btw, I don't quite understand your issue. Can you show me a site that demonstrates it?
See Also: → 1364001
(In reply to Danial Horton from comment #37)
> it needs a pref to switch it off.
> 
> there are still many video sites which serve content at download rates too
> low to watch at decent quality in realtime.

Bug 1364001 has been opened, where we try to be smarter and throttle the download only when it is fast.
(In reply to ABID Mounir from comment #7)
> Hello,
> 
> the server was down. It's ok now.
> 
> http://92.90.207.41/index.html

Hi Abid,
The site is down again. Can you bring it up again? We want to do more tests.
Flags: needinfo?(abid.mounir)
Hello,

The test page is now up.
http://92.90.207.41/index.html
Flags: needinfo?(abid.mounir)
(In reply to JW Wang [:jwwang] [:jw_wang] from comment #38)
> (In reply to Danial Horton from comment #37)
> > it needs a pref to switch it off.
> > 
> > there are still many video sites which serve content at download rates too
> > low to watch at decent quality in realtime.
> 
> You can go to about:config and change both "media.cache_resume_threshold"
> and "media.cache_readahead_limit" to 999999.
> 
> Btw, I don't quite understand your issue. Can you show me the site that
> demonstrate the issue?

can we set it to -1 = unlimited?

I ask because in our country speeds fluctuate a lot, so we basically just open tabs in the background, switch to them after some time, and watch smoothly without any buffering, since our connection is very unreliable.
Flags: needinfo?(jwwang)
(In reply to jigsawmode from comment #42)
> can we set it to -1 = unlimited?
No. Please set it to 999999, which is effectively unlimited.
Flags: needinfo?(jwwang)
Hi Abid, thanks a lot for helping us out with this bug. Would you also be available to confirm whether this issue is fixed on the latest Firefox 55 build [1]? We'd really appreciate it.


[1] https://archive.mozilla.org/pub/firefox/candidates/55.0b1-candidates/build5/
Flags: needinfo?(abid.mounir)
Depends on: 1373618
No longer depends on: 1373618
Hello,

Seems to be OK for me with 55.0b1.
The browser plays large files with normal behaviour.

Regards
(In reply to ABID Mounir from comment #45)
> Hello,
> 
> Seems to be OK for me with 55.0b1.
> The browser plays large files with normal behaviour.
> 
> Regards
Thanks for your verification.
Depends on: 1375772
Blocks: 1466569