Closed Bug 1056394 Opened 10 years ago Closed 10 years ago

test_play_twice.html current time at end failures with fmp4

Categories

(Core :: Audio/Video, defect)

Platform: x86 macOS
Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla34

People

(Reporter: rillian, Assigned: jya)

References

Details

Attachments

(1 file, 1 obsolete file)

After the duration fix in bug 1054023, we still have failures in the can_play_twice variation with media.fragmented-mp4.exposed enabled.

failed | small-shot.m4a current time at end: 0.588916
failed | small-shot-mp3.mp4 current time at end: 0.5656
failed | gizmo.mp4 current time at end: 11.162291
Blocks: 1043696
Anthony, any ideas on this one?
Works on Linux - possibly something Apple specific?
Seems to be related to seeking being more or less broken. If I load gizmo.mp4 locally and hit play again after playback has completed, the playback thumb jumps to the beginning, then back to the end where the duration starts incrementing.

I think you're right. The input sample timestamps look correct, but for audio at least my code is returning values that are too large. I'm not resetting something properly.
Assignee: nobody → giles
AppleATDecoder never looks at the input timestamp; it just stamps decoded packets based on a counter of how much decoded data it has seen. This works for AppleMP3Reader because that reader can reset the origin when its explicit Seek() method is called.

To fix this we need to get the actual PTS from the OS X decoder, or somehow plumb the input pts through to our callback.
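The landed patch takes roughly the second approach. A minimal sketch of the idea, with hypothetical names (the review only shows that the real patch adds an mCurrentAudioTimestamp member to AppleATDecoder): remember the pts of the compressed sample currently being decoded, and stamp every decoded packet the callback emits with that value. When the decoder buffers input, several outputs can end up sharing one timestamp, as the review comment notes.

```cpp
#include <cstdint>

// Hypothetical sketch, not the actual AppleATDecoder code: carry the
// demuxer's pts across an asynchronous decode whose output callback
// does not itself provide timestamps.
class AudioTimestamper {
public:
  // Called when a compressed sample is submitted for decoding.
  void SubmitSample(int64_t aPtsUsecs) { mCurrentAudioTimestamp = aPtsUsecs; }

  // Called from the decode callback for each decoded packet; every
  // output produced while this input is in flight gets its pts.
  int64_t StampDecodedPacket() const { return mCurrentAudioTimestamp; }

private:
  int64_t mCurrentAudioTimestamp = -1; // pts of the in-flight input sample
};
```

This trades per-frame accuracy for correctness after seeks: timestamps no longer run ahead of the stream, at the cost of duplicates when one input yields multiple outputs.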
Propagate original input timestamp
Attachment #8477358 - Flags: review?(giles)
Assignee: giles → jyavenard
Status: NEW → ASSIGNED
Comment on attachment 8477358 [details] [diff] [review]
OS X: Properly propagate timestamps

Review of attachment 8477358 [details] [diff] [review]:
-----------------------------------------------------------------

This means we return multiple AudioData objects with the same timestamp. Possibly a timestamp from the future, since AudioFileStream can buffer several samples before returning packetized data. Seems to work in practice though, and fixes seeking and the duration problem, so r=me.

::: content/media/fmp4/apple/AppleATDecoder.h
@@ +48,5 @@
>    RefPtr<MediaTaskQueue> mTaskQueue;
>    MediaDataDecoderCallback* mCallback;
>    AudioConverterRef mConverter;
>    AudioFileStreamID mStream;
> +  int64_t mCurrentAudioTimestamp;

s/int64_t/Microseconds/
Attachment #8477358 - Flags: review?(giles) → review+
Apply review comment, carrying forward r=me.
Attachment #8477358 - Attachment is obsolete: true
Attachment #8477714 - Flags: review+
Comment on attachment 8477714 [details] [diff] [review]
propagate timestamps

int64_t is what is passed in from the MP4Frame, as well as what AudioData uses later on.

In regards to the asynchronous call, it's already a problem with the use of mSamplePosition, so if it's wrong now, it was wrong before. 

Having said that, I tested it for both mp3 and AAC and it appears that the callback is called immediately. It is not queued. 

I will ask the question to Apple's core audio team and confirm with them. 

I also noticed another potential issue in the call to the decoding routine. The flags 0 is always passed. When there's a discontinuity, it should be set to another value.
(In reply to Jean-Yves Avenard [:jya] from comment #9)

> In regards to the asynchronous call, it's already a problem with the use of
> mSamplePosition, so if it's wrong now, it was wrong before.

Right. It used to increment every AudioData, but the offset was wrong. Now the offset is mostly right, but it doesn't increment every AudioData.

> Having said that, I tested it for both mp3 and AAC and it appears that the
> callback is called immediately. It is not queued. 

I don't remember what test file that was, unfortunately.

> I will ask the question to Apple's core audio team and confirm with them.

Thanks!

> I also noticed another potential issue in the call to the decoding routine.
> The flags 0 is always passed. When there's a discontinuity, it should be set
> to another value.

Bug 1057612.
What I did in the WMF backend is pass the pts through to WMF, but ignore the pts it gives back, since sometimes the pts plus the decoded duration don't line up. When we Flush (before a seek), and for the first sample in the stream, I set an isDiscontinuity flag. When a sample comes out of WMF and the isDiscontinuity flag is true, we record the pts that WMF assigned to the sample. I then keep track of the number of audio frames that have come out of WMF, and calculate the timestamp as the timestamp captured at the last discontinuity plus the duration of the audio frames produced since then.

You could probably do something like that here, except that you can't rely on CoreAudio to plumb the pts through for you; you may have to assume the pts the demuxer assigns to the first sample after a discontinuity is the correct time. Or maybe you need to do something trickier, since IIRC AAC requires half of the previous sample to decode the current sample.
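The bookkeeping described above could be sketched like this (illustrative names only, not the actual WMF backend code): trust the decoder's pts only at the first output after a flush or discontinuity, then derive every later timestamp from the count of decoded frames.

```cpp
#include <cstdint>

// Hypothetical sketch of discontinuity-anchored timestamping.
class DiscontinuityTimestamper {
public:
  explicit DiscontinuityTimestamper(int64_t aRate) : mRate(aRate) {}

  // Call on Flush() (before a seek); the first stream sample is
  // treated as a discontinuity by default.
  void MarkDiscontinuity() { mIsDiscontinuity = true; }

  // Called for each decoded output. aDecoderPts is the pts the decoder
  // assigned; aFrames is the number of audio frames in this output.
  int64_t Timestamp(int64_t aDecoderPts, int64_t aFrames) {
    if (mIsDiscontinuity) {
      mAnchorPts = aDecoderPts;   // trust the decoder only here
      mFramesSinceAnchor = 0;
      mIsDiscontinuity = false;
    }
    int64_t pts = mAnchorPts + mFramesSinceAnchor * 1000000 / mRate;
    mFramesSinceAnchor += aFrames;
    return pts;
  }

private:
  const int64_t mRate;              // audio frames per second
  bool mIsDiscontinuity = true;
  int64_t mAnchorPts = 0;           // usecs, captured at last discontinuity
  int64_t mFramesSinceAnchor = 0;
};
```

Because only the frame counter advances between discontinuities, decoder jitter in the reported pts can't accumulate into the drift seen in this bug.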
https://hg.mozilla.org/mozilla-central/rev/78fc4d778b8f
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla34