Video app memory growth during updating thumbnails of different videos

VERIFIED FIXED

Status

VERIFIED FIXED
Product: Firefox OS
Component: Gaia::Video
Priority: P1
Severity: blocker
Reported: 3 years ago
Last modified: 3 years ago

People

(Reporter: tkundu, Assigned: djf)

Tracking

Keywords: footprint, perf
Version: unspecified
Hardware: ARM
OS: Gonk (Firefox OS)

Firefox Tracking Flags

(blocking-b2g:2.0+, b2g-v2.0 verified, b2g-v2.1 verified, b2g-v2.2 verified)

Details

(Whiteboard: [caf priority: p2][CR 690932][MemShrink])

Attachments

(3 attachments, 2 obsolete attachments)

STR:

1) Load 20 HD videos onto the device.
2) Reboot the device and launch the Video app.
3) Run the command below:
adb shell "b2g-info && cat /sys/class/kgsl/kgsl/page_alloc && cat /proc/<video app pid>/status | grep VmSwap && cat /d/ion/iommu | grep 'total ' "
4) Play any video, pause it, and return to the video list in the Video app. The Video app will update the video list with a new thumbnail for the point where playback stopped.
5) Run the command from step 3.
6) You will see that the Video app's total USS (USS in b2g-info plus the VmSwap value from |cat /proc/pid/status|), kgsl memory usage, and ION usage all increase for each new thumbnail.
7) Repeat steps 3-6 for each of the 20 HD video files on the sdcard.
8) You will soon see that the Video app is using at least 10MB of additional memory.

Device used: Flame 256MB JB build. (You need to use mem=271m for the Flame device.)

It is 100% reproducible on the Flame 256MB device.

I think that video app is not clearing thumbnail memory of previous videos while storing new thumbnails after playing each video file.

Video app memory growth causes it to be killed by the LMK in stability testing, and this is one of the reasons it is killed even though it is the ONLY foreground app on the device.


FFOS v2.0 version:

https://www.codeaurora.org/cgit/quic/lf/b2g/mozilla/gaia/commit/?h=mozilla/v2.0&id=4627014cc5c5eeec894183866d4c57291302f8b8
https://www.codeaurora.org/cgit/quic/lf/b2g/mozilla/gecko/commit/?h=mozilla/v2.0&id=08a5d8c66bcacb07df1e9125ff24f9d0924ebe68


Please note that if I use |get_about_memory.py|, the video app's memory usage sometimes comes down, but the app should release previous thumbnail caches even when |get_about_memory.py| is not used, since the growth affects device performance and causes the video app to be killed by the LMK (in stability testing) even though it is the only foreground app on the device.


I am not attaching any memory logs because the issue is always reproducible on the Flame, and I provided the commands to observe the growth in step 3.
[Blocking Requested - why for this release]:
blocking-b2g: --- → 2.0?
Whiteboard: [CR 690932]
Flags: needinfo?(khuey)

Updated (3 years ago)
Whiteboard: [CR 690932] → [caf priority: p2][CR 690932]
On Firefox OS, at most one OMXCodec can be instantiated at a time. Therefore, the kgsl and ION memory should be related to gfx layers and canvas elements.
Flags: needinfo?(khuey)

Updated (3 years ago)
Assignee: nobody → sotaro.ikeda.g

Updated (3 years ago)
Keywords: footprint, perf
Whiteboard: [caf priority: p2][CR 690932] → [caf priority: p2][CR 690932][MemShrink]

Comment 3 (3 years ago)
Blocking Reason: As stated in the original description, "Video app memory growth causes it to be killed by the LMK in stability testing, and this is one of the reasons it is killed even though it is the ONLY foreground app on the device."
blocking-b2g: 2.0? → 2.0+
Please note that I used 20 video files; each has 640x480 resolution, and all of them are in MP4 format.

This issue is very easily reproducible on FLAME device (256MB). I verified it myself as I said in Comment 0.
I tried to reproduce the crash with 854x480-resolution videos on a v2.0 Flame JB (271MB), but I have not seen the Video app crash so far.

I used a video from Bug 889238.
Tapas, can you reproduce the problem using the video from Bug 889238?
Flags: needinfo?(tkundu)
(In reply to Sotaro Ikeda [:sotaro] from comment #6)
> Tapas, can you reproduce the problem using the video from Bug 889238?

Can you see the crash using that video?
By the way, the video thumbnail is rendered with some shear. That seems like a separate bug.

Updated (3 years ago)
See Also: → bug 889238
I am going to try again with HD videos.
(In reply to Sotaro Ikeda [:sotaro] from comment #9)
> I am going to try again with HD videos.

I saw the crash using an HD video. When the SD card holds a lot of HD videos, the Video app becomes very unresponsive. It seems to be a problem related to Bug 1013756.

Updated (3 years ago)
See Also: → bug 1013756

Updated (3 years ago)
Flags: needinfo?(tkundu)
> 
> I think that video app is not clearing thumbnail memory of previous videos
> while storing new thumbnails after playing each video file.

The Video app creates and uses a canvas only within captureFrame(), and captureFrame() is called only when an OMXCodec is loaded.
https://github.com/mozilla-b2g/gaia/blob/v2.0/apps/video/js/metadata.js#L300
The canvas is destroyed during GC, which seems to defer the release of the canvas's resources.
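A minimal sketch of the pattern described above (the stub canvas and tracker are illustrative stand-ins so the snippet runs outside the browser; the real code is in the metadata.js link): the canvas goes out of scope when captureFrame() returns, but its backing buffer is only released when GC finalizes the native context, so buffers accumulate between collections.

```javascript
// Sketch of the leaky pattern. A stub stands in for a real DOM canvas so this
// runs outside the browser; it only models "buffer stays alive until the
// native finalizer runs", which in Gecko happens during GC, not when the
// JS canvas object goes out of scope.
function makeStubCanvas(tracker) {
  tracker.live += 1; // pretend a GPU/ION buffer was allocated
  return {
    width: 320,
    height: 240,
    getContext() { return { drawImage() {} }; },
    // In the real engine this release happens in the C++ destructor.
    finalize() { tracker.live -= 1; }
  };
}

const tracker = { live: 0 };

function captureFrame(video) {
  const canvas = makeStubCanvas(tracker);
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas; // nothing explicitly frees the buffer
}

// Each thumbnail capture leaves one buffer live until a future GC.
for (let i = 0; i < 8; i++) captureFrame({});
console.log(tracker.live); // prints 8
```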
(In reply to Sotaro Ikeda [:sotaro] from comment #12)
> canvas is destroyed during GC. It seems to defer canvas's resources release.

It seems you have found the root cause of the memory growth. Do you still need any help from me? You don't need to use HD video. Use any video of (640x380) resolution and don't try to crash video app. Please run the STR from Comment 0 and observe the memory growth by executing the command from Comment 0.

Please make NI on me if you still need any help from me :) Thanks a lot for help
Flags: needinfo?(sotaro.ikeda.g)
(In reply to Tapas[:tkundu on #b2g/gaia/memshrink/gfx] (always NI me) from comment #13)
> Use any video of (640x380)
> resolution and don't try to crash video app.

It should be 640x480. Sorry for the mistake.
(In reply to Tapas[:tkundu on #b2g/gaia/memshrink/gfx] (always NI me) from comment #13)
> 
> Please make NI on me if you still need any help from me :) Thanks a lot for
> help

It seems I have found it. Help is not needed now. Thanks!
Flags: needinfo?(sotaro.ikeda.g)
The canvas's native-side implementation is CanvasRenderingContext2D. When CanvasRenderingContext2D allocates a rendering target, the target stays alive until CanvasRenderingContext2D's destructor runs. The only other cases that release the rendering target are changing the canvas dimensions or changing the moz-opaque attribute.
Created attachment 8487363 [details] [diff] [review]
temporary log patch

The patch I used to check the number of canvas instances.
When a canvas is rendered by SkiaGL, a SurfaceStream is used for rendering into a TextureClient, and the SurfaceStream allocates ION memory (via the TextureClient).
Using attachment 8487363 [details] [diff] [review], I confirmed that the number of canvas and SurfaceStream instances rose to 8.
The Gaia side does not have an API to request canvas resource release. Changing the canvas dimensions or the moz-opaque attribute could be a workaround.
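The dimension-change workaround amounts to something like the following sketch (releaseCanvas is a hypothetical helper name, and the stub object stands in for a real DOM canvas so the snippet runs anywhere): shrinking the canvas to 0x0 makes the engine drop the old rendering target immediately instead of waiting for GC.

```javascript
// Hypothetical helper illustrating the workaround: changing a canvas's
// dimensions forces Gecko to discard its current rendering target, so
// setting them to 0 releases the backing memory right away.
function releaseCanvas(canvas) {
  canvas.width = 0;
  canvas.height = 0;
}

// Stub object so the sketch runs outside the browser; with a real canvas
// the dimension change frees the GPU/ION buffer as a side effect.
const canvas = { width: 320, height: 240 };
releaseCanvas(canvas);
console.log(canvas.width, canvas.height); // prints 0 0
```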
Created attachment 8487371 [details] [diff] [review]
patch - Change canvas size when end of use
Created attachment 8487375 [details] [diff] [review]
patch - Change canvas size when end of use

Remove unnecessary change.

Updated (3 years ago)
Attachment #8487371 - Attachment is obsolete: true
Attachment 8487375 [details] [diff] [review] decreases the maximum number of SurfaceStreams from 8 to 2.
Hema, can you assign a Gaia engineer? Just updating attachment 8487375 [details] [diff] [review] properly seems to be enough, though I am not sure about the Gaia side.
Flags: needinfo?(hkoka)
Tapas, can you check whether attachment 8487375 [details] [diff] [review] fixes the problem for you?
Flags: needinfo?(tkundu)

Comment 26 (3 years ago)
David/Russ, 

Can one of you please take this up today and try Sotaro's suggested fix?

Thanks
Hema
Flags: needinfo?(rnicoletti)
Flags: needinfo?(hkoka)
Flags: needinfo?(dflanagan)
(Assignee) Comment 27 (3 years ago)
Thanks for investigating this, Sotaro. In Gallery there are a number of places where we set the canvas width to 0 after using the canvas, in order to release its memory. I guess we forgot to do that in the Video app. I'll take this and see if I can finish it up.
Assignee: sotaro.ikeda.g → dflanagan
Flags: needinfo?(dflanagan)
(Assignee) Comment 28 (3 years ago)
Created attachment 8487546 [details] [review]
link to patch on github

Russ,

This is my variant of the basic patch Sotaro proposed. Note that I changed it so that the canvas size is not cleared until after toBlob() is done, which seems more robust to me.

I've basically copied what we do in Gallery and included a couple of other changes to the way the canvas is initialized. These are things I learned for Gallery after this captureFrame() method had been written. For example, it is better to set the canvas size before creating the context, because doing so prevents memory reallocations. And the willReadFrequently flag is a hint to keep the canvas off the GPU.

If you are able to reproduce the bug, please check the code and verify that it fixes the bug.
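A rough sketch of the flow described above (identifiers are illustrative, and the canvas is passed in so the snippet can run against a stub; the actual change is in the github patch): size the canvas before getContext(), pass willReadFrequently, and only zero the size once toBlob() has delivered the blob.

```javascript
// Illustrative sketch of the patched captureFrame flow; in Gaia the canvas
// would come from document.createElement('canvas').
function captureFrame(canvas, video, width, height, callback) {
  // Set the size before creating the context so the backing buffer is
  // allocated once, and hint that pixels will be read back on the CPU.
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d', { willReadFrequently: true });
  ctx.drawImage(video, 0, 0, width, height);
  canvas.toBlob(function(blob) {
    // Release the canvas memory only after toBlob() is done with it.
    canvas.width = 0;
    canvas.height = 0;
    callback(blob);
  }, 'image/jpeg');
}

// Minimal stub so the sketch runs outside the browser.
const stubCanvas = {
  width: 0,
  height: 0,
  getContext() { return { drawImage() {} }; },
  toBlob(cb) { cb({ size: 1234 }); } // synchronous stand-in for the real API
};
captureFrame(stubCanvas, {}, 320, 240, function (blob) {
  console.log(stubCanvas.width, stubCanvas.height); // prints 0 0
});
```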
Attachment #8487375 - Attachment is obsolete: true
Attachment #8487546 - Flags: review?(rnicoletti)
Comment on attachment 8487546 [details] [review]
link to patch on github

Without applying any patches, the total memory after following the STR in the original description was 10072064 bytes.

After applying the patch from djf, the total memory after following the same steps was 6307840 bytes.

Therefore, I'm concluding that the patch properly releases the canvas memory after viewing a video.
Attachment #8487546 - Flags: review?(rnicoletti) → review+
Flags: needinfo?(rnicoletti)
(Assignee) Comment 30 (3 years ago)
Landed on master: https://github.com/mozilla-b2g/gaia/commit/a5949ec98cf3ed7bb46d8c7b13678ff3c767c71c
Status: NEW → RESOLVED
Last Resolved: 3 years ago
status-b2g-v2.0: --- → affected
status-b2g-v2.1: --- → affected
status-b2g-v2.2: --- → fixed
Resolution: --- → FIXED
(Assignee) Comment 31 (3 years ago)
Uplifted to v2.1: https://github.com/mozilla-b2g/gaia/commit/7f8fcee7c4e57370b176125ef734d76b5ddcd234
status-b2g-v2.1: affected → fixed
(Assignee) Comment 32 (3 years ago)
Uplifted to v2.0: https://github.com/mozilla-b2g/gaia/commit/cd198d77669c9d283b0e2afe3f13e294b2561ac5
status-b2g-v2.0: affected → fixed
(In reply to David Flanagan [:djf] from comment #32)
> Uplifted to v2.0:
> https://github.com/mozilla-b2g/gaia/commit/
> cd198d77669c9d283b0e2afe3f13e294b2561ac5

This definitely saves a lot of memory, but I still see very small growth. I will run automation for 24 hours to see whether it increases further. Thanks for the help!
Flags: needinfo?(tkundu)
This issue has been successfully verified on Flame 2.0, 2.1, and 2.2.
See attachment: verify.png
Reproduction rate: 0/5

Flame 2.1 new versions:
Gaia-Rev        ccb49abe412c978a4045f0c75abff534372716c4
Gecko-Rev       https://hg.mozilla.org/releases/mozilla-b2g34_v2_1/rev/18fb67530b22
Build-ID        20141130001203
Version         34.0
Flame 2.0 new versions:
Gaia-Rev        8d1e868864c8a8f1e037685f0656d1da70d08c06
Gecko-Rev       https://hg.mozilla.org/releases/mozilla-b2g32_v2_0/rev/c756bd8bf3c3
Build-ID        20141130000204
Version         32.0
Flame 2.2 new version:
Gaia-Rev        7119da7a86cd803840678ca3a6067e5622adc481
Gecko-Rev       https://hg.mozilla.org/mozilla-central/rev/df3fc7cb7e80
Build-ID        20141130040207
Version         37.0a1
(In reply to Tapas[:tkundu on #b2g/gaia/memshrink/gfx] (always NI me) from comment #33)
> (In reply to David Flanagan [:djf] from comment #32)
> > Uplifted to v2.0:
> > https://github.com/mozilla-b2g/gaia/commit/
> > cd198d77669c9d283b0e2afe3f13e294b2561ac5
> 
> This definitely saves a lot memory but I am still very small growth. I will
> run automation for 24 hours to see if it increases more or not. Thanks for
> help !

Hi Tapas,
The growth is small, and the USS fluctuates within about 4MB; sometimes it drops back to the same level as the first playback, so I think it is working correctly. We did not test for 24 hours; if there is a problem, please contact me.
Flags: needinfo?(tkundu)
Created attachment 8530818 [details]
flame 2.2
(In reply to Alissa from comment #35)
> (In reply to Tapas[:tkundu on #b2g/gaia/memshrink/gfx] (always NI me) from
> comment #33)
> > (In reply to David Flanagan [:djf] from comment #32)
> > > Uplifted to v2.0:
> > > https://github.com/mozilla-b2g/gaia/commit/
> > > cd198d77669c9d283b0e2afe3f13e294b2561ac5
> > 
> > This definitely saves a lot memory but I am still very small growth. I will
> > run automation for 24 hours to see if it increases more or not. Thanks for
> > help !
> 
> Hi tapas,
> The growth is small,and uss fluctuate in 4M,sometimes it reduce to the same
> with first time playing,So I think it is right,We do not test it for 24
> hours,if there is problem please contact with me.

Yeah, agreed. This bug can be closed since the fixes have already landed :) Sorry for the delayed update.
Flags: needinfo?(tkundu)
Flags: needinfo?(huayu.li)
Status: RESOLVED → VERIFIED
status-b2g-v2.0: fixed → verified
status-b2g-v2.1: fixed → verified
status-b2g-v2.2: fixed → verified
Flags: needinfo?(huayu.li)