Bug 1074310 (Closed): [e10s] 5mb to 29mb of VP8 leaks with e10s
Opened 10 years ago; closed 9 years ago
Categories: Core :: WebRTC: Audio/Video (defect, P2)
Status: RESOLVED DUPLICATE of bug 1198107
Tracking flags: e10s: + | backlog: webrtc/webaudio+
People: mccr8 (Reporter); Unassigned
References: Blocks 1 open bug
Whiteboard: [MemShrink:P1]
Attachments: 1 file, 1 obsolete file (20.07 KB, text/plain)
Description
This is the largest single stack, but there are some other substantial ones, too:
Indirect leak of 9216046 byte(s) in 2 object(s) allocated from:
#0 0x7f25dfe474f1 in malloc /builds/slave/moz-toolchain/src/llvm/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:74
#1 0x7f25cbf70dd6 in vpx_memalign build/media/libvpx/vpx_mem/vpx_mem.c:125
#2 0x7f25cbf70dd6 in vpx_calloc build/media/libvpx/vpx_mem/vpx_mem.c:144
#3 0x7f25cc04eadf in vp8_alloc_compressor_data build/media/libvpx/vp8/encoder/onyx_if.c:1194
#4 0x7f25cc0525c7 in vp8_change_config build/media/libvpx/vp8/encoder/onyx_if.c:1713
#5 0x7f25cc0535e5 in init_config build/media/libvpx/vp8/encoder/onyx_if.c:1349
#6 0x7f25cc0535e5 in vp8_create_compressor build/media/libvpx/vp8/encoder/onyx_if.c:1806
Some other stacks have webrtc::VP8EncoderImpl::InitEncode in them, so I suspect WebRTC is to blame.
We don't report indirect leaks like this on TBPL, which is why we didn't know about this earlier. I guess some global variable ends up entraining all of this data.
Comment 1 (Reporter) • 10 years ago
Comment 2 (Sergey Matveev) • 10 years ago
Why don't you report indirect leaks? There are legitimate situations where you can have indirectly leaked blocks without any directly leaked blocks (e.g. reference cycles).
In this case though it seems more likely that you've suppressed the directly leaked block that owns those.
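For readers less familiar with the distinction being drawn here: LSan calls a block a "direct" leak when nothing references it at all, and an "indirect" leak when it is still referenced, but only from other leaked blocks. A minimal, purely illustrative C sketch (not code from this bug) of the two categories:

    /* Direct vs. indirect leaks as LSan classifies them (illustrative only). */
    #include <stdlib.h>

    struct node {
        struct node *child;
    };

    int main(void) {
        struct node *owner = malloc(sizeof *owner);      /* becomes a direct leak   */
        owner->child = malloc(sizeof *owner->child);     /* becomes an indirect leak */
        owner->child->child = NULL;
        owner = NULL;  /* last reference to the first block is gone; the second
                          block is now reachable only through a leaked block    */
        return 0;
    }

A reference cycle with no outside pointer into it is the case mentioned above: every block in the cycle is referenced by another leaked block, so LSan reports only indirect leaks and no direct one.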
Comment 3 (Reporter) • 10 years ago
(In reply to Sergey Matveev from comment #2)
> Why don't you report indirect leaks? There are legitimate situations where
> you can have indirectly leaked blocks without any directly leaked blocks
> (e.g. reference cycles).
We have a tiny persistent indirect leak on a few test suites that will be a pain to track down, and its stack is generic enough that a suppression would be overly broad, so I decided it was better to ignore indirect leaks than to add a suppression for it. At the time, I hadn't seen any large indirect leaks like this one. I filed bug 1074317 for enabling reporting of indirect leaks once the existing indirect leaks have been fixed.
> In this case though it seems more likely that you've suppressed the directly
> leaked block that owns those.
I guess that could be the case. There are a few remaining WebRTC suppressions that could be related, or one of the library suppressions could be catching the direct leak.
I just looked at another 4 or so M3 logs, and only the first one I looked at had this leak, so it seems like it is an intermittent failure. Awesome...
Summary: 22MB of indirect VP8 leaks in Mochitest-3 reported by LSan → intermittent 22MB of indirect VP8 leaks in Mochitest-3 reported by LSan
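For context on the suppression mechanism discussed above: LSan suppressions are substring patterns matched against the function, source-file, and module names in a leak's allocation stack, so a broad pattern can silently swallow the direct leak that owns blocks like these. A hypothetical sketch of what such entries look like (these are not the actual entries in Mozilla's suppression list):

    # leak:<pattern> suppresses any leak whose allocation stack contains <pattern>
    leak:vp8_create_compressor
    # an overly broad, library-wide pattern of the kind that could hide the owning block
    leak:media/libvpx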
Comment 4 (Sergey Matveev) • 10 years ago
Also, what precisely do you mean by "entraining"? If those were referenced from a global variable, they wouldn't be reported as leaked.
Comment 5 (Reporter) • 10 years ago
(In reply to Sergey Matveev from comment #4)
> Also, what precisely do you mean by "entraining"? If those were referenced
> from a global variable, they wouldn't be reported as leaked.
Oops, right, I just got confused. Ignore that, then!
Comment 6 (Ralph Giles) • 10 years ago
Is this particular leak new since the patches on bug 1063356 landed on Thursday? I'm curious if the follow-up patches there will fix it, if it's inside the library.
Comment 7 (Reporter) • 10 years ago
(In reply to Ralph Giles (:rillian) from comment #6)
> Is this particular leak new since the patches on bug 1063356 landed on
> Thursday? I'm curious if the follow-up patches there will fix it, if it's
> inside the library.
Unfortunately, I have no idea when this started happening, or even how often it happens.
Updated • 10 years ago
Whiteboard: [MemShrink] → [MemShrink:P2]
Comment 8 (Reporter) • 9 years ago
While this seems to be intermittent without e10s, with e10s it seems to leak every run, though the size varies. This is a pretty huge leak.
https://treeherder.mozilla.org/#/jobs?repo=try&revision=9aabd4f8f0e4
tracking-e10s: --- → ?
Summary: intermittent 22MB of indirect VP8 leaks in Mochitest-3 reported by LSan → [e10s] 5mb to 29mb of VP8 leaks with e10s
Comment 9 (Reporter) • 9 years ago
This looks similar to the intermittent non-e10s stacks I was seeing before. It looks like WebRTC is involved, though I don't know where things are going wrong.
Attachment #8496966 - Attachment is obsolete: true
Comment 10 (Reporter) • 9 years ago
Putting this up for re-triage given that it is non-intermittent with e10s.
Whiteboard: [MemShrink:P2] → [MemShrink]
Comment 11 (Reporter) • 9 years ago
Anthony, is there somebody who can look at this large e10s leak? It looks like rillian is on PTO. Thanks.
Flags: needinfo?(ajones)
Comment 12 • 9 years ago
This seems like a pretty severe leak; now that it's reproducible, let's make it a P1.
Whiteboard: [MemShrink] → [MemShrink:P1]
Comment 13 • 9 years ago
Maire - do you want to look at this? I don't have anyone available to investigate.
Flags: needinfo?(mreavy)
Comment 14 • 9 years ago
Paul (Kerr) can look at this, but he has higher priority bugs ahead of this. This appears to be a shutdown leak under e10s -- so I'm making this a P2 for WebRTC. If there's something I'm missing here and someone feels it should be a higher priority for my team, please explain. Thanks.
backlog: --- → webRTC+
Rank: 21
Component: Audio/Video → WebRTC: Audio/Video
Flags: needinfo?(mreavy)
Priority: -- → P2
Updated • 9 years ago
Flags: needinfo?(ajones)
Updated • 9 years ago
Comment 15 • 9 years ago
It's surprising that users aren't reporting problems due to this. Generally people complain about leaks that are this big and frequent.
Maybe WebRTC usage is low enough that people aren't noticing it in practice.
Comment 16 (Randell Jesup) • 9 years ago
I think this is a dup of bug 1198107, whose fix landed last week. That leak only happened when the browser believed the CPU was overloaded and had to reduce resolution (or maybe if the bandwidth was too low), and the amount lost depends on how many times it transitions back and forth.
Can you retry with a current Nightly?
Flags: needinfo?(continuation)
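To make "the amount lost depends on how many times it transitions back and forth" concrete, here is a purely hypothetical C sketch (not the actual libvpx or bug 1198107 code) of the kind of reconfigure path that leaks once per resolution change:

    #include <stdlib.h>

    struct encoder {
        unsigned char *frame_buf;   /* per-resolution scratch buffer */
        size_t         frame_size;
    };

    /* Called every time the encoder is reconfigured for a new frame size. */
    static int reconfigure(struct encoder *enc, size_t new_size) {
        /* BUG: the old buffer is overwritten without free(enc->frame_buf),
         * so each CPU-overload transition loses one allocation.           */
        enc->frame_buf = calloc(1, new_size);
        if (!enc->frame_buf)
            return -1;
        enc->frame_size = new_size;
        return 0;
    }

Under a pattern like this, a session that never changes resolution leaks nothing, while one that bounces between resolutions leaks in proportion to the number of transitions, which is consistent with the variable leak sizes noted in comment 8.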
Comment 17 (Reporter) • 9 years ago
I'm not running it locally; this is on LSan builds. I'll check the Treeherder logs tomorrow and see whether the suppressions I added for this are still being triggered.
Comment 18 (Reporter) • 9 years ago
(In reply to Randell Jesup [:jesup] from comment #16)
> Can you retry with a current Nightly?
Great, I don't see these suppressions being used any more. I'll file a bug about removing the suppression. I also don't see the suppression for bug 982111 being used any more, so maybe that was fixed, too.
Flags: needinfo?(continuation)
Updated (Reporter) • 9 years ago
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → DUPLICATE
Comment 20 (Reporter) • 9 years ago
I filed bug 1201096 for removing the suppressions for this and some other leaks.