Bug 1262549 (Closed) · Opened 8 years ago · Closed 8 years ago

High memory usage triggered by a gif

Categories: Core :: Graphics: ImageLib (defect)
Severity: critical

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla50
Tracking Status: firefox50 --- fixed

People

(Reporter: tsmith, Assigned: seth)

References

(Blocks 1 open bug)

Details

(Keywords: crash, csectype-oom, testcase, Whiteboard: [gfx-noted])

Attachments

(3 files, 1 obsolete file)

Attached file test_case.html
Not 100% sure what is happening here, but it appears to be an OOM crash. There is nothing in the logs (debug, ASan, etc.).
Attached image test_case.gif
I think this is "just" an OOM. It's pretty bad, though. On Mac I don't OOM; my Firefox process goes up to 9 GB, and if I hit about:memory at the right time it tells me I have 34 GB of heap-unclassified. We should understand why this is happening.
The image is ~50000 x 50000. All the memory is from the DeinterlacingFilter here:

http://mxr.mozilla.org/mozilla-central/source/image/SurfaceFilters.h#100

We get the same kind of memory use on release, so whatever the GIF decoder did before SurfacePipe also needed to allocate a full-frame-sized buffer, even when we are downscaling.

Even though this memory is relatively short lived since it lives on a decoder, do we want to report it for about:memory?

If we tried to create a full-sized final output frame of this size, we wouldn't, because of the SurfaceCache::CanHold check in Decoder::AllocateFrameInternal. So perhaps for other allocations that a decoder might make (like this one) we should check whether the allocation can fit in the surface cache, and fail to proceed if it won't. This doesn't make total sense, since we won't be putting that amount in the surface cache and the allocation will be short-lived. Or maybe allow twice the image cache size, because it is a short-lived allocation? Something to limit ridiculously huge allocations.
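The suggested guard might look roughly like the sketch below. Note that the names (`CanHoldBuffer`, `AllocateFrameBuffer`, the 1 GiB cap) are hypothetical illustrations, not the actual SurfaceCache API: the idea is just to check the computed size (with overflow guards) against a capacity limit before attempting the allocation.

```cpp
#include <cstdint>
#include <limits>
#include <memory>
#include <new>

// Assumed capacity limit for illustration only.
constexpr uint64_t kCacheCapacityBytes = 1024ULL * 1024 * 1024;  // 1 GiB

bool CanHoldBuffer(uint64_t aWidth, uint64_t aHeight, uint64_t aBytesPerPixel) {
  // Guard against overflow before each multiplication.
  if (aWidth != 0 && aHeight > std::numeric_limits<uint64_t>::max() / aWidth) {
    return false;
  }
  const uint64_t pixels = aWidth * aHeight;
  if (aBytesPerPixel != 0 &&
      pixels > std::numeric_limits<uint64_t>::max() / aBytesPerPixel) {
    return false;
  }
  return pixels * aBytesPerPixel <= kCacheCapacityBytes;
}

std::unique_ptr<uint8_t[]> AllocateFrameBuffer(uint64_t aWidth, uint64_t aHeight) {
  const uint64_t kBGRA = 4;
  if (!CanHoldBuffer(aWidth, aHeight, kBGRA)) {
    return nullptr;  // Refuse ridiculously huge allocations up front.
  }
  return std::unique_ptr<uint8_t[]>(
      new (std::nothrow) uint8_t[aWidth * aHeight * kBGRA]);
}
```

With this kind of check, the 50000 x 50000 case above fails fast instead of attempting a ~10 GB allocation.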

This bug can be opened up, but I don't have the ability to do that.
Group: gfx-core-security
Keywords: csectype-oom
Summary: Potential OOM crash trigger by a gif → Potential OOM crash triggered by a gif
> Even though this memory is relatively short lived since it lives on a decoder, do we want to report it for about:memory?

Maybe? I guess it depends how likely it'll show up in practice, which may be hard to determine.
From a fuzzing perspective this bug is a bit of a pain, because no logs are generated to help bucket/identify the crash. There is no way to tell whether it is this bug I am seeing or another that may have the same unfortunate behavior.
(In reply to Tyson Smith [:tsmith] from comment #5)
> From a fuzzing perspective this bug is a bit of a pain because no logs are
> generated to help bucket/identify the crash. There is no way to tell if it
> is this bug I am seeing or another that may have the same unfortunate
> behavior.

That's odd; this allocation is specifically marked as fallible, so it should crash. Maybe it doesn't fail, but instead Linux decides to OOM-kill it?
Whiteboard: [gfx-noted]
Summary: Potential OOM crash triggered by a gif → High memory usage triggered by a gif
See Also: → 1277397
(In reply to Timothy Nikkel (:tnikkel) from comment #3)
> Even though this memory is relatively short lived since it lives on a
> decoder, do we want to report it for about:memory?

That's probably way more work than it's worth, as a lot of plumbing would be required and decoders are pretty ephemeral.

> If we tried to create a full sized final output frame of this size we
> wouldn't because the SurfaceCache::CanHold check in
> Decoder::AllocateFrameInternal. So perhaps for other allocations that a
> decoder might make (like this one) we should check if the allocation can fit
> in the surface cache, and fail to proceed if it won't. This doesn't make
> total sense since we won't be putting that amount in the surface cache and
> the allocation will be short lived.

It's reasonable as a heuristic, sure. DeinterlacingFilter is the only SurfaceFilter that allocates enough memory to store the whole image, rather than just a few rows, so I think that's the only place we need to do it right now.
(In reply to Timothy Nikkel (:tnikkel) from comment #6)
> That's odd, this allocation is specifically marked as fallible, so it should
> crash. Maybe it doesn't fail, but instead linux decides to oom kill it?

That's overcommit for you.
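This is why the fallible-allocation pattern doesn't guarantee a clean failure here. A sketch of the pattern (illustrative only, not the actual image-code allocator): on Linux with memory overcommit enabled, `malloc` can return a non-null pointer for a huge request, and the process is only OOM-killed later, when the pages are actually touched, so the null-check path never runs.

```cpp
#include <cstdio>
#include <cstdlib>

unsigned char* TryAllocate(size_t aBytes) {
  unsigned char* buf = static_cast<unsigned char*>(malloc(aBytes));
  if (!buf) {
    // The clean, loggable failure path we'd like to hit...
    std::fprintf(stderr, "allocation of %zu bytes failed cleanly\n", aBytes);
    return nullptr;
  }
  // ...but under overcommit, a huge request may "succeed" here and the
  // process dies later, on first touch, with no log at all.
  return buf;
}
```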
Here's the patch. The actual code is just a few lines in SurfaceFilters.h. This
change does mean that the SurfaceCache must be initialized for any test that
uses DeinterlacingFilter, so I added setup and teardown methods for the
appropriate GTests. (This requires replacing TEST with TEST_F, alas.) I also
added a new GTest that ensures that we can't create a SurfacePipe that would
allocate a huge deinterlacing buffer.
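The TEST to TEST_F move works because GTest fixtures run SetUp before and TearDown after every test, so each test gets a freshly initialized SurfaceCache. A plain-C++ analog of that pattern, with hypothetical names (`SurfaceCacheStub`, `FilterFixture`) standing in for the real code:

```cpp
#include <functional>

// Stand-in for a global singleton that tests must initialize, like the
// SurfaceCache in the real patch.
struct SurfaceCacheStub {
  static bool sInitialized;
  static void Initialize() { sInitialized = true; }
  static void Shutdown() { sInitialized = false; }
};
bool SurfaceCacheStub::sInitialized = false;

// RAII fixture: the constructor plays the role of GTest's SetUp() and the
// destructor plays TearDown(), so no test depends on another test having
// initialized shared state.
struct FilterFixture {
  FilterFixture() { SurfaceCacheStub::Initialize(); }
  ~FilterFixture() { SurfaceCacheStub::Shutdown(); }
};

bool RunWithFixture(const std::function<bool()>& aBody) {
  FilterFixture fixture;
  return aBody();  // Fixture tears down when this scope exits.
}
```

Doing this everywhere, as the comment says, means any test can be run individually without relying on an earlier test having initialized the cache.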
Attachment #8764498 - Flags: review?(edwin)
See Also: → 1281725
Thanks for the review!

This modified version of the patch does the TEST -> TEST_F dance in a couple
more of the test files that I forgot last night. We have to do it everywhere to
make sure that we can run any test individually; we don't want implicit
dependencies between tests.
Attachment #8764498 - Attachment is obsolete: true
Pushed by mfowler@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/51afa8a10978
Refuse to allocate huge deinterlacing buffers. r=edwin
https://hg.mozilla.org/mozilla-central/rev/51afa8a10978
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla50
Blocks: grizzly
Assignee: nobody → seth.bugzilla
Flags: in-testsuite+

Added crashtest in bug 1633121.
