Store all frames of animated images in the ImageLib SurfaceCache

RESOLVED FIXED in mozilla37

Status

Core :: ImageLib
RESOLVED FIXED

People

(Reporter: seth, Assigned: seth)

Tracking

(Blocks: 1 bug)
Target Milestone: mozilla37

Attachments

(1 attachment, 2 obsolete attachments)

(Assignee)

Description

2 years ago
At this point all non-animated image frames are stored in the SurfaceCache. However, we still need to start storing animated image frames there. With that change made, we'd complete the requirements for bug 977459 - RasterImage would be able to simultaneously contain multiple decoded versions of the same image. This is a requirement for downscale-during-decode.
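
To make the surface-cache idea concrete, here's a minimal sketch (not the real ImageLib API; every name below is hypothetical): the cache, rather than the image, owns decoded surfaces, and the key carries enough parameters that several decoded versions of one image can coexist.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <tuple>
#include <utility>

struct Surface {};  // stands in for a decoded frame

struct SurfaceKeySketch {
  int32_t width;
  int32_t height;
  bool operator<(const SurfaceKeySketch& aOther) const {
    return std::tie(width, height) < std::tie(aOther.width, aOther.height);
  }
};

class SurfaceCacheSketch {
public:
  // The same image may be inserted under many keys, e.g. one per target
  // size, which is what lets multiple decoded versions coexist.
  void Insert(uintptr_t aImageId, SurfaceKeySketch aKey,
              std::shared_ptr<Surface> aSurface) {
    mSurfaces[{aImageId, aKey}] = std::move(aSurface);
  }

  std::shared_ptr<Surface> Lookup(uintptr_t aImageId, SurfaceKeySketch aKey) {
    auto it = mSurfaces.find({aImageId, aKey});
    return it != mSurfaces.end() ? it->second : nullptr;
  }

private:
  std::map<std::pair<uintptr_t, SurfaceKeySketch>,
           std::shared_ptr<Surface>> mSurfaces;
};
```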

Comment 1

2 years ago
Question: wouldn't this cause issues for the common practice of having very large animated images (usually gifs) as "pseudo video", often with many hundreds of frames and many MBs in size?
(Assignee)

Updated

2 years ago
Depends on: 1116719
(Assignee)

Comment 2

2 years ago
Created attachment 8542907 [details] [diff] [review]
Store animated frames in the SurfaceCache

Here's the patch.

There are a few things going on here:

- The changes in imgFrame add a new method, imgFrame::SetRawAccessOnly, which
  acts the same as if we kept a RawAccessFrameRef permanently alive for the
  given frame - it prevents the frame from ever being optimized. That's exactly
  what we need for animated frames.

- In SurfaceCache, I just updated RasterSurfaceKey to include a frame number,
  like VectorSurfaceKey has. (The new key and lookup path are sketched after
  this list.) I also removed a couple of pointless |const| qualifiers on
  variables that were passed by value anyway, which made sticking to 80
  columns a little easier.

- In RasterImage, a lot of the changes are just updates to include the frame
  number in RasterSurfaceKey wherever we use it, and to store a frame count
  (|mFrameCount|) instead of just whether we have the first frame. The
  substantial change is in LookupFrameInternal, which now calls
  FrameBlender::GetCompositedFrame when the lookup is for an animated image -
  but only for frames after the first, since after all there's nothing special
  about the first frame of an animated image! To make sure this is handled
  correctly, I added an assertion that we're never returning a paletted frame
  from LookupFrame. (In case it's not clear why, composited frames aren't
  paletted, but the underlying frames they're composited from often are.) In
  addition to this, InternalAddFrame is simplified, since the frames we allocate
  there are now always stored in the SurfaceCache and never in the FrameBlender.

- FrameBlender is now simplified since it doesn't actually hold frames anymore,
  it just looks them up in the SurfaceCache. (It does still hold the special
  frames it uses for compositing, though.) I took the opportunity to clean
  things up a bit, especially since bigger changes are coming in upcoming bugs.

- FrameAnimator just needed some minor updates because of the changes in
  RasterImage and FrameBlender.
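
As a rough illustration of the key and lookup changes above, here's a simplified sketch; the names and signatures are hypothetical stand-ins, not the real code in SurfaceCache.h and RasterImage.cpp.

```cpp
#include <cassert>
#include <cstdint>

struct RasterSurfaceKeySketch {
  int32_t width;
  int32_t height;
  uint32_t flags;     // decode flags
  uint32_t frameNum;  // new: a frame number, as VectorSurfaceKey already had
};

struct FrameSketch {
  bool isPaletted = false;
  // imgFrame::SetRawAccessOnly would mark a frame like this as permanently
  // unoptimizable, as if a RawAccessFrameRef were held forever.
};

// Stubbed stand-ins for SurfaceCache lookup and
// FrameBlender::GetCompositedFrame; they return nothing in this sketch.
FrameSketch* CacheLookup(const RasterSurfaceKeySketch&) { return nullptr; }
FrameSketch* GetCompositedFrame(uint32_t) { return nullptr; }

FrameSketch* LookupFrameSketch(bool aIsAnimated, uint32_t aFrameNum,
                               const RasterSurfaceKeySketch& aKey) {
  // There's nothing special about the first frame of an animated image: it
  // never needs compositing, so it takes the ordinary cache lookup path.
  FrameSketch* frame = (aIsAnimated && aFrameNum > 0)
                         ? GetCompositedFrame(aFrameNum)
                         : CacheLookup(aKey);
  // Composited frames are never paletted, though the underlying frames
  // they're composited from often are.
  assert(!frame || !frame->isPaletted);
  return frame;
}
```
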
Attachment #8542907 - Flags: review?(tnikkel)
(Assignee)

Comment 3

2 years ago
(In reply to Mark Straver from comment #1)
> Question: wouldn't this cause issues for the common practice of having very
> large animated images (usually gifs) as "pseudo video", often with many
> hundreds of frames and many MBs in size?

I think what you're asking is: will memory usage be excessive if we end up storing multiple versions of large animated images?

The answer is yes, but that's really a matter of policy. I have no intention of decoding multiple versions of animated images in situations that normal web content will encounter, and I don't see that changing any time soon.

This bug is about removing technical issues that make downscale-during-decode hard to implement. Only once those technical issues are solved can we even begin to worry about the policy issues, after all.
(Assignee)

Updated

2 years ago
Blocks: 1116737

Comment 4

2 years ago
(In reply to Seth Fowler [:seth] from comment #3)
> > Question: wouldn't this cause issues for the common practice of having very
> > large animated images (usually gifs) as "pseudo video", often with many
> > hundreds of frames and many MBs in size?
> 
> I think what you're asking is: will memory usage be excessive if we end up
> storing multiple versions of large animated images?

Not necessarily just raw memory consumption (although keeping everything decoded in memory might be excessive, especially considering where exactly this surface cache is stored).
What I worry about is it being a cache - will storing many elements in it cause issues, and/or will one animated image bump other elements out, slowing things down considerably? Or does it have no cache maintenance at all to get rid of stale elements - meaning: how is cache management actually done in practice? I have to admit I haven't looked at the code involved yet, but I gather it'd be quicker if the person with everything fresh in memory answers that ;)

I don't think dismissing this particular usage scenario out of hand is wise. Sure, an "average" web page with static content for a company presentation (what you call "normal web content") might not have this kind of animated image, but it is a very common practice in some circles, especially on social media and community sites. Even just having animated avatars on fora might inflate the surface cache to silly proportions if you store decoded versions of all frames in it.
I think policy really is important to think about *before* something gets implemented. Is it an acceptable trade-off to push everything into the surface cache for all animated images just to be able to scale them on decode? In which cases would animated images be scaled such that scale-on-decode matters? And so on.

Not trying to criticize, but I think these are important questions.
(Assignee)

Comment 5

2 years ago
(In reply to Mark Straver from comment #4)
> What I worry about is it being a cache

It's a cache for decoded versions of images (and things similar to that), yes. The solution we had before the SurfaceCache got introduced, the DiscardTracker, was also a cache. The fundamental difference between them is that with DiscardTracker, the Image objects still held the owning references to the decoded surfaces of the image, while with SurfaceCache, the SurfaceCache is the owner. This simplifies a lot of things in practice.

(And yes, every aspect of cache management has been accounted for.)
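
For illustration only, here's a minimal sketch of that ownership difference, using std::shared_ptr/weak_ptr as stand-ins for Gecko's actual reference counting; the types are hypothetical.

```cpp
#include <memory>

struct DecodedSurface {};

// Before: the DiscardTracker model. The Image holds the owning reference;
// the tracker merely asks images to discard under memory pressure.
struct ImageUnderDiscardTracker {
  std::shared_ptr<DecodedSurface> mSurface;  // owning reference lives here
};

// After: the SurfaceCache model. The cache owns the surface outright, so
// eviction is purely a cache-side decision and the image just looks it up.
struct SurfaceCacheEntry {
  std::shared_ptr<DecodedSurface> mSurface;  // owner
};
struct ImageUnderSurfaceCache {
  std::weak_ptr<DecodedSurface> mSurface;    // non-owning; may have expired
};
```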

> I don't think that discarding this particular usage scenario off-hand is
> wise. Sure, an "average" web page with static content for a company
> presentation (what you call "normal web content")

You misinterpreted my words. By "normal web content", I mean web content that does not use certain obscure features of the web platform (in practice, this basically means certain WebGL texture formats). To support those obscure features, we might need to decode a second copy of the image. However, that's already the case now; it's just that right now we have to throw away the original copy, which can lead to a cycle of repeatedly redecoding large images and, in some situations, 100% CPU usage. We don't want to keep doing that.

> Even just having animated avatars on fora might inflate the surface cache to silly proportions if you store decoded versions of all frames in it.

We already store decoded versions of *all* animated image frames, just in a different place in memory. Absolutely nothing is changing with respect to that.

> I think policy really is important to think about *before* something gets
> implemented.

I have thought, very hard, about our policy in these areas.

I want to make something clear though: there are no policy changes being made in this bug, other than the fact that we will avoid the potential 100% CPU usage issue with certain WebGL texture formats mentioned above. For *virtually all web content*, we will store at most one copy of each decoded frame in the SurfaceCache, which is precisely what we do now, except that the memory is managed by a different object. That's all.

Comment 6

2 years ago
Perfectly clear - thanks for clarifying :)
(Assignee)

Comment 7

2 years ago
Try job here:

https://tbpl.mozilla.org/?tree=Try&rev=fe5d7df854a4
(Assignee)

Comment 8

2 years ago
Those try failures are just hitting an assertion that we never discard animated image frames. Pretty sure we just need to call SurfaceCache::LockImage when we get our second frame, which it looks like I've failed to do in this patch. I'll upload a revised version tomorrow.

Still, it's encouraging that the release builds are all totally green! It looks like, once this patch lands, we have the option of flipping on discarding for animated images, if we want to. I actually think that might be a good idea, but I want to give it some further thought, since animated images can be quite expensive to redecode. (And we certainly don't want to make a change like that in this bug.)
(Assignee)

Comment 9

2 years ago
Created attachment 8544229 [details] [diff] [review]
Store animated frames in the SurfaceCache

Alright, so it wasn't visible in the diff, but we actually *were* locking the image when we got our second frame. The problem may be that we did it by calling RasterImage::LockImage rather than SurfaceCache::LockImage, and RasterImage::LockImage can fail in certain situations that don't apply to SurfaceCache::LockImage. Before doing any further debugging, I'd like to verify whether that's the cause, so here's an updated patch. A new try job will follow.
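
For readers following along, here's a rough sketch of the distinction being tested, assuming (hypothetical shapes, not the real Gecko code) that RasterImage::LockImage has failure paths of its own and maintains mLockCount, while SurfaceCache::LockImage only marks the image's surfaces as unevictable. As comment 11 below confirms, the mLockCount bookkeeping turns out to matter.

```cpp
#include <cstdint>

struct ImageKeySketch {};

struct SurfaceCacheLockSketch {
  static void LockImage(ImageKeySketch) { /* mark surfaces unevictable */ }
};

class RasterImageSketch {
public:
  bool LockImage() {
    if (mError) {
      return false;  // a failure mode SurfaceCache::LockImage doesn't have
    }
    // Skipping this increment is what the next comment identifies as the
    // actual bug in the "just call SurfaceCache::LockImage" approach.
    if (++mLockCount == 1) {
      SurfaceCacheLockSketch::LockImage(ImageKeySketch{});
    }
    return true;
  }

private:
  bool mError = false;
  uint32_t mLockCount = 0;
};
```
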
Attachment #8544229 - Flags: review?(tnikkel)
(Assignee)

Updated

2 years ago
Attachment #8542907 - Attachment is obsolete: true
Attachment #8542907 - Flags: review?(tnikkel)
(Assignee)

Comment 10

2 years ago
Here's the new try job:

https://tbpl.mozilla.org/?tree=Try&rev=f1f630fdc6dc
(Assignee)

Comment 11

2 years ago
Created attachment 8544296 [details] [diff] [review]
Store animated frames in the SurfaceCache

I take it back; it's important that we increment mLockCount, so calling RasterImage::LockImage is the right thing to do. (Failure to do that is what caused the failures in the last try job.)

Since my initial guess wasn't correct, I debugged this locally, and it looks like the problem was that FrameBlender::GetCompositedFrame was too restrictive. It needs to be willing to return a raw frame, since APNG frames are RGBA, and when they cover the entire image we don't copy them to the compositing frame. I've loosened its behavior so it will do that now, with an added assertion that the returned frame is not paletted.
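
A minimal sketch of the loosened behavior, with hypothetical names; it only aims to show the shape of the fix, and assumes both frame pointers are valid.

```cpp
#include <cassert>

struct AnimFrameSketch {
  bool isPaletted;
  bool isFullFrameRGBA;  // e.g. an APNG frame covering the whole image
};

AnimFrameSketch* GetCompositedFrameSketch(AnimFrameSketch* aRawFrame,
                                          AnimFrameSketch* aCompositingFrame) {
  // Full-frame RGBA frames were never copied into the compositing frame,
  // so hand back the raw frame instead of failing the lookup.
  AnimFrameSketch* result =
      aRawFrame->isFullFrameRGBA ? aRawFrame : aCompositingFrame;
  assert(!result->isPaletted);  // composited output is never paletted
  return result;
}
```
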
Attachment #8544296 - Flags: review?(tnikkel)
(Assignee)

Updated

2 years ago
Attachment #8544229 - Attachment is obsolete: true
Attachment #8544229 - Flags: review?(tnikkel)
(Assignee)

Comment 12

2 years ago
Here's the new try job:

https://tbpl.mozilla.org/?tree=Try&rev=f7a2aee65a6f
(Assignee)

Comment 13

2 years ago
Alright, looks like this is now 100% green on try.
(Assignee)

Updated

2 years ago
Blocks: 1116359
Attachment #8544296 - Flags: review?(tnikkel) → review+
(Assignee)

Comment 14

2 years ago
Thanks for the review!

Pushed:

https://hg.mozilla.org/integration/mozilla-inbound/rev/6e4dce410c27
(Assignee)

Comment 15

2 years ago
Pushed a followup to fix a missing #include that was masked by unified builds:

https://hg.mozilla.org/integration/mozilla-inbound/rev/839f6e4f322a
https://hg.mozilla.org/mozilla-central/rev/6e4dce410c27
https://hg.mozilla.org/mozilla-central/rev/839f6e4f322a
Status: NEW → RESOLVED
Last Resolved: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla37

Updated

2 years ago
Depends on: 1145496

Updated

2 years ago
Depends on: 1150089

Updated

2 years ago
Depends on: 1153527
No longer depends on: 1153527