Open Bug 944571 Opened 10 years ago Updated 2 years ago

Convert gfxAlphaBoxBlur to use moz2d

Categories: Core :: Graphics, defect
Platform: x86 macOS
Reporter: mattwoodrow; Assignee: Unassigned
Whiteboard: [leave open]
Attachments: 5 files, 3 obsolete

No description provided.
Attachment #8340185 - Flags: review?
Attachment #8340185 - Flags: review? → review+
Attached patch Part 2: Pass the shadow color in (obsolete) — Splinter Review
Attachment #8340211 - Attachment is obsolete: true
Attachment #8342048 - Flags: review?(roc)
Attachment #8340213 - Attachment is obsolete: true
This should let us use h/w accelerated blurring for direct2d without regressing anything else.

There are a few reasons why using the filters API for all cases isn't possible or desirable yet:

* No support for spreading
* No support for different x/y blur sigmas (unless we use directional blur and make two passes)
* Blurring allocates a new surface, rather than blurring in-place on the input surface
* We have to draw the filter to another temporary so that we can use it as a mask

Given these, I think we should only use filters for direct2d (where the h/w acceleration win should outweigh the cost of the extra copies).
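The backend gating described above can be sketched as a simple predicate. This is an illustrative model, not the Moz2D API: `BackendType` and `ShouldUseFilterBlur` are hypothetical names, and the checks mirror the limitations listed in the comment (no spread support, no per-axis sigmas).

```cpp
#include <cstdint>

// Hypothetical sketch of the decision above: take the filter path only on
// Direct2D, where hardware blurring outweighs the extra surface copies,
// and only when the filters API can express the requested blur.
enum class BackendType : uint8_t { Cairo, Skia, Direct2D };

inline bool ShouldUseFilterBlur(BackendType aBackend,
                                float aSigmaX, float aSigmaY,
                                int aSpreadRadius) {
  if (aSpreadRadius != 0) return false;      // no support for spreading
  if (aSigmaX != aSigmaY) return false;      // no per-axis sigma support
  return aBackend == BackendType::Direct2D;  // h/w win only on D2D
}
```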
Attachment #8342054 - Flags: review?(mstange)
Comment on attachment 8342054 [details] [diff] [review]
Part 5: Use the filters API for blurring when possible and preferable

Review of attachment 8342054 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/thebes/gfxBlur.cpp
@@ +11,5 @@
>  
>  #include "mozilla/gfx/Blur.h"
>  #include "mozilla/gfx/2D.h"
> +#include "mozilla/gfx/Filters.h"
> +#include "../../../minpng.h"

Oops, removed this line locally.
Comment on attachment 8342054 [details] [diff] [review]
Part 5: Use the filters API for blurring when possible and preferable

Review of attachment 8342054 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, but performance would probably be a lot better if you cached the FilterNode. In my experience, with D2D creating the blur filter is much slower than the actual blurring. (You'd probably need one cached FilterNode per DrawTarget backend that you encounter.)

::: gfx/thebes/gfxBlur.cpp
@@ +56,4 @@
>  
> +        // Even though the filters API creates a new surface for blurring, it uses
> +        // the input surface's dimensions, so pad the size out enough to include all
> +        // pixels that will be blurred.

This shouldn't be necessary. The blur filter automatically has a larger output than input, and it treats anything around the input surface as transparent black. Did you see problems without this padding?

@@ +94,5 @@
> +
> +        IntSize size = mBlur->GetSize();
> +
> +        // Make an alpha-only surface to draw on. We will play with the data after
> +         // everything is drawn to create a blur effect.

whitespace

@@ +139,5 @@
> +        mask =
> +            mDestination->CreateSourceSurfaceFromData(mData,
> +                                                      mBlur->GetSize(),
> +                                                      mBlur->GetStride(),
> +                                                     FORMAT_A8);

whitespace

@@ +155,5 @@
> +
> +        IntSize size = source->GetSize();
> +
> +        // Draw the blur onto an intermediate surface so we can use it
> +        // as a mask and apply the shadow color.

Hmm. Instead of a mask, you could add a FILTER_COMPOSITE with COMPOSITE_OPERATOR_IN at the end that composites the blur result with a FILTER_FLOOD with the shadow color. Then you could replace the DrawMask call with the DrawFilter call and you wouldn't need this snapshot. Make sense? But it should probably be done in a different patch / bug.
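The flood + composite-in suggestion above boils down to a per-pixel operation: the "in" operator keeps the flood color only where the blur output has alpha coverage, which is what the mask draw achieves today. A minimal per-pixel model in plain C++ (premultiplied RGBA8; `Pixel` and the function name are illustrative, not the Moz2D filter API):

```cpp
#include <cstdint>

// COMPOSITE_OPERATOR_IN of a solid flood color against the blur's alpha:
// every channel of the flood is scaled by the blur coverage.
struct Pixel { uint8_t r, g, b, a; };

inline Pixel CompositeFloodInBlur(Pixel aFlood, uint8_t aBlurAlpha) {
  // result = flood * blurAlpha / 255, with rounding, on each channel
  auto mul = [aBlurAlpha](uint8_t c) -> uint8_t {
    return static_cast<uint8_t>((c * aBlurAlpha + 127) / 255);
  };
  return { mul(aFlood.r), mul(aFlood.g), mul(aFlood.b), mul(aFlood.a) };
}
```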
Attachment #8342054 - Flags: review?(mstange) → review+
(In reply to Matt Woodrow (:mattwoodrow) from comment #11)
> Created attachment 8342054 [details] [diff] [review]
> Part 5: Use the filters API for blurring when possible and preferable
> 
> This should let us use h/w accelerated blurring for direct2d without
> regressing anything else.
> 
> There are a few reasons why using the filters API for all cases isn't
> possible or desirable yet:
> 
> * No support for spreading

The morphology dilate filter does spreading, doesn't it?

> * No support for different x/y blur sigmas (unless we use directional blur
> and make two passes)

Yes. If we cached the FilterNodes, I see no problem using two directional blurs. With D2D it should still be much faster than doing it in software (assuming we don't recreate the FilterNodes each time).

> * Blurring allocates a new surface, rather than blurring in-place on the
> input surface

Or rather, blurring needs a target surface that's different from the input surface. Yes.

> * We have to draw the filter to another temporary so that we can use it as a
> mask

When we use flood + composite filters instead of a mask, we should actually end up with the same number of surfaces as before: One input surface and one surface that contains the colored blur output.
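The two-pass directional blur discussed above (one horizontal pass, one vertical pass, each with its own radius) can be modeled in plain C++. This is a concept sketch using a simple box blur on a float grayscale image, not the Moz2D FilterNode API; out-of-bounds samples are treated as transparent, matching the "transparent black around the input" behavior mentioned earlier.

```cpp
#include <cstddef>
#include <vector>

using Image = std::vector<std::vector<float>>;

// One 1D box-blur pass, horizontal or vertical, with its own radius.
inline Image BoxBlur1D(const Image& aIn, int aRadius, bool aHorizontal) {
  size_t h = aIn.size(), w = aIn[0].size();
  Image out(h, std::vector<float>(w, 0.0f));
  int win = 2 * aRadius + 1;
  for (size_t y = 0; y < h; ++y) {
    for (size_t x = 0; x < w; ++x) {
      float sum = 0.0f;
      for (int d = -aRadius; d <= aRadius; ++d) {
        long sx = aHorizontal ? long(x) + d : long(x);
        long sy = aHorizontal ? long(y) : long(y) + d;
        if (sx >= 0 && sx < long(w) && sy >= 0 && sy < long(h)) {
          sum += aIn[sy][sx];  // outside the input counts as transparent
        }
      }
      out[y][x] = sum / win;
    }
  }
  return out;
}

// Different x and y blur radii via two chained directional passes.
inline Image DirectionalBlur(const Image& aIn, int aRadiusX, int aRadiusY) {
  return BoxBlur1D(BoxBlur1D(aIn, aRadiusX, true), aRadiusY, false);
}
```

With cached filter nodes, a hardware backend would run the same two passes as two directional blur filters chained together.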
(In reply to Markus Stange [:mstange] from comment #13)
> ::: gfx/thebes/gfxBlur.cpp
> @@ +56,4 @@
> >  
> > +        // Even though the filters API creates a new surface for blurring, it uses
> > +        // the input surface's dimensions, so pad the size out enough to include all
> > +        // pixels that will be blurred.
> 
> This shouldn't be necessary. The blur filter automatically has a larger
> output than input, and it treats anything around the input surface as
> transparent black. Did you see problems without this padding?

The only place where you need to pay attention is the DrawFilter call: the filter render rect needs to include the whole shadow, otherwise it will be cut off. And the target point needs to be shifted down and to the right such that the top left part of the blur is not clipped.
Input surfaces are positioned at (0, 0) in the filter space. Blurring will make the blur extend into the negative ranges, so the filter render rect will need to have a negative TopLeft point.
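The render-rect arithmetic described above can be sketched as follows. This assumes the common convention of taking the blur extent as three standard deviations; `IntRect` and the function name are illustrative, not Moz2D types.

```cpp
#include <cmath>

// The input surface sits at (0, 0) in filter space, so the render rect
// must extend into negative coordinates by the blur extent on the top and
// left, and be padded by the same amount on the bottom and right, so no
// part of the shadow is clipped.
struct IntRect { int x, y, width, height; };

inline IntRect BlurRenderRect(int aWidth, int aHeight,
                              float aSigmaX, float aSigmaY) {
  int padX = static_cast<int>(std::ceil(3.0f * aSigmaX));
  int padY = static_cast<int>(std::ceil(3.0f * aSigmaY));
  return { -padX, -padY, aWidth + 2 * padX, aHeight + 2 * padY };
}
```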
Depends on: 946501
(In reply to Markus Stange [:mstange] from comment #13)

> Looks good, but performance would probably be a lot better if you cached the
> FilterNode. In my experience, with D2D creating the blur filter is much
> slower than the actual blurring. (You'd probably need one cached FilterNode
> per DrawTarget backend that you encounter.)

Should we try to implement this caching inside DrawTargetD2D, then?

Maintaining a cache in each consumer of filters is painful, and it's only necessary for direct2d.
(In reply to Matt Woodrow (:mattwoodrow) from comment #16)
> (In reply to Markus Stange [:mstange] from comment #13)
> 
> > Looks good, but performance would probably be a lot better if you cached the
> > FilterNode. In my experience, with D2D creating the blur filter is much
> > slower than the actual blurring. (You'd probably need one cached FilterNode
> > per DrawTarget backend that you encounter.)
> 
> Should we try to implement this caching inside DrawTargetD2D, then?

Maybe. I'm not sure yet what this caching should look like. We can't simply have one global instance per filter type because two filters of the same type can be present in the same filter graph.
We could have a recycling FilterNode wrapper that puts unneeded filter nodes back into a central recycling cache in its destructor, for example.

> Having to have a cache for each consumer of filters is painful, and it's
> only necessary for direct2d.

It will also be necessary for accelerated GL filters, once we have them. And for those it would make sense to bake multiple connected filter nodes into the same shader if possible, and then we'd need to cache the whole filter structure if we want to avoid shader recompilation. Just avoiding calls to CreateFilter in the consumer seems like a simpler solution.
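The recycling-wrapper idea floated above can be sketched in plain C++: an RAII handle that returns its node to a central cache on destruction, so the next consumer skips the expensive creation. `FilterNode` here is a stand-in struct, not the Moz2D class, and the cache is deliberately a pool (not one global instance per type), since two filters of the same type can be live in the same filter graph.

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct FilterNode { int type; };

class FilterNodeCache {
 public:
  // Reuse a recycled node of this type if one is pooled, else create one.
  std::unique_ptr<FilterNode> Acquire(int aType) {
    for (size_t i = 0; i < mPool.size(); ++i) {
      if (mPool[i]->type == aType) {
        auto node = std::move(mPool[i]);
        mPool.erase(mPool.begin() + i);
        return node;
      }
    }
    ++mCreations;  // stands in for the slow CreateFilter call
    return std::make_unique<FilterNode>(FilterNode{aType});
  }
  void Recycle(std::unique_ptr<FilterNode> aNode) {
    mPool.push_back(std::move(aNode));
  }
  int Creations() const { return mCreations; }

 private:
  std::vector<std::unique_ptr<FilterNode>> mPool;
  int mCreations = 0;
};

// RAII wrapper: puts the unneeded node back into the cache in its destructor.
class RecyclingFilterNode {
 public:
  RecyclingFilterNode(FilterNodeCache& aCache, int aType)
      : mCache(aCache), mNode(aCache.Acquire(aType)) {}
  ~RecyclingFilterNode() { mCache.Recycle(std::move(mNode)); }
  FilterNode* Get() { return mNode.get(); }

 private:
  FilterNodeCache& mCache;
  std::unique_ptr<FilterNode> mNode;
};
```

Because the pool can hold several nodes of one type, two wrappers of the same filter type alive at once simply trigger one extra creation rather than aliasing a shared node.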
Depends on: 953324
Depends on: 958363
Severity: normal → S3