Bug 944571 - Convert gfxAlphaBoxBlur to use moz2d
Status: NEW
Whiteboard: [leave open]
Product: Core
Classification: Components
Component: Graphics
Version: unspecified
Platform: x86 Mac OS X
Importance: normal (1 vote)
Assigned To: Nobody; OK to take it and work on it
Milan Sreckovic [:milan]
Mentors:
Depends on: 946501, 953324, 958363
Blocks:
 
Reported: 2013-11-28 18:56 PST by Matt Woodrow (:mattwoodrow)
Modified: 2014-11-21 16:31 PST
CC: 8 users


Attachments

Part 1: Use moz2d surfaces (7.25 KB, patch)
2013-11-28 18:57 PST, Matt Woodrow (:mattwoodrow)
bas: review+

Part 2: Pass the shadow color in (13.13 KB, patch)
2013-11-28 20:27 PST, Matt Woodrow (:mattwoodrow)
no flags

Part 3: Pass the destination surface in early so we can know what type it is (9.09 KB, patch)
2013-11-28 20:27 PST, Matt Woodrow (:mattwoodrow)
no flags

Part 4: Use the filters API for blurring when possible (WIP) (9.99 KB, patch)
2013-11-28 20:28 PST, Matt Woodrow (:mattwoodrow)
no flags

Part 2: Pass the shadow color in (12.92 KB, patch)
2013-12-03 16:46 PST, Matt Woodrow (:mattwoodrow)
roc: review+

Part 3: Pass the destination surface in early so we can know what type it is (9.03 KB, patch)
2013-12-03 16:47 PST, Matt Woodrow (:mattwoodrow)
roc: review+

Part 4: Pass the blur standard deviation in instead of the radius (7.48 KB, patch)
2013-12-03 16:47 PST, Matt Woodrow (:mattwoodrow)
roc: review+

Part 5: Use the filters API for blurring when possible and preferable (9.86 KB, patch)
2013-12-03 16:52 PST, Matt Woodrow (:mattwoodrow)
mstange: review+

Description Matt Woodrow (:mattwoodrow) 2013-11-28 18:56:21 PST
Comment 1 Matt Woodrow (:mattwoodrow) 2013-11-28 18:57:40 PST
Created attachment 8340185 [details] [diff] [review]
Part 1: Use moz2d surfaces
Comment 2 Matt Woodrow (:mattwoodrow) 2013-11-28 20:27:26 PST
Created attachment 8340211 [details] [diff] [review]
Part 2: Pass the shadow color in
Comment 3 Matt Woodrow (:mattwoodrow) 2013-11-28 20:27:52 PST
Created attachment 8340212 [details] [diff] [review]
Part 3: Pass the destination surface in early so we can know what type it is
Comment 4 Matt Woodrow (:mattwoodrow) 2013-11-28 20:28:14 PST
Created attachment 8340213 [details] [diff] [review]
Part 4: Use the filters API for blurring when possible (WIP)
Comment 5 Matt Woodrow (:mattwoodrow) 2013-11-29 11:21:12 PST
https://hg.mozilla.org/integration/mozilla-inbound/rev/853f18dd3c3a
Comment 6 Benoit Jacob [:bjacob] (mostly away) 2013-11-30 18:27:23 PST
Fixed the --disable-unified-compilation build:
http://hg.mozilla.org/integration/mozilla-inbound/rev/d0187315ef02
Comment 8 Matt Woodrow (:mattwoodrow) 2013-12-03 16:46:49 PST
Created attachment 8342048 [details] [diff] [review]
Part 2: Pass the shadow color in
Comment 9 Matt Woodrow (:mattwoodrow) 2013-12-03 16:47:13 PST
Created attachment 8342049 [details] [diff] [review]
Part 3: Pass the destination surface in early so we can know what type it is
Comment 10 Matt Woodrow (:mattwoodrow) 2013-12-03 16:47:47 PST
Created attachment 8342050 [details] [diff] [review]
Part 4: Pass the blur standard deviation in instead of the radius
Comment 11 Matt Woodrow (:mattwoodrow) 2013-12-03 16:52:27 PST
Created attachment 8342054 [details] [diff] [review]
Part 5: Use the filters API for blurring when possible and preferable

This should let us use h/w accelerated blurring for direct2d without regressing anything else.

There are a few reasons why using the filters API for all cases isn't possible or desirable yet:

* No support for spreading
* No support for different x/y blur sigmas (unless we use directional blur and make two passes)
* Blurring allocates a new surface, rather than blurring in-place on the input surface
* We have to draw the filter to another temporary so that we can use it as a mask

Given these, I think we should only be using filters for direct2d (where the h/w acceleration win should outweigh the cost of the extra copies).
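
(A minimal sketch of that gating, checking the destination DrawTarget's backend. ShouldUseFilterBlur is an invented helper name, not from the patches, and the BackendType spelling may differ depending on the Moz2D revision:)

static bool
ShouldUseFilterBlur(DrawTarget* aDT)
{
  // Only take the filters path when the destination is Direct2D, where the
  // GPU blur win should outweigh the cost of the extra surface copies.
  return aDT && aDT->GetBackendType() == BackendType::DIRECT2D;
}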
Comment 12 Matt Woodrow (:mattwoodrow) 2013-12-03 16:53:22 PST
Comment on attachment 8342054 [details] [diff] [review]
Part 5: Use the filters API for blurring when possible and preferable

Review of attachment 8342054 [details] [diff] [review]:
-----------------------------------------------------------------

::: gfx/thebes/gfxBlur.cpp
@@ +11,5 @@
>  
>  #include "mozilla/gfx/Blur.h"
>  #include "mozilla/gfx/2D.h"
> +#include "mozilla/gfx/Filters.h"
> +#include "../../../minpng.h"

Oops, removed this line locally.
Comment 13 Markus Stange [:mstange] (away until Feb 22) 2013-12-04 03:52:53 PST
Comment on attachment 8342054 [details] [diff] [review]
Part 5: Use the filters API for blurring when possible and preferable

Review of attachment 8342054 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, but performance would probably be a lot better if you cached the FilterNode. In my experience, with D2D creating the blur filter is much slower than the actual blurring. (You'd probably need one cached FilterNode per DrawTarget backend that you encounter.)
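
(Rough sketch of the caching idea, for illustration only — the cache container and helper are invented names, the usual mozilla/gfx/2D.h and Filters.h includes plus <map> are assumed, and the FilterType/attribute spellings may differ slightly by revision:)

// One cached blur FilterNode per DrawTarget backend; reuse it and just
// update its attributes on each call instead of re-creating the filter.
static std::map<BackendType, RefPtr<FilterNode>> sBlurFilterCache;

static FilterNode*
GetCachedBlurFilter(DrawTarget* aDT, Float aSigma)
{
  RefPtr<FilterNode>& cached = sBlurFilterCache[aDT->GetBackendType()];
  if (!cached) {
    cached = aDT->CreateFilter(FilterType::GAUSSIAN_BLUR);
  }
  if (cached) {
    cached->SetAttribute(ATT_GAUSSIAN_BLUR_STD_DEVIATION, aSigma);
  }
  return cached;
}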

::: gfx/thebes/gfxBlur.cpp
@@ +56,4 @@
>  
> +        // Even though the filters API creates a new surface for blurring, it uses
> +        // the input surfaces dimensions, so pad the size out enough to include all
> +        // pixels that will be blurred.

This shouldn't be necessary. The blur filter automatically has a larger output than input, and it treats anything around the input surface as transparent black. Did you see problems without this padding?

@@ +94,5 @@
> +
> +        IntSize size = mBlur->GetSize();
> +
> +        // Make an alpha-only surface to draw on. We will play with the data after
> +         // everything is drawn to create a blur effect.

whitespace

@@ +139,5 @@
> +        mask =
> +            mDestination->CreateSourceSurfaceFromData(mData,
> +                                                      mBlur->GetSize(),
> +                                                      mBlur->GetStride(),
> +                                                     FORMAT_A8);

whitespace

@@ +155,5 @@
> +
> +        IntSize size = source->GetSize();
> +
> +        // Draw the blur onto an intermediate surface so we can use it
> +        // as a mask and apply the shadow color.

Hmm. Instead of a mask, you could add a FILTER_COMPOSITE with COMPOSITE_OPERATOR_IN at the end that composites the blur result with a FILTER_FLOOD with the shadow color. Then you could replace the DrawMask call with the DrawFilter call and you wouldn't need this snapshot. Make sense? But it should probably be done in a different patch / bug.
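
(For illustration only — dt, blurMask, sigma, shadowColor, renderRect and destPoint are invented local names, and the enum spellings follow Filters.h and may differ slightly by revision — the graph would look roughly like:)

RefPtr<FilterNode> blur = dt->CreateFilter(FilterType::GAUSSIAN_BLUR);
blur->SetInput(IN_GAUSSIAN_BLUR_IN, blurMask);              // the A8 blur surface
blur->SetAttribute(ATT_GAUSSIAN_BLUR_STD_DEVIATION, sigma);

RefPtr<FilterNode> flood = dt->CreateFilter(FilterType::FLOOD);
flood->SetAttribute(ATT_FLOOD_COLOR, shadowColor);

// Composite the flood "in" the blur: the first input acts as the destination
// and the second as the source, so the shadow color is kept where the blurred
// alpha is non-zero (input order worth double-checking against the filter code).
RefPtr<FilterNode> composite = dt->CreateFilter(FilterType::COMPOSITE);
composite->SetAttribute(ATT_COMPOSITE_OPERATOR, (uint32_t)COMPOSITE_OPERATOR_IN);
composite->SetInput(IN_COMPOSITE_IN_START, blur);
composite->SetInput(IN_COMPOSITE_IN_START + 1, flood);

// A single DrawFilter call then replaces the snapshot + DrawMask step.
dt->DrawFilter(composite, renderRect, destPoint);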
Comment 14 Markus Stange [:mstange] (away until Feb 22) 2013-12-04 03:58:16 PST
(In reply to Matt Woodrow (:mattwoodrow) from comment #11)
> Created attachment 8342054 [details] [diff] [review]
> Part 5: Use the filters API for blurring when possible and preferable
> 
> This should let us use h/w accelerated blurring for direct2d without
> regressing anything else.
> 
> There are a few reasons why using the filters API for all cases isn't
> possible or desirable yet:
> 
> * No support for spreading

The morphology dilate filter does spreading, doesn't it?

> * No support for different x/y blur sigmas (unless we use directional blur
> and make two passes)

Yes. If we cached the FilterNodes, I see no problem using two directional blurs. With D2D it should still be much faster than doing it in software (assuming we don't recreate the FilterNodes each time).
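
(Roughly, with invented names for the sigmas and the source; the DIRECTIONAL_BLUR attribute spellings are from Filters.h and may differ slightly by revision:)

RefPtr<FilterNode> blurX = dt->CreateFilter(FilterType::DIRECTIONAL_BLUR);
blurX->SetInput(IN_DIRECTIONAL_BLUR_IN, sourceSurface);
blurX->SetAttribute(ATT_DIRECTIONAL_BLUR_DIRECTION, (uint32_t)BLUR_DIRECTION_X);
blurX->SetAttribute(ATT_DIRECTIONAL_BLUR_STD_DEVIATION, sigmaX);

// Chain the second pass off the first, blurring vertically with its own sigma.
RefPtr<FilterNode> blurY = dt->CreateFilter(FilterType::DIRECTIONAL_BLUR);
blurY->SetInput(IN_DIRECTIONAL_BLUR_IN, blurX);
blurY->SetAttribute(ATT_DIRECTIONAL_BLUR_DIRECTION, (uint32_t)BLUR_DIRECTION_Y);
blurY->SetAttribute(ATT_DIRECTIONAL_BLUR_STD_DEVIATION, sigmaY);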

> * Blurring allocates a new surface, rather than blurring in-place on the
> input surface

Or rather, blurring needs a target surface that's different from the input surface. Yes.

> * We have to draw the filter to another temporary so that we can use it as a
> mask

When we use flood + composite filters instead of a mask, we should actually end up with the same number of surfaces as before: One input surface and one surface that contains the colored blur output.
Comment 15 Markus Stange [:mstange] (away until Feb 22) 2013-12-04 04:10:12 PST
(In reply to Markus Stange [:mstange] from comment #13)
> ::: gfx/thebes/gfxBlur.cpp
> @@ +56,4 @@
> >  
> > +        // Even though the filters API creates a new surface for blurring, it uses
> > +        // the input surfaces dimensions, so pad the size out enough to include all
> > +        // pixels that will be blurred.
> 
> This shouldn't be necessary. The blur filter automatically has a larger
> output than input, and it treats anything around the input surface as
> transparent black. Did you see problems without this padding?

The only place where you need to pay attention is the DrawFilter call; the filter render rect needs to include the whole shadow, otherwise it will be cut off. And the target point needs to be shifted down and to the right such that the top left part of the blur is not clipped.
Input surfaces are positioned at (0, 0) in the filter space. Blurring will make the blur extend into the negative ranges, so the filter render rect will need to have a negative TopLeft point.
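
(So the call might end up looking something like this — names invented here, using the usual 3 * sigma cutoff for the blur extent:)

Float extent = 3 * sigma;
// Render rect in filter space: extends into negative coordinates so the
// blur that spills past the input's (0, 0) origin isn't clipped.
Rect renderRect(-extent, -extent,
                Float(inputSize.width) + 2 * extent,
                Float(inputSize.height) + 2 * extent);
// Shift the destination point so the top/left part of the blur stays visible.
Point destPoint = drawPos - Point(extent, extent);
dt->DrawFilter(blurFilter, renderRect, destPoint);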
Comment 16 Matt Woodrow (:mattwoodrow) 2013-12-08 16:05:18 PST
(In reply to Markus Stange [:mstange] from comment #13)

> Looks good, but performance would probably be a lot better if you cached the
> FilterNode. In my experience, with D2D creating the blur filter is much
> slower than the actual blurring. (You'd probably need one cached FilterNode
> per DrawTarget backend that you encounter.)

Should we try to implement this caching inside DrawTargetD2D, then?

Having to have a cache for each consumer of filters is painful, and it's only necessary for direct2d.
Comment 17 Markus Stange [:mstange] (away until Feb 22) 2013-12-08 17:23:27 PST
(In reply to Matt Woodrow (:mattwoodrow) from comment #16)
> (In reply to Markus Stange [:mstange] from comment #13)
> 
> > Looks good, but performance would probably be a lot better if you cached the
> > FilterNode. In my experience, with D2D creating the blur filter is much
> > slower than the actual blurring. (You'd probably need one cached FilterNode
> > per DrawTarget backend that you encounter.)
> 
> Should we try to implement this caching inside DrawTargetD2D, then?

Maybe. I'm not sure yet what this caching should look like. We can't simply have one global instance per filter type because two filters of the same type can be present in the same filter graph.
We could have a recycling FilterNode wrapper that puts unneeded filter nodes back into a central recycling cache in its destructor, for example.
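
(A very rough sketch of such a wrapper; RecyclableFilterNode and FilterNodeCache are invented names, not existing classes:)

class RecyclableFilterNode final
{
public:
  RecyclableFilterNode(DrawTarget* aDT, FilterType aType)
    : mBackend(aDT->GetBackendType())
    , mType(aType)
    , mNode(FilterNodeCache::Take(mBackend, aType))   // try the central cache first
  {
    if (!mNode) {
      mNode = aDT->CreateFilter(aType);
    }
  }
  ~RecyclableFilterNode()
  {
    // Hand the node back to the central cache instead of destroying it.
    FilterNodeCache::Recycle(mBackend, mType, mNode.forget());
  }
  FilterNode* get() const { return mNode; }

private:
  BackendType mBackend;
  FilterType mType;
  RefPtr<FilterNode> mNode;
};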

> Having to have a cache for each consumer of filters is painful, and it's
> only necessary for direct2d.

It will also be necessary for accelerated GL filters, once we have them. And for those it would make sense to bake multiple connected filter nodes into the same shader if possible, and then we'd need to cache the whole filter structure if we want to avoid shader recompilation. Just avoiding calls to CreateFilter in the consumer seems like a simpler solution.
