It would make sense for gradient backgrounds to be drawn by the GPU where possible. Radial gradients especially are very expensive, but could be drawn (somewhat) trivially on the GPU. We could do this in several ways, including:

Layout side:
1- Add an nsDisplayGradient to handle this specifically
2- Add code to special-case gradients in nsDisplayBackground

Gfx side:
1- Extend ColorLayer to be able to draw gradients
2- Extend ImageLayer to be able to draw gradients
3- Add a new GradientLayer type

Shader side:
1- Write a general-purpose gradient shader that takes stops/colours via uniforms
2- Generate geometry and use simple vertex colours

I prefer 2, 3, 2 - I see this as the simplest and least intrusive way of doing it. Initially, I'd see this supporting linear gradients (which are much easier) and then extending to radial gradients (which will be slightly more difficult, with support for offsets and multiple stops).
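As a sketch of what shader-side option 1 would compute per pixel, here is a CPU model in Python. The function name and the (offset, colour) stop representation are illustrative only, not anything in the tree; a real implementation would do this in a fragment shader with the stops passed in via uniforms.

```python
def sample_gradient(t, stops):
    """Return the colour at position t in [0, 1] for a sorted list of
    (offset, (r, g, b, a)) stops -- the per-pixel work a general-purpose
    gradient shader would do with stop data supplied as uniforms."""
    t = max(0.0, min(1.0, t))  # clamp outside the gradient's range
    if t <= stops[0][0]:
        return stops[0][1]
    if t >= stops[-1][0]:
        return stops[-1][1]
    # Find the segment containing t and interpolate linearly within it.
    for (o0, c0), (o1, c1) in zip(stops, stops[1:]):
        if o0 <= t <= o1:
            f = (t - o0) / (o1 - o0) if o1 > o0 else 0.0
            return tuple(a + (b - a) * f for a, b in zip(c0, c1))

# For a linear gradient, t is the pixel's projection onto the gradient
# line; for a radial gradient, t is its distance from the centre divided
# by the radius.
```

The same routine covers both gradient kinds; only the computation of t differs.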
DrawTargetD2D has D3D10 shaders for radial gradients. Can mobile GPUs actually run radial gradients fast? Is there an urgent need for this? There are a few things that might pay off more:
1) Use general accelerated 2D drawing in more places, e.g. Skia's GL support.
2) Extend the "cache background in a layer" code to layerize CPU-rendered gradients as well as scaled images.
3) Concentrate resources on nrc et al.'s layers refactoring so the complexity of adding new layer types is reduced.
So :jwatt just pointed out that I never actually followed up on this bug, though I'm sure there was a lot of offline conversation at the time...

I believe that mobile GPUs can run gradients fast, though it depends on the method you employ (which will depend on the desired accuracy).

I'd expect that the fastest way to do radial gradients, at least at low resolutions/on small screens, would be to just send geometry over and use vertex colours. The slow part would be the geometry upload, but that could be done once and shared. This only scales so far, though, as you'd want a high enough density of vertices that you wouldn't see obvious quantisation. It also doesn't scale well when you have multiple stops.

The most accurate method, I think, and the one that would scale well, would be to use a pixel shader and a clamped distance function that indexes into an array of stops (you could well do this with a 1D, 1-pixel-tall texture, which would save you having to compile multiple shaders for different numbers of stops). This is also probably the easier method to implement, but how fast it is may vary quite wildly.

Both of these methods would be hugely more memory-efficient than rendering in software and uploading to a texture, but unless the gradient is changing frequently, they may not compare well speed-wise (there may be surprisingly little in it, though).

Unless someone wants to take this on, I expect we'll get round to comment #1, point 1 before exploring this, and that's probably a better use of resources anyway.
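To make the pixel-shader method concrete, here is a small Python model of its two halves: pre-sampling the stops into a 1D table (standing in for the 1-pixel-tall texture) and the per-pixel clamped-distance lookup. All names, the table width, and the stop representation are illustrative assumptions, not code from the tree.

```python
import math

def build_stop_texture(stops, width=256):
    """Pre-sample sorted (offset, colour) stops into a flat table.
    This is the stand-in for the 1-pixel-tall texture: it lets one
    shader handle any number of stops."""
    table = []
    for i in range(width):
        t = i / (width - 1)
        if t <= stops[0][0]:
            table.append(stops[0][1])
            continue
        if t >= stops[-1][0]:
            table.append(stops[-1][1])
            continue
        for (o0, c0), (o1, c1) in zip(stops, stops[1:]):
            if o0 <= t <= o1:
                f = (t - o0) / (o1 - o0) if o1 > o0 else 0.0
                table.append(tuple(a + (b - a) * f for a, b in zip(c0, c1)))
                break
    return table

def radial_pixel(x, y, cx, cy, radius, table):
    """Per-pixel work the shader would do: distance from the centre,
    normalised by the radius, clamped, then used as a texture coordinate."""
    d = math.hypot(x - cx, y - cy) / radius
    d = max(0.0, min(1.0, d))  # clamp, like GL_CLAMP_TO_EDGE
    return table[int(d * (len(table) - 1))]
```

The table is built once per gradient; only the cheap distance/lookup runs per pixel, which is why the stop count doesn't affect per-pixel cost.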