Closed Bug 181917 Opened 22 years ago Closed 1 month ago

We don't need gfx_format & gfxIFormats

Component: Core Graveyard :: Image: Painting
Type: defect
Priority: Not set
Severity: minor
Tracking: Not tracked
Status: RESOLVED INCOMPLETE
Reporter: paper
Assignee: paper
Right now, all (or at least most of) the decoders create a local gfx_format variable, set it
to gfxIFormats::BGR* when #if defined(XP_PC) || defined(XP_BEOS) ||
defined(MOZ_WIDGET_PHOTON) holds, or to gfxIFormats::RGB* otherwise. The decoder
then calls gfxIImageFrame::Init and passes that format in, and gfxIImageFrame::Init
stores the format.

This is backwards. The nsImage* implementations (which are platform-dependent) already know
what format they need.

I suggest the following:
1) Change gfxIImageFrame::Init from:

   Init(nscoord aX, nscoord aY, nscoord aWidth, nscoord aHeight,
        gfx_format aFormat, gfx_depth aDepth);

   to:

   Init(nscoord aX, nscoord aY, nscoord aWidth, nscoord aHeight,
        gfx_depth aDepth, gfx_depth aAlphaDepth);
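
   For illustration, a decoder call site would then change roughly like this (a
   minimal sketch; mFrame, mWidth, and mHeight are hypothetical decoder members,
   and the depth values are only examples):

   // before: the decoder picks the format/byte order itself
   mFrame->Init(0, 0, mWidth, mHeight, format, 24);

   // after: the decoder only states the depth and alpha depth;
   // the platform nsImage* decides the byte order
   mFrame->Init(0, 0, mWidth, mHeight, 24, 8);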
=========
2) Remove the alpha-depth part from gfxIFormats, so all that's left would be RGB and BGR.
=========
3) Create a #define somewhere in a platform-dependent file (nsImage*.h, for
example) like this:
   #define GFX_USEFORMAT gfxIFormats::RGB
   or
   #define GFX_USEFORMAT gfxIFormats::BGR
   depending on the platform.
=========
4) Change the code in the decoders so that it checks GFX_USEFORMAT and writes
pixel data in the order it specifies (instead of choosing the order based on
"#if defined(XP_PC) || defined(XP_BEOS) || defined(MOZ_WIDGET_PHOTON)").
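
   For example, the per-pixel write in a decoder could key off GFX_USEFORMAT
   instead of the platform #ifdefs (a rough sketch, not actual decoder code;
   rowBuf, r, g, and b are hypothetical locals, and the compiler folds the
   constant condition away):

   if (GFX_USEFORMAT == gfxIFormats::BGR) {
     rowBuf[0] = b; rowBuf[1] = g; rowBuf[2] = r;
   } else {  // gfxIFormats::RGB
     rowBuf[0] = r; rowBuf[1] = g; rowBuf[2] = b;
   }
   rowBuf += 3;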
=========
5) Change any code that calls GetFormat in order to get the alpha depth to use
nsIImage::GetAlphaDepth instead.
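
   For instance, a caller that currently does something along these lines (a
   hedged sketch; the surrounding code and exact signatures are assumptions, not
   quotes from the tree):

   gfx_format format;
   frame->GetFormat(&format);
   PRBool hasAlpha = (format == gfxIFormats::RGB_A8 ||
                      format == gfxIFormats::BGR_A8);

   would instead ask the platform image directly:

   // assuming nsIImage::GetAlphaDepth() returns the alpha depth
   PRBool hasAlpha = (img->GetAlphaDepth() > 0);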


There's also the matter of imgIContainer::GetPreferredAlphaChannelFormat: it's
not used, doesn't make sense, and should be removed. I'm not sure whether that
belongs in this bug or a new one.
QA Contact: tpreston → image.gfx
This can now be taken up, because Cairo reduced the number of actual formats used to two anyway: ARGB32 and RGB24.
So we always have a 32-bit depth (for both ARGB32 and RGB24 the pixels are 32 bits wide), and alphaDepth is therefore always either 0 or 8.
All the different format encodings between gfxImageFrame, nsThebesImage, gfxImageSurface, and _cairo_format_bpp make things confusing.

I would propose reducing to two actual formats, ARGB32 and RGB24, replacing the current depth and format. This makes it clear that only those formats are supported, while still leaving room to add new formats later.

This would make format handling much simpler, and would make GetPreferredAlphaChannelFormat redundant.

Note that all the image decoders already assume that only ARGB32 and RGB24 are used, all with 32-bit pixels.
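
As an illustration of the proposed reduction (the names below are hypothetical,
only meant to show the shape of the change):

   // only two real in-memory layouts would remain
   enum gfxImageFormat {
     ImageFormatARGB32,   // 32 bits per pixel, 8-bit alpha
     ImageFormatRGB24     // 32 bits per pixel, alpha byte unused
   };

   // alphaDepth follows directly from the format (ARGB32 -> 8, RGB24 -> 0),
   // so a separate gfx_format/gfx_depth pair is no longer needed.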


Pavlov and others, what is your opinion on this?
A simple approach is to first remove the unused formats (BGR, BGRA, RGBA, BGR_A1, and BGR_A8), so that only RGB, RGB_A1, and RGB_A8 remain.
Note that Cairo doesn't actually distinguish between A1 and A8, so RGB_A1 can be removed as well.

So, delete all formats except RGB and RGB_A8, and worry about renaming/restructuring the API later.
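
A trimmed gfxIFormats would then look roughly like this (a sketch, not the
actual IDL):

   // gfxIFormats (sketch)
   const gfx_format RGB    = 0;  // no alpha channel
   const gfx_format RGB_A8 = 1;  // 8-bit alpha channel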
In this area of the format code, there is a warning that could be cleaned up:
/builds/tinderbox/SeaMonkey-Trunk/Linux_2.6.9-42.ELsmp_Depend/mozilla/gfx/src/shared/gfxImageFrame.cpp: In member function ‘virtual nsresult gfxImageFrame::Init(PRInt32, PRInt32, PRInt32, PRInt32, gfx_format, gfx_depth)’:
/builds/tinderbox/SeaMonkey-Trunk/Linux_2.6.9-42.ELsmp_Depend/mozilla/gfx/src/shared/gfxImageFrame.cpp:105: warning: ‘maskReq’ may be used uninitialized in this function
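
This is the usual "only assigned inside a switch" pattern; a minimal sketch of
the kind of fix (the surrounding switch is an assumption, not a quote from
gfxImageFrame.cpp):

   PRBool maskReq = PR_FALSE;      // initialize up front
   switch (aFormat) {
     case gfxIFormats::RGB_A1:
     case gfxIFormats::RGB_A8:
       maskReq = PR_TRUE;
       break;
     default:
       break;                      // maskReq stays PR_FALSE for other formats
   }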
Product: Core → Core Graveyard
Status: NEW → RESOLVED
Closed: 1 month ago
Resolution: --- → INCOMPLETE