Closed Bug 186293 Opened 20 years ago Closed 8 years ago
Fix up GFX interfaces
This is sort of a tracking bug, but I'm reserving the right to stow patches here if convenient :-).

nsIDrawingSurface is useless; no one uses it. nsDrawingSurface is stupid because it doesn't have any methods; the interfaces to it are mostly in nsIRenderingContext. Let's make nsIDrawingSurface a real interface to offscreen drawing surfaces, and get rid of nsDrawingSurface.

Creation of rendering contexts and drawing surfaces is pretty horked. It makes no sense to create a rendering context from scratch, and in fact this leads to serious bugs (see bug 162024). Also, there is no real need for rendering contexts to change their drawing surfaces; it's simpler and easier (on the Mac, for example) to force a rendering context to be associated with one drawing surface for its whole lifetime. Thus a rendering context should always be associated with some drawing target: an offscreen pixmap, a widget, or a print job. I believe the only ways to create rendering contexts should be via a method on nsIWidget, a method on nsIDrawingSurface, or a method on nsIPrintingContext. (Also, one can be handed to views/layout in an NS_PAINT event.) SelectOffscreenDrawingSurface and friends should go away. Similarly, offscreen drawing surfaces should be created via a single method in nsIDeviceContext.

=======

Double-buffering management should be moved deeper into GFX. Ideally views wouldn't worry about double buffering at all; the rendering context passed in the NS_PAINT event would already be set up for double buffering. We'd need methods in nsIRenderingContext to disable double buffering and to say when we're ready to flip. This would really help with targets like OpenGL that can support their own double buffering, and would eliminate GetBackbuffer and friends from nsIRenderingContext. It could also simplify some platform code, like nsRenderingContextGTK, which has to track two drawing surfaces per rendering context.

=======

Coordinates need to be fixed up (bug 177805).
This should include making all APIs take "app units" in integers so that GFX does all scaling. This includes coordinates in regions! Currently SetClipRect is in app units and SetClipRegion is in device pixels. Gaaaah! *Maybe* GFX should take regions in the form of nsRegion and convert to internal regions when necessary. nsIRegion is a horrible interface, but nsRegion is nice and clean. Furthermore, some nsIRegion implementations really suck (16-bit coordinates). Font sizes should be in app units too.

=======

We can de-COMify and standardize a lot of the API signatures. For example, we should pass by reference using const references, not "foo*". We should return real results where that makes sense instead of using NS_IMETHOD.

=======

We need to add APIs for platform alpha blending so that nsIBlender turns into a method or two in nsIDrawingSurface or nsIRenderingContext. This could speed up translucency tremendously on platforms with the right support.

=======

That's all I can think of off the top of my head.
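To make the de-COMification point concrete, here is a minimal sketch of what a cleaned-up signature might look like, contrasted with the COM style; the class and method names here are illustrative only, not the actual Mozilla API:

```cpp
#include <cassert>

struct nsRect { int x, y, width, height; };

// COM-style signature, status code plus out-parameter:
//   NS_IMETHOD GetClipRect(nsRect* aResult);
//
// De-COMified sketch: pass by const reference, return real results.
class RenderingContext {
public:
    // Takes the rect by const reference instead of a raw pointer,
    // and reports success with a plain bool instead of nsresult.
    bool SetClipRect(const nsRect& aRect) {
        mClip = aRect;
        return true;
    }

    // Returns the value directly instead of NS_IMETHOD + out-param.
    nsRect GetClipRect() const { return mClip; }

private:
    nsRect mClip{0, 0, 0, 0};
};
```

The caller-side win is that results compose directly (`rc.GetClipRect().width`) instead of every call site declaring an out-variable and checking an nsresult.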
> Let's make nsIDrawingSurface a real interface to offscreen
> drawing surfaces, and get rid of nsDrawingSurface.

Why do you need a different interface for an onscreen vs. an offscreen drawing destination? On Mac at least, the distinction between offscreen and onscreen drawing is blurry, and varies depending on whether the OS does its own window buffering. I think the device context should be the thing that "knows" about the actual drawing destination (screen/offscreen/printer).

I have one thing to add to your list (which we discussed on IRC): the API that wraps a specific drawing destination (nsIRC or nsIDS, depending on how the above resolves) needs entry points that specify that drawing is about to begin, and that drawing is done. In other words, we need to be explicit about when drawing starts and stops for a given destination. This will allow for optimization in the gfx code. We should either allow nesting of start/stops and maintain a stack of drawing targets, or we should explicitly disallow nested Starts by asserting.
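The "maintain a stack of drawing targets" option could look roughly like this minimal sketch; all names here are hypothetical, and the platform-specific work is only hinted at in comments:

```cpp
#include <cassert>
#include <vector>

// Stand-in for whatever object wraps a drawing destination
// (widget, offscreen buffer, printer).
struct DrawingTarget { int id; };

// Nested Start/Stop pairs are allowed; drawing always goes to the
// innermost active target, and Stop must match the most recent Start.
class DrawingState {
public:
    void StartDrawing(DrawingTarget* aTarget) {
        // platform work would go here: save the outer target's
        // clip region, set up the port/GC for aTarget, etc.
        mStack.push_back(aTarget);
    }

    void StopDrawing(DrawingTarget* aTarget) {
        assert(!mStack.empty() && mStack.back() == aTarget &&
               "Start/Stop calls must nest properly");
        mStack.pop_back();
        // platform work: restore the outer target's clip region.
    }

    DrawingTarget* CurrentTarget() const {
        return mStack.empty() ? nullptr : mStack.back();
    }

private:
    std::vector<DrawingTarget*> mStack;
};
```

The alternative (disallowing nesting outright) replaces the stack with a single boolean and an assertion in StartDrawing.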
An offscreen drawing surface supports extra features:
-- copying (or blending) its contents to another nsIRenderingContext (currently achieved using SelectDrawingSurface+CopyOffscreenBits)
-- retrieving the actual pixel values drawn (equivalent to our current Lock/Unlock) (although really we should be moving any logic that requires actual pixel values, such as blending, down into GFX)

In general we can't do these operations for a nsIRenderingContext that's backed by a widget or a printer. Also, given that we have nsIWidget and nsIPrintingContext objects representing other rendering targets, it seems logical to have objects representing offscreen buffers. This makes resource management clearer too, especially when we want to cache and otherwise reuse buffers for different rendering operations.

> I think the device context should be the thing that "knows" about the
> actual drawing destination (screen/offscreen/printer).

Did you mean "rendering context"? If so then we're in agreement.

> That API that wraps a specific drawing destination (nsIRC or nsIDS, depending
> on how the above resolves) needs API entry points that specify that drawing is
> about to begin, and that drawing is done.

My idea is that drawing starts when someone calls nsIDrawingSurface::CreateRenderingContext(). It stops when that context is destroyed. (There is no need to allow multiple rendering contexts to access the same drawing surface at the same time.)
You need to be very careful with X. A drawing surface must be created in the context of a specific rendering context. This is because the drawing surface must be located on the same physical display as the rendering context (which is usually created with a specific window in mind). Also, the depth of the drawing surface must match that of any windows it will copy data to or from. This is one of the reasons why the two have been so closely linked in the past.
> > I think the device context should be the thing that "knows" about the
> > actual drawing destination (screen/offscreen/printer).
>
> Did you mean "rendering context"? If so then we're in agreement.

I actually meant device context. I was thinking that the RC could delegate all device-specific functions to the DC. The RC "knows" the destination in the sense that it has a DC, but doesn't have to run different code for different types of devices. However, I haven't looked at device-specific code enough to say if this is realistic.

> My idea is that drawing starts when someone calls
> nsIDrawingSurface::CreateRenderingContext(). It stops when that context is
> destroyed. (There is no need to allow multiple rendering contexts to access
> the same drawing surface at the same time.)

I don't think this is enough. For one thing, you'll only have an nsIDrawingSurface around when rendering to an offscreen buffer, but on Mac, onscreen rendering still needs to do Start/Stop calls. Secondly, relying on destruction order for cleanup is not good; someone just has to change an addref/release pattern for things to get messed up. I think Start/StopDrawing methods need to go on nsIRenderingContext.
blizzard: sure. Isn't it enough to just associate the drawing surface with a device context? That's why I want to create drawing surfaces with nsIDeviceContext::CreateOffscreenDrawingSurface or somesuch.

> someone just has to change an addref/release pattern for things to get messed
> up

We shouldn't be keeping around long-lived references to nsIRenderingContexts. Maybe they shouldn't even be refcounted. Where we are keeping rendering contexts around, we should probably be holding nsIDeviceContexts instead. Anyway, I guess BeginRendering/EndRendering methods on nsIRenderingContext would be nice and explicit, and would also help with double-buffering.
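A sketch of what explicit BeginRendering/EndRendering might look like, paired with an RAII guard so the End call (and with it, the double-buffer flip) cannot be forgotten even on early returns. All names are invented; the flip counter merely stands in for presenting the backbuffer:

```cpp
#include <cassert>

class RenderingContext {
public:
    void BeginRendering() {
        assert(!mActive && "nested BeginRendering not allowed");
        mActive = true;
    }
    void EndRendering() {
        assert(mActive && "EndRendering without BeginRendering");
        mActive = false;
        ++mFlips;  // stand-in for "present the backbuffer now"
    }
    bool IsActive() const { return mActive; }
    int  FlipCount() const { return mFlips; }

private:
    bool mActive = false;
    int  mFlips = 0;
};

// Scope guard: rendering is bracketed by construction/destruction,
// so cleanup does not depend on anyone's addref/release discipline.
class AutoRender {
public:
    explicit AutoRender(RenderingContext& aRC) : mRC(aRC) {
        mRC.BeginRendering();
    }
    ~AutoRender() { mRC.EndRendering(); }

    AutoRender(const AutoRender&) = delete;
    AutoRender& operator=(const AutoRender&) = delete;

private:
    RenderingContext& mRC;
};
```

This directly addresses the earlier objection about relying on destruction order: the guard's scope, not an addref/release pattern, determines when drawing ends.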
As long as we always use windows that use the same visuals, sure.
I suggest that we use the QueryInterface mechanism to support "advanced" device and rendering contexts... nsIRenderingContext would include the "basic" drawing methods supported by all DCs, including PostScript, and could QI to nsIBitmapRenderingContext for advanced painting such as transparency.

> We can de-COMify and standardize a lot of the API signatures. For example, we

I suggest that we should NOT de-COMify GFX interfaces that might be useful to script in the future... nsIRenderingContext, in particular. AFAIK we currently don't do rendering from script, but if we ever wanted to support script-implemented plugins (in XUL, for example), having a (partially) scriptable nsIRenderingContext would be necessary.
Benjamin Smedberg wrote:
> I suggest that we use the QueryInterface mechanism to support "advanced"
> device and rendering contexts... nsIRenderingContext would include the "basic"
> drawing methods supported by all DC's including PostScript, and could QI to
> nsIBitmapRenderingContext for advanced painting such as transparency.

|QueryInterface| looks like overkill and may add more headaches than necessary. I think it may be more efficient to create a new method which returns a value representing a set of flags, one flag for each capability ("supports alpha blending", "supports alpha mask", "supports offscreen rendering surfaces", etc.).
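The capability-flags alternative is easy to sketch: one bit per capability, returned by a single query method on an initialized device context. The flag names, values, and class are invented here purely for illustration:

```cpp
#include <cassert>
#include <cstdint>

// One bit per capability; a device context reports its whole
// capability set in a single call.
enum CapabilityFlags : uint32_t {
    CAP_ALPHA_BLENDING     = 1u << 0,
    CAP_ALPHA_MASK         = 1u << 1,
    CAP_OFFSCREEN_SURFACES = 1u << 2,
};

class DeviceContext {
public:
    // In a real implementation the flags would be computed during
    // Init(), once the concrete backend (screen, printer, PDF) is known.
    explicit DeviceContext(uint32_t aCaps) : mCaps(aCaps) {}

    uint32_t GetCapabilityFlags() const { return mCaps; }
    bool Supports(uint32_t aFlag) const { return (mCaps & aFlag) != 0; }

private:
    uint32_t mCaps;
};
```

Callers then branch on a cheap bit test instead of QI'ing for a side interface or probing with dummy calls, e.g. a hypothetical PDF backend that blends but has no offscreen surfaces.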
I want an nsIDrawingSurface for offscreen buffers because the buffer is logically different from the rendering context, just like a widget is logically different from a rendering context.

We will need an nsIRenderingContext2 at some point (to support SVG and similar stuff --- e.g. paths). QI'ing for that would make sense. Capabilities flags also make sense, although much of the time just detecting NS_ERROR_UNIMPLEMENTED is good enough (e.g., when we try to create an offscreen buffer for a PS device context).

If you want scriptable rendering then we should have a separate nsIScriptableRenderingContext, which we can freeze, which delegates to nsIRenderingContext. I definitely don't think we should freeze nsIRenderingContext *ever*, and I don't think we should eat the XPCOM overhead for all our rendering.
Robert O'Callahan wrote:
> Capabilities flags also make sense, although much of the time just detecting
> NS_ERROR_UNIMPLEMENTED is good enough (e.g., when we try to create an
> offscreen buffer for a PS device context).

Trial-and-error testing using dummy calls and checking for NS_ERROR_UNIMPLEMENTED should be avoided, since it costs time and may cause further problems. We can't assume that a specific type of device, like a printer, does not support offscreen surfaces (PCL5 drivers and Xprint on Unix/Linux support them), and the same issues exist with alpha blending and alpha masks.
And a (future) PDF driver would support alpha blending, but not offscreen drawing surfaces. The capabilities matrix can get quite complex.
email@example.com wrote:
> And a (future) PDF driver would support alpha blending, but not offscreen
> drawing surfaces.

If I recall correctly, not all PDF versions support alpha blending, so this depends on the PDF version being created (which means we can't make the |GetCapabilityFlags| method a |static| one; first we have to create and |Init| a device context instance before we can query it... ;-/).

> The capabilities matrix can get quite complex

yeah... =:-)
20 years ago
Depends on: 193849
> Anyway I guess BeginRendering/EndRendering methods on nsIRenderingContext would
> be nice and explicit and also help with double-buffering.

I'm debugging a clipping problem in an embedding app, and I've run into a case where the lack of begin/end drawing calls is preventing me from fixing a serious bug. The problem is that the rendering context code on Mac messes with the clipping region (which, if you recall, can interfere with drawing anywhere in the window, since Mac doesn't have the benefit of the native window sandbox). Worse, this happens in reflow, which in turn happens in PLEvent handling. This means that we have little control over when it occurs. Because the RC has no begin/end calls, I have no place where I can save/restore the clip region so that we don't mess with it for the embedder. Suckage.
13 years ago
Assignee: roc → nobody
Most of the discussion on this tracking bug appears to be over a decade out of date; the graphics layer has been through multiple major overhauls since, and there's only one remaining open dependency which is *itself* a tracking bug. I don't think there's much point keeping this one open any longer.
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED