Closed Bug 664066 Opened 13 years ago Closed 13 years ago

mGLMaxTextureUnits is used uninitialized with OpenGL < 2.0 in WebGLContext::InitAndValidateGL to set array lengths

Categories

(Core :: Graphics: CanvasWebGL, defect)

Platform: x86_64 Linux
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED

People

(Reporter: karlt, Unassigned)

Details

Attachments

(4 files, 5 obsolete files)

GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is only available if the GL version is 2.0
or greater.  Otherwise glGetIntegerv leaves mGLMaxTextureUnits uninitialized.

http://hg.mozilla.org/mozilla-central/annotate/84189d94f01a/content/canvas/src/WebGLContextValidate.cpp#l495
http://www.opengl.org/sdk/docs/man/xhtml/glGet.xml

If we are lucky, there is a "GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is < 8!"
warning and WebGLContext::InitAndValidateGL() fails.  If we are unlucky, the
uninitialized value is used to set the lengths of arrays.
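For illustration, a minimal sketch of the defensive initialization being discussed; it follows the fGetIntegerv/LOCAL_GL_* naming of our GL wrapper, but the surrounding code (LogMessage, the local variable) is an assumption, not the actual WebGLContextValidate.cpp contents:

  // Sketch only: on GL < 2.0 the driver rejects the pname and does not write
  // through the pointer, so an uninitialized variable keeps whatever garbage
  // happened to be there and later becomes an array length.
  GLint maxTextureUnits = 0;   // zero-initialize before querying
  gl->fGetIntegerv(LOCAL_GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTextureUnits);
  if (maxTextureUnits < 8) {
      LogMessage("GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is < 8!");
      return PR_FALSE;         // fail InitAndValidateGL() instead of sizing arrays from garbage
  }
  mGLMaxTextureUnits = maxTextureUnits;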

The reinterpret cast is a bit scary.  It should be OK as all platforms that
we care about have 32-bit ints, but it would be preferable to let the compiler
notice incompatibilities.
Using glGetError to check for a GL error would be a little awkward because
fGetIntegerv may have already reset the error in AfterGLCall().

Further, indirect Mesa doesn't even seem to bother to set GL_INVALID_ENUM.
(Indirect Mesa may well have other issues, so that doesn't necessarily need to
influence how this is fixed.)
(In reply to comment #0)
> GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is only available if the GL version is
> 2.0
> or greater.

But if the GL version is less than 2.0, then we fail to even load the GL library, since we load symbols that are only found in GL >= 2.0, such as glCompileShader.

>  Otherwise glGetIntegerv leaves mGLMaxTextureUnits uninitialized.

So, this 'otherwise' should never happen, but if you want, we can still initialize mGLMaxTextureUnits and friends, just for extra safety.

> The reinterpret cast is a bit scary.  It should be OK as all platforms that
> we care about have 32-bit ints,

GLint is always 32-bit per the OpenGL spec, regardless of the size of int. Here we're just casting from unsigned to signed. I guess we should just let mGLMaxTextureUnits be a GLint, since 31 bits are more than enough here. This is an example of how unsigned types are a useless pain for anything other than bit-fields.
(In reply to comment #1)
> Using glGetError to check for a GL error would be a little awkward because
> fGetIntegerv may have already reset the error in AfterGLCall().

AfterGLCall() is only called in GL debug mode, i.e. when you set MOZ_GL_DEBUG. It then saves the actual GL error code (in mGLError), and GetError is overridden to return that. So AfterGLCall(), and the GL debug mode in general, preserve 100% compliant GL behavior.

Also notice that we do already call GetError at the end of InitAndValidateGL, so if a GL error happened during initialization, we fail the WebGLContext.
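For readers unfamiliar with the debug mode, a rough sketch of the pattern described above (the member and method names are assumptions, not the exact GLContext code):

  // In debug mode, every wrapped GL call is followed by AfterGLCall(), which
  // consumes the driver's error but remembers the first one seen.
  void AfterGLCall(const char* funcName)
  {
      GLenum err = mSymbols.fGetError();   // this clears the driver-side error
      if (mGLError == LOCAL_GL_NO_ERROR)
          mGLError = err;                  // keep the first error for the caller
  }

  // The wrapped fGetError() hands back the saved error, so callers still see
  // standard GL get-and-clear semantics.
  GLenum fGetError()
  {
      GLenum err = mGLError;
      mGLError = LOCAL_GL_NO_ERROR;
      return err;
  }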
(In reply to comment #2)
> (In reply to comment #0)
> > The reinterpret cast is a bit scary.  It should be OK as all platforms that
> > we care about have 32-bit ints,
> 
> GLint is always 32bit, per the OpenGL spec, regardless of int. Here we're
> just casting from unsigned to signed.

Oh! Here I was wrong on both accounts:
 - Per section 2.4 in the GLES 2.0.25 spec, GLint has a _minimum_ size of 32 bits but may be larger. In our code (gfx/thebes/GLDefs.h) we typedef it as int. I think we should make it PRInt32 instead.
 - mGLMaxTextureUnits is actually signed, not unsigned as I thought.
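A tiny sketch of the suggested GLDefs.h change (proposed, not landed code):

  // gfx/thebes/GLDefs.h currently has roughly:  typedef int GLint;
  // Pinning the width explicitly would look like:
  typedef PRInt32 GLint;    // PRInt32 is exactly 32 bits on every supported platform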
Summary: mGLMaxTextureUnits is used uninitialized with OpenGL < 2.0 in WebGLContext::InitAndValidateGL to set array lengths → Some WebGL code robustness improvements
Note that that patch also removes some dead code.
Attached patch initialize some GL values (obsolete) — Splinter Review
Attachment #539212 - Flags: review?(karlt)
Attached patch initialize some GL values (obsolete) — Splinter Review
Attachment #539212 - Attachment is obsolete: true
Attachment #539212 - Flags: review?(karlt)
Attachment #539213 - Flags: review?(karlt)
Sorry, some junk made it into the previous patch.
Attachment #539213 - Attachment is obsolete: true
Attachment #539213 - Flags: review?(karlt)
Attachment #539252 - Flags: review?(karlt)
(In reply to comment #3)
> It then saves the actual GL error code (mGLError) and GetError
> is overridden to return that. So AfterGLCall() and generally the GL debug
> mode preserve 100% compliant GL behavior.

Ah.  Thanks.  I didn't look carefully enough.
Comment on attachment 539211 [details] [diff] [review]
remove useless C casts

Thanks.

I wonder why nsCAutoString::BeginWriting() has the (char*) cast.
Attachment #539211 - Flags: review?(karlt) → review+
(In reply to comment #2)
> But if the GL version is less than 2.0, then we fail to even load the GL
> library, since we load symbols that are only found in GL >= 2.0, such as
> glCompileShader.

libGL.so.1 is just a front end for the implementation.  The presence of
symbols in libGL.so.1 does not indicate support for any particular GL
specification.  The implementation can depend on the environment
(e.g. LIBGL_ALWAYS_SOFTWARE) and can even be on another system.

Perhaps we should be checking for GL 2.0 or the ARB_shader_objects extension
(I guess) before using CreateShader.
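A hedged sketch of such a check (the helper name and the simplistic substring match are mine, for illustration only; a real check should tokenize the extension string):

  #include <cstring>

  // Returns true if shader entry points can be expected: either the reported
  // GL version is >= 2.0 or ARB_shader_objects is advertised.
  static bool HaveShaderSupport()
  {
      const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
      if (version && version[0] >= '2')    // crude: "2.x", "3.x", ... all pass
          return true;

      const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
      return ext && strstr(ext, "GL_ARB_shader_objects");
  }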
Comment on attachment 539252 [details] [diff] [review]
initialize some GL values

This is OK if you want to do this, but for mGLMaxTextureImageUnits and mGLMaxVertexTextureImageUnits this is protection from more than just GL bugs AFAICS, so the comment should be updated.

The use of mPixelStorePackAlignment and mPixelStoreUnpackAlignment needs a
non-zero number, and the initial value of 4 would be sensible.
Attachment #539252 - Flags: review?(karlt) → review+
(In reply to comment #12)
> (In reply to comment #2)
> > But if the GL version is less than 2.0, then we fail to even load the GL
> > library, since we load symbols that are only found in GL >= 2.0, such as
> > glCompileShader.
> 
> libGL.so.1 is just a front end for the implementation.  The presence of
> symbols in libGL.so.1 does not indicate support for any particular GL
> specification.  The implementation can depend on the environment
> (e.g. LIBGL_ALWAYS_SOFTWARE) and can even be on another system.
> 
> Perhaps we should be checking for GL 2.0 or the ARB_shader_objects extension
> (I guess) before using CreateShader.

I didn't realize that. I would like to avoid, as much as possible, relying on GL version numbers that we have to parse from GL strings; likewise, the extension might not be present. Instead I'll make a patch that tests some basic shader-related calls.

(In reply to comment #13)
> Comment on attachment 539252 [details] [diff] [review] [review]
> initialize some GL values
> 
> This is OK if you want to do this, but for mGLMaxTextureImageUnits and
> mGLMaxVertexTextureImageUnits this is protection from more than just GL bugs
> AFAICS, so the comment should be updated.
> 
> The use of mPixelStorePackAlignment and mPixelStoreUnpackAlignment needs a
> non-zero number, and the initial value of 4 would be sensible.

OK. For mPixelStorePackAlignment and mPixelStoreUnpackAlignment, the spec actually defines clearly what their default value is, so we should remove the glGetIntegerv calls for them.
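Concretely, something like the following (a sketch; 4 is the spec-mandated initial value of GL_PACK_ALIGNMENT / GL_UNPACK_ALIGNMENT):

  // Use the spec-defined defaults instead of querying the driver, and drop the
  // corresponding gl->fGetIntegerv(LOCAL_GL_PACK_ALIGNMENT, ...) /
  // gl->fGetIntegerv(LOCAL_GL_UNPACK_ALIGNMENT, ...) calls.
  mPixelStorePackAlignment   = 4;
  mPixelStoreUnpackAlignment = 4;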
This patch:
 - makes us use glXGetProcAddress instead of dlsym to get GL functions, as is in principle necessary (but in practice since we only use GL1 functions here, dlsym just worked)
 - checks that some GL2 constant exists: GL_MAX_VARYING_VECTORS
 - checks for GL errors
Attachment #539610 - Flags: review?(karlt)
Forgot to check that the glXGetProcAddress pointer was non-null.
Attachment #539610 - Attachment is obsolete: true
Attachment #539610 - Flags: review?(karlt)
Attachment #539611 - Flags: review?(karlt)
This adds a

glXMakeCurrent(dpy, None, NULL);

before destroying the context, to avoid false positives due to the phenomenon described in bug 659842 comment 76.

Also, only write into the pipe at the very end, after all error checking is done.
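In other words, the teardown order in glxtest.cpp ends up roughly like this (variable names are illustrative):

  glXMakeCurrent(dpy, None, NULL);           // release the context before destroying it
  glXDestroyContext(dpy, context);
  XCloseDisplay(dpy);
  write(write_end_of_the_pipe, buf, length); // report only after all checks have passed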
Attachment #539641 - Flags: review?(karlt)
Question: is the XSync call (in glxtest.cpp) still useful, or does XCloseDisplay do that for us?

If it's still useful, then I guess it should be moved to just before the XCloseDisplay.
Comment on attachment 539588 [details] [diff] [review]
initialize some GL values, updated

Is this the patch you want reviewed?

(In reply to comment #14)
> for mPixelStorePackAlignment and mPixelStoreUnpackAlignment, actually
> the spec defines clearly what their default value is, so we should remove
> the glGetIntegerv calls for them.

Looks fine to me.

And perhaps GL_CURRENT_PROGRAM really is a GLuint (I don't know), but the rest, at least, looks like a mismerge.
Comment on attachment 539611 [details] [diff] [review]
use glXGetProcAddress, and check some GL2 constant

>+  typedef void* (* PFNGLXGETPROCADDRESS) (const char*);
>+  PFNGLXGETPROCADDRESS glXGetProcAddress = cast<PFNGLXGETPROCADDRESS>(dlsym(libgl, "glXGetProcAddress"));
> 
>   if (!glXQueryExtension ||

>+      !glXGetProcAddress)
>   {
>-    fatal_error("Unable to find required symbols in libGL.so.1");
>+    fatal_error("Unable to find required GLX symbols in libGL.so.1");
>   }

This requires GLX 1.4.

GLXLibrary also considers glXGetProcAddressARB, apparently to support GLX 1.3.
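A GLX-1.3-friendly lookup could fall back to the ARB entry point, along these lines (sketch only, reusing the patch's cast<> helper):

  typedef void* (* PFNGLXGETPROCADDRESS) (const char*);
  PFNGLXGETPROCADDRESS getProcAddress =
      cast<PFNGLXGETPROCADDRESS>(dlsym(libgl, "glXGetProcAddress"));        // GLX 1.4
  if (!getProcAddress)
      getProcAddress =
          cast<PFNGLXGETPROCADDRESS>(dlsym(libgl, "glXGetProcAddressARB")); // GLX 1.3 / ARB
  if (!getProcAddress)
      fatal_error("Unable to find glXGetProcAddress(ARB) in libGL.so.1");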

>+  ///// Get some OpenGL >= 2.0 constant to make sure that we have OpenGL >= 2.0.
>+  ///// That's better than trying to parse version strings here.
>+  GLint maxVaryingFloats = 0;
>+  glGetIntegerv(GL_MAX_VARYING_FLOATS, &maxVaryingFloats);
>+  if (glGetError() || !maxVaryingFloats)
>+    fatal_error("GL_MAX_VARYING_FLOATS doesn't exist or is zero, this seems to be OpenGL 1");

I'm not comfortable with this test.

I'm not sure this guarantees OpenGL >= 2.0.

I'm not sure it should be requiring 2.0.
Perhaps WebGL requires OpenGL >= 2.0, but this test would also affect OpenGL
layers.  Radeon chips up to R500 don't support ARB NPOT textures fully, and so
don't support 2.0 in hardware.  ISTR we have code to handle such hardware.
I don't think classic drivers provide support for NPOT textures.  I think
gallium drivers do, so requiring 2.0 would effectively require gallium drivers
on such hardware.

Is GL_MAX_VARYING_FLOATS a parameter necessary for use of shaders?
If so, then label the test as a test for shader support rather than a test for
2.0.  But I don't see us using this parameter, so I'm not clear that is
necessary for shaders.

Do we really want to demand shader support to have OpenGL layers?
Should the test instead be in WebGL code?

Does one of the parameter checks (with the new initialization) already test
for shader support?
Comment on attachment 539641 [details] [diff] [review]
properly destroy context in glxtest and only write to the pipe at the end

You've hijacked the bug I filed!
Please no more patches for different issues in this bug.

glXMakeCurrent(dpy, None, NULL) looks good but, if you want to use an existing bug, it belongs with the corresponding change in ~GLContextGLX.

Moving "write(write_end_of_the_pipe, buf, length)" is a separate issue and deserves at least a separate patch.  I would have thought we want to send the GL implementation/version information as soon as possible because it may be useful even if the glxtest fails.
(In reply to comment #19)
> Question: is the XSync call still useful? (in glxtest.cpp) or does the
> XCloseDisplay do that for us?

XCloseDisplay does that for us.  I see no need for it.
(In reply to comment #21)
> Comment on attachment 539611 [details] [diff] [review] [review]
> use glXGetProcAddress, and check some GL2 constant
> (snip)
> This is requiring GLX 1.4.

Oops, I don't want to do that. I'm dropping this patch; instead I'll do a GL version check in GfxInfoX11.cpp. After all, it's similar to, and easier than, the driver version parsing that we're already doing.

> I'm not comfortable with this test.
> 
> I'm not sure this guarantees OpenGL >= 2.0.

Indeed this test could be satisfied by a GL1 implementation implementing the right set of extensions.

> 
> I'm not sure it should be requiring 2.0.
> Perhaps WebGL requires OpenGL >= 2.0, but this test would also affect OpenGL
> layers.

GL Layers require OpenGL >= 2.1 already. For example, GL layers need to do BGRA->RGBA conversion, which is trivial with a shader but impossible in GL1, at least without certain extensions.
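For concreteness, the kind of shader meant here is just a swizzle (illustrative GLSL embedded as a string, not the actual layers code):

  // Sample a texture whose bytes are BGRA-ordered and emit RGBA by swizzling;
  // GL1 fixed-function texturing cannot do this without extra extensions.
  const char* kBGRAToRGBAFragmentShader =
      "uniform sampler2D uTexture;\n"
      "varying vec2 vTexCoord;\n"
      "void main() {\n"
      "  gl_FragColor = texture2D(uTexture, vTexCoord).bgra;\n"
      "}\n";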

> don't support 2.0 in hardware.  ISTR we have code to handle such hardware.

Do we?
Attachment #539611 - Attachment is obsolete: true
Attachment #539611 - Flags: review?(karlt)
Comment on attachment 539641 [details] [diff] [review]
properly destroy context in glxtest and only write to the pipe at the end

OK. I'm moving the

+  ///// possible. Also we want to check that we're able to do that too without generating X errors.
+  glXMakeCurrent(dpy, None, NULL); // must release the GL context before destroying it

to the other patch, and dropping the rest of this patch.

Sorry about the bug hijacking.
Attachment #539641 - Flags: review?(karlt)
Oops, sorry about the junk in the previous patch.
Attachment #539588 - Attachment is obsolete: true
Attachment #539588 - Flags: review?(karlt)
Attachment #540062 - Flags: review?(karlt)
Comment on attachment 540062 [details] [diff] [review]
initialize some GL values, updated again

>+    // initialize some GL values: we're going to get them from the GL and use them as the sizes of arrays,
>+    // so in case glGetIntegerv leaves them uninitialized because of a GL bug, we would have very weird crashes.

I agree this is a sensible thing to do.
With current code, a GL bug is not the only cause, so "because of a GL bug" would be better removed.
Can we have this within 80 columns?
Bugzilla diffs, for one example, are typically side-by-side, and get hard to read when wrapping happens.
Attachment #540062 - Flags: review?(karlt) → review+
This is only about NPOT textures. If we had OpenGL 1 support, we'd also, and most importantly, have rendering paths that use the fixed-function pipeline instead of shaders.
> This patch initializes by zero the GL max values before we query them, just in case a buggy OpenGL implementation's glGetIntegerv function would fail to set them.

Review comment 27 was ignored.
Summary: Some WebGL code robustness improvements → mGLMaxTextureUnits is used uninitialized with OpenGL < 2.0 in WebGLContext::InitAndValidateGL to set array lengths
Sorry about that!