Integrate WebGL conformance suite as a mochitest

Status: RESOLVED FIXED (defect; reported 9 years ago, last updated 7 years ago)

People: (Reporter: bjacob, Assigned: bjacob)

Tracking: Trunk; depends on 1 bug; blocking2.0 betaN+

Whiteboard: [january 14] [hardblocker]

Attachments: (10 attachments, 11 obsolete attachments)

25.80 KB, patch (cmtalbert: review+)
32.56 KB, patch (vlad: review+; cmtalbert: review+)
32.65 KB, patch (bjacob: review+)
33.88 KB, patch (bjacob: review+)
33.85 KB, patch (bjacob: review+)
16.95 KB, patch (vlad: review+)
19.50 KB, patch (bjacob: review+)
1.15 KB, patch (bjacob: review+)
1.64 KB, patch (vlad: review+)
17.07 KB, patch (vlad: review+)
The attached patch adds a new mochitest that runs the WebGL conformance suite with default GL and with OSMesa.

It also makes some changes in WebGLContext.cpp:
 - rename the software_render config option to force_osmesa and make it actually take effect;
 - move context-lost-event placeholder a bit

It also improves the output of the test runner so that we know what's going on when a test takes a long time to complete.

Questions:
 - Do you want us to call todo() or ok() for each test page? Currently we only call it once at the very end, i.e. the mochitest only 'knows' whether or not everything succeeded.
 - The OSMesa lib has different names on different platforms. How should we handle that? The current file is just trying libOSMesa.so.6 which is what I need on my system.
Attachment #460320 - Flags: review?(vladimir)
Comment on attachment 460320 [details] [diff] [review]
Integrate WebGL conformance suite as a mochitest

Adding ctalbert to review list for test integration.

Unfortunately a bunch of the stuff in test_webgl_conformance_test_suite.html is not correct -- the prefs should be set by the test runner suite, not by the individual test itself.  Similarly for finding OSMesa -- that'll be set externally to the actual test execution, and is where you need ctalbert's help...
Attachment #460320 - Flags: review?(ctalbert)
A little cleaner patch.

Clint: I am available on IRC at any time to discuss this and I need to ask you questions. My nick is bjacob, I'm in the Toronto timezone.
Attachment #460539 - Flags: review?(vladimir)
Attachment #460539 - Flags: review?(ctalbert)
Attachment #460320 - Attachment is obsolete: true
Attachment #460320 - Flags: review?(vladimir)
Attachment #460320 - Flags: review?(ctalbert)

Comment 3

9 years ago
(In reply to comment #2)
> Created attachment 460539 [details] [diff] [review]
> Integrate WebGL conformance suite as a mochitest
> 
> A little cleaner patch.
> 
> Clint: I am available on IRC at any time to discuss this and I need to ask you
> questions. My nick is bjacob, I'm in the Toronto timezone.
Sorry, yesterday I had to get my bugmail under control.  I'll be taking a look at this review this afternoon (Pacific timezone).  I'm ctalbert on IRC, feel free to fire away either here or in IRC with questions.
Comment on attachment 460539 [details] [diff] [review]
Integrate WebGL conformance suite as a mochitest

This looks great to me, just need to work out the pref details -- also initially we should probably disable the "native GL" test running, since it'll fail on practically all of our test machines for now.
Attachment #460539 - Flags: review?(vladimir) → review+
Well, notice how I called it "default GL", not "native GL", and set up webgl.osmesalib beforehand -- so it will fall back to OSMesa if OSMesa is available and no other GL is available. It's true that it's still not great to have the OSMesa test run twice in that case.

Comment 6

9 years ago
Comment on attachment 460539 [details] [diff] [review]
Integrate WebGL conformance suite as a mochitest

>diff --git a/content/canvas/test/webgl/Makefile.in b/content/canvas/test/webgl/Makefile.in
>+# The Initial Developer of the Original Code is
>+# Mozilla Corporation.
>+# Portions created by the Initial Developer are Copyright (C) 2007
Nit: 2010
>+# the Initial Developer. All Rights Reserved.
>+#
>+# Contributor(s):
Nit: include yourself.

>+DEPTH		= ../../../..
>+topsrcdir	= @top_srcdir@
>+srcdir		= @srcdir@
>+VPATH		= @srcdir@
>+relativesrcdir  = content/canvas/test/webgl
>+
>+include $(DEPTH)/config/autoconf.mk
>+include $(topsrcdir)/config/rules.mk
>+_TEST_FILES_0 = \
You can just use _TEST_FILES since I don't think we'll need to be splitting these up to deal with command line limitations like we have to do on canvas and dom tests.

>+	test_webgl_conformance_test_suite.html \
>+	test_list.txt \
These files ^ are going to cause you headaches. Because test_list.txt conforms to mochitest's definition of a 'test', it will attempt to load your text file and hang.  We have two ways forward here - rename these to testlist.txt or we can file a bug so that mochitest will skip over .txt files.  We'll need to rename the other two test_list.txt files deeper in the directory hierarchy as well, and change the references in the JS code too.

>+
>+# split up into groups to work around command-line length limits
Nit: remove comment, we're not doing that here.
>+libs:: $(_TEST_FILES_0)
>+	$(INSTALL) $(foreach f,$^,"$f") $(DEPTH)/_tests/testing/mochitest/tests/$(relativesrcdir)

Also, since we're not splitting up the blocks of test files, please use the _TEST_FILES variable (as opposed to _TEST_FILES_0) for the mochitest list. The variable name is just a convention but all the other mochitest makefiles use it...

>new file mode 100644
>--- /dev/null
>+++ b/content/canvas/test/webgl/test_webgl_conformance_test_suite.html
>@@ -0,0 +1,338 @@
>+<script>
>+
>+  SimpleTest.waitForExplicitFinish();
>+  //SimpleTest.requestLongerTimeout(10);
>+
>+
>+  var statusElem = document.getElementById("status");
>+  var statusTextNode = document.createTextNode('');
>+  statusElem.appendChild(statusTextNode);
Any reason why we can't put this code down at the bottom of start() after all the function declarations? 

>+  function doRunTests() {
>+    var reporter = new Reporter();
>+    var iframe = document.getElementById("testframe");
>+    var testHarness = new WebGLTestHarnessModule.TestHarness(
>+        iframe,
>+        'test_list.txt',
>+        function(type, msg, success) {
>+            return reporter.reportFunc(type, msg, success);
>+        });
>+    window.webglTestHarness = testHarness;
>+    if (reporter.noWebGL) {
>+        // XXX
Is this "XXX" a carry-over from the webgl test code, or is this a question you have?

>+  Reporter.prototype.reportFunc = function(type, msg, success) {
>+    switch (type) {
>+      case reportType.ADD_PAGE:
>+        return this.addPage(msg);
>+      case reportType.START_PAGE:
>+        return this.startPage(msg);
>+      case reportType.TEST_RESULT:
>+        return this.addResult(msg, success);
>+      case reportType.FINISH_PAGE:
>+        return this.finishPage(success);
>+      case reportType.FINISHED_ALL_TESTS:
>+        todo_is(this.totalFailed(), 0, 'Test failures with ' + GLImplementationName());
>+        todo_is(this.totalTimeouts, 0, 'Tests timeouts with ' + GLImplementationName());
>+        timesTestSuiteAlreadyRun++;
As discussed (see below), I think we want a much more granular reporting here.

So, Benoit and I talked quite a bit about the preferences that need to be set here.  In the end, it just makes the most sense for the webgl test harness to set the preferences itself.  The one oddity is that it will need to set the value of the osMesaLib preference to the name of the library on the system.  This really isn't much different from hardcoding system fonts as we do in the reftests, so I'm ok with that, but we'll need to ensure that if the library is not found, very clear failure messages are logged that point to the error explicitly.  This way, when that library changes, we will quickly understand why the tests go orange.

Another thought I just had -- do we link to that library?  If so, we could possibly detect it at make-time and make it available to the tests through some pre-processing tricks.

I noticed that we already have the original webgl conformance test file in our tree.  The current diff between that file and test_webgl_conformance_test_suite.html is pretty extreme.  Is there a way we can make that simpler, perhaps by changing the order of the code or factoring some of our js into our own files?  I'd like to see the platform library detection stuff of osMesaLib broken out for sure.  I'd like to make it as easy as possible to port these changes to a new version of webgl-conformance-tests.html in the future.

= Logging =
Not sure how familiar you are with mochitest.  The only items that are going to get logged are the items in the ok(), is(), todo() statements.  So, you'll need to make those as granular as possible.  Ideally, I'd like to see a 1:1 ratio between the webgl 'PASS' statements and these mochitest asserts.  Failing that, we need at least a mochitest ok() per webgl test file.  The more granular you can make these, the easier it will be to debug failures in the future.

As you're going to run the webgl tests twice, once with hardware rendering and once with software rendering, I think we want to call out the drawing mode in the mochitest log. You can do that pretty easily if you add a function that queries the current drawing pref and returns a string.  Something like this pseudocode:
function getWebGlLogMsg(aMsg) {
  var prefix = '';
  if (pref == 'hardwaremode')
    prefix = 'native_gl_mode::';
  else
    prefix = 'osmesa_mode::';
  return prefix + aMsg;
}

Then in your mochitest assertions be sure to use it:
ok(false, getWebGlLogMsg('oh no this failed'));

But please choose something more readable than 'getWebGlLogMsg' ;)  But that way, it will be quickly obvious to any developer reading the mochitest log file that we failed in hardware mode versus software mode or vice-versa.

= Future Looking =
I'm looking down the road to getting these tests running on Fennec.  It sounds like we would *not* want to run the tests in "software" mode on Fennec platforms.  If that is indeed the case, I'd like the test harness to read a preference at the beginning of the test that controls whether or not we perform the software rendering sweep of tests.  It should be something like pref('webgl.test.DoNotUseOSMesaLib', 'true'); This way, the harness can add that preference to the fennec testing profile and disable that behavior. If this preference does not exist, then the software rendering tests should be run as normal.  (i.e. the preference will not exist on Firefox).
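Such an opt-out pref could be read with a safe default along these lines. This is only a sketch: the pref name follows the suggestion above, it is treated as a boolean rather than the quoted 'true' string, and the getter is injected instead of going through the privileged webglBranch() helpers in the patch.

```javascript
// Decide whether to run the OSMesa (software rendering) sweep of tests.
// getBoolPref is an injected function that throws when the preference does
// not exist, mirroring how nsIPrefBranch.getBoolPref behaves.
function shouldRunOSMesaSweep(getBoolPref) {
  try {
    // Fennec's testing profile would set this pref to true to skip the sweep.
    return !getBoolPref("webgl.test.DoNotUseOSMesaLib");
  } catch (e) {
    // Pref absent (the Firefox case): run the software sweep as normal.
    return true;
  }
}
```

In the mochitest itself the getter would wrap the prefs branch after enablePrivilege, as the existing set/get helpers in the patch already do.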

Thanks so much for taking the first stab at getting these tests integrated, I really appreciate your work here.  I'm looking forward to your next patch version.  Let me know if you have any questions.
Attachment #460539 - Flags: review?(ctalbert) → review-
Ok; if you're ok with the test setting the prefs itself, I'm ok with it.  Figured you wanted that in a central location.

For osmesa, we don't link with it, and it's not system provided -- it's a DLL/.so that will have to be made available in some known location on the test machines.  So we can put it wherever we need to -- we can even check in the binaries directly into the tree if we have an easy way to get at that while running the tests.
Only answering part of this today; will send the new patch tomorrow:

> You can just use _TEST_FILES since I don't think we'll need to be splitting
> these up to deal with command line limitations like we have to do on canvas and
> dom tests.

ok sure.

> 
> >+	test_webgl_conformance_test_suite.html \
> >+	test_list.txt \
> These files ^ are going to cause you headaches. Because test_list.txt conforms
> to mochitest's definition of a 'test', it will attempt to load your text file
> and hang.  We have two ways forward here - rename these to testlist.txt or we
> can file a bug so that mochitest will skip over .txt files.  We'll need to
> rename the other two test_list.txt files deeper in the directory hierarchy as
> well, and change the references in the JS code too.

Rather than asking Mochitest people to special-case certain filename extensions, it seems easier to ask other WebGL people to just rename these files.

> 
> >+
> >+# split up into groups to work around command-line length limits
> Nit: remove comment, we're not doing that here.
> >+libs:: $(_TEST_FILES_0)
> >+	$(INSTALL) $(foreach f,$^,"$f") $(DEPTH)/_tests/testing/mochitest/tests/$(relativesrcdir)
> 
> Also, since we're not splitting up the blocks of test files, please use the
> _TEST_FILES variable (as opposed to _TEST_FILES_0) for the mochitest list. The
> variable name is just a convention but all the other mochitest makefiles use
> it...

OK

> 
> >new file mode 100644
> >--- /dev/null
> >+++ b/content/canvas/test/webgl/test_webgl_conformance_test_suite.html
> >@@ -0,0 +1,338 @@
> >+<script>
> >+
> >+  SimpleTest.waitForExplicitFinish();
> >+  //SimpleTest.requestLongerTimeout(10);
> >+
> >+
> >+  var statusElem = document.getElementById("status");
> >+  var statusTextNode = document.createTextNode('');
> >+  statusElem.appendChild(statusTextNode);
> Any reason why we can't put this code down at the bottom of start() after all
> the function declarations? 

No particular reason other than "my first lines of javascript", will do.

> 
> >+  function doRunTests() {
> >+    var reporter = new Reporter();
> >+    var iframe = document.getElementById("testframe");
> >+    var testHarness = new WebGLTestHarnessModule.TestHarness(
> >+        iframe,
> >+        'test_list.txt',
> >+        function(type, msg, success) {
> >+            return reporter.reportFunc(type, msg, success);
> >+        });
> >+    window.webglTestHarness = testHarness;
> >+    if (reporter.noWebGL) {
> >+        // XXX
> Is this "XXX" a carry-over from the webgl test code, or is this a question you
> have?

It's carried over from their code.

> 
> >+  Reporter.prototype.reportFunc = function(type, msg, success) {
> >+    switch (type) {
> >+      case reportType.ADD_PAGE:
> >+        return this.addPage(msg);
> >+      case reportType.START_PAGE:
> >+        return this.startPage(msg);
> >+      case reportType.TEST_RESULT:
> >+        return this.addResult(msg, success);
> >+      case reportType.FINISH_PAGE:
> >+        return this.finishPage(success);
> >+      case reportType.FINISHED_ALL_TESTS:
> >+        todo_is(this.totalFailed(), 0, 'Test failures with ' + GLImplementationName());
> >+        todo_is(this.totalTimeouts, 0, 'Tests timeouts with ' + GLImplementationName());
> >+        timesTestSuiteAlreadyRun++;
> As discussed (see below), I think we want a much more granular reporting here.

Yes OK.

> I noticed that we already have the original webgl conformance test file in our
> tree.  The current diff between that file and
> test_webgl_conformance_test_suite.html is pretty extreme.  Is there a way we
> can make that simpler, perhaps by changing the order of the code or factoring
> some of our js into our own files?

It's not really possible, but I don't think it matters, for the following reason: we are already using unmodified:
  - all the html test pages
  - the webgl-test-harness.js

This is more than 99% of the code. The custom test_webgl_conformance_test_suite.html is very small in comparison, and we don't have a big interest in back-porting future changes that upstream could make.

Moreover, the webgl-test-harness.js was designed specifically to allow people like us to write our own test runner, and the 'standard' test runner doesn't have a lot of nontrivial code; it's pretty much just setting up the HTML page that displays the test results.


>  I'd like to see the platform library
> detection stuff of osMesaLib broken out for sure.  I'd like to make it as easy
> as possible to port these changes to a new version of
> webgl-conformance-tests.html in the future.

It's not easy for us to stay close to their code for various reasons, most importantly our need to run the test suite twice (with default GL and with OSMesa). But part of my changes are also generally useful improvements which I'm going to try to contribute to webgl-conformance-tests.html upstream, reducing the diff a little bit.

*** that's all for today, will continue replying tomorrow ***

Comment 9

9 years ago
(In reply to comment #7)
> Ok; if you're ok with the test setting the prefs itself, I'm ok with it. 
> Figured you wanted that in a central location.
> 
Usually yes, but for this test, it just makes more sense to do this in the test file.

> For osmesa, we don't link with it, and it's not system provided -- it's a
> DLL/.so that will have to be made available in some known location on the test
> machines.  So we can put it wherever we need to -- we can even check in the
> binaries directly into the tree if we have an easy way to get at that while
> running the tests.
We can stick the binaries into the tree and package them up in the make package-tests target; that will ensure they arrive on the test systems.  I imagine these pieces have to land in the components folder?  Would they need to be added to the components white-list as well?
Nope, they're just bare DLLs/.so's that we'll load directly; they just have to be somewhere we can get at them.  An osmesa subdir under the toplevel directory would be ideal, since then we can find their location by getting the GRE directory and appending known strings to it.
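The "GRE directory plus known strings" lookup could be sketched like this. The existence check is injected so the sketch is self-contained; in the real test it would be nsIFile.exists() on a path obtained from the directory service's "GreD" key, and the "osmesa" subdirectory name is the proposal above, not an existing convention.

```javascript
// Find the first OSMesa library candidate that exists under the GRE
// directory's (proposed) osmesa subdir. "exists" is injected so the logic
// can be exercised without XPCOM; candidates is an ordered list of file
// names to try.
function findOSMesa(greDir, candidates, exists) {
  for (var i = 0; i < candidates.length; i++) {
    var path = greDir + "/osmesa/" + candidates[i];
    if (exists(path))
      return path;
  }
  // Nothing found: the caller should log a clear failure message.
  return null;
}
```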
Continuing replying...

(In reply to comment #6)
> = Logging =
> Not sure how familiar you are with mochitest.  The only items that are going to
> get logged are the items in the ok(), is(), todo() statements.

I'm very new to mochitest but I had understood that :-)

> So, you'll need
> to make those as granular as possible.  Ideally, I'd like to see a 1:1 ratio
> between the webgl 'PASS' statements and these mochitest asserts.  Failing that,
> we need at least a mochitest ok() per webgl test file.

OK, I understand, but I have a question: when I add a todo() for a currently failing test, and that test eventually starts passing, my todo() will report a failure. Keeping track of that for all the webgl test files loaded automatically from the test list would be very tedious, let alone keeping track of it for every sub-test (those PASS statements).

So maybe this should only be done once we pass 100% of the webgl tests -- hopefully in a few weeks? That way we can add these as ok() right away instead of todo().


> The more granular you
> can make these, the easier it will be to debug failures in the future.

Sure, I understand.


> 
> As you're going to run the webgl tests twice, once with hardware rendering and
> once with software rendering, I think we want to call out the drawing mode in
> the mochitest log. You can do that pretty easily if you add a function that
> queries the current drawing pref and returns a string.  Something like this
> pseudocode:
> function getWebGlLogMsg(aMsg) {
>   var prefix = ''
>   if (pref == 'hardwaremode')
>     prefix = 'native_gl_mode::';
>   else
>     prefix = 'osmesa_mode::';
>   return prefix + aMsg;
> }
> 
> Then in your mochitest assertions be sure to use it:
> ok(false, getWebGlLogMsg('oh no this failed'));
> 
> But please choose something more readable than 'getWebGlLogMsg' ;)  But that
> way, it will be quickly obvious to any developer reading the mochitest log file
> that we failed in hardware mode versus software mode or vice-versa.

ok sure; this is actually quite similar to what I'm currently doing:

todo_is(this.totalFailed(), 0, 'Test failures with ' + GLImplementationName());
Yeah, I think eventually we should have ok() around every test, but for now it's probably fine to ok() entire files and todo() files that have failing things.  That shouldn't be too invasive, right?  That way we can check things in and still get test coverage before we have everything passing, without needing to do a bunch of merges when the upstream is updated.
This has the advantage that it can allow us to enable actual testing without waiting for the whole test suite to pass, so yes it is probably quite pragmatic.

I will have a look at that. It requires one to specify manually which test pages are known to fail. Not a too big deal indeed.
Ah, here's an idea: let's have such a list of test pages known to have at least some failure/timeout, and for each test page,
 - if it is in that list, we just do a todo() for this whole page
 - if it is NOT in that list, we do ok() on each sub-test inside that page
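That dispatch could be sketched as follows; the returned mode strings are placeholders standing in for the actual todo()/ok() calls, and the URLs in the test are illustrative.

```javascript
// Decide how to report a finished test page against a list of pages known
// to have at least one failure or timeout.
function reportModeFor(pageUrl, knownFailingPages) {
  // Listed page: a single todo() covering the whole page.
  // Unlisted page: an ok() per sub-test inside the page.
  return knownFailingPages.indexOf(pageUrl) !== -1
      ? "todo-whole-page"
      : "ok-per-subtest";
}
```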

Comment 15

9 years ago
(In reply to comment #14)
> Ah, here's an idea: let's have such a list of test pages known to have at least
> some failure/timeout, and for each test page,
>  - if it is in that list, we just do a todo() for this whole page
>  - if it is NOT in that list, we do ok() on each sub-tests inside of that page
I think that gets us the best of both worlds.  I'm happy with either this or the proposal Vlad put forward up in comment 12.  Up to you Benoit.

For getting the OSMesalib onto the system under test, I'll propose this which should work for our three normal platforms and for remote-testing on Fennec should we decide to do this testing on those platforms.
* Put the osmesalib binaries into the tree under the webgl test directory.
* Tweak a makefile to ensure these get packaged into the packaged tests zip/tar.bz that is packaged after the build is completed.
* When mochitest starts, I'll change the harness to check for the existence of the OSMesaLib binary files.  
** If the binary files are found, they are copied into the testing profile.
** The mochitest here can reference them from a profD path.
** If the binary files are not found, then nothing is copied and they will not exist in the profile.

How's that sound?
OK, here's a new patch implementing most (but not yet all) of what we discussed.

Most importantly it does add a list of known-to-be-failing pages, and does a ok() statement for every subtest of these, and does a single ok() statement for successful pages.

A problem was that there is a small diff between the set of failing pages with default GL and with OSMesa, on my system (+/- 3 either side). To address this, the list of failing pages mentions all pages that are failing with either GL flavor, and only when a page succeeds with both GL flavors is there an error message asking for it to be removed from the list.
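The "either flavor" bookkeeping described here could be sketched like this; the flavor keys and per-URL result maps are placeholders, not the patch's actual data structures.

```javascript
// Given the known-failing list and per-page results from the two runs
// (default GL, then OSMesa), report which entries passed with BOTH flavors
// and can therefore be removed from the list.
function removableEntries(knownFailing, resultsByFlavor) {
  return knownFailing.filter(function (url) {
    return resultsByFlavor.defaultGL[url] === true &&
           resultsByFlavor.osmesa[url] === true;
  });
}
```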

I don't ask for review yet; most importantly I need to rename the test_list.txt files to something that mochikit likes...
Attachment #460539 - Attachment is obsolete: true
Most importantly, with this new version, the mochitest reports success (an important step, I guess, towards having it run on the test servers).  (Of course it only reports success because the failing tests are listed in the list of known failing tests, but still.)
I think this is my final patch! Please review.

Compared to my last comment, this new version:
 - renames test_list.txt to 00_test_list.txt as was already the case in the subdirs.
 - sets time-outs appropriately (hopefully). The per-test time-out is increased from 3 to 10 seconds. The global mochitest time-out is multiplied by 2, because on a debug build the suite takes 2 minutes here, which was "dangerously close" to the default, which I understand is 5 minutes (right?).
 - some modifications to webgl-test-harness.js that I'm going to propose upstream.
 - reports on the number of expected-to-fail test pages.
Attachment #461384 - Attachment is obsolete: true
Attachment #461626 - Flags: review?(vladimir)
Attachment #461626 - Flags: review?(ctalbert)

Comment 20

9 years ago
Comment on attachment 461626 [details] [diff] [review]
Integrate WebGL conformance suite as a mochitest

This looks really good.  I have just a couple of nits.  I'm going to run it myself, so I might have more to say after I do that, but this looks great so far.  I'll complete the review after I run it in case I have more to say.

># HG changeset patch
># Parent db921e36971596d72f3607190ce96b59f280e07d
>
>diff --git a/content/canvas/src/WebGLContext.cpp b/content/canvas/src/WebGLContext.cpp
>--- a/content/canvas/src/WebGLContext.cpp
>+++ b/content/canvas/src/WebGLContext.cpp
>diff --git a/content/canvas/test/webgl/test_webgl_conformance_test_suite.html b/content/canvas/test/webgl/test_webgl_conformance_test_suite.html
>new file mode 100644
>+function start() {
>+
>+  function setOSMesaLib(value) {
>+    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
>+    webglBranch().setCharPref("osmesalib", value);
>+  }

>+
>+  function getOSMesaLib() {
>+    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
>+    return webglBranch().getCharPref("osmesalib");
>+  }
>+
>+  function restoreOldPrefs() {
>+    setEnabledForAllSites(saved_enabled_for_all_sites);
>+    setForceOSMesa(saved_force_osmesa);
>+    setOSMesaLib(saved_osmesalib);
>+  }
>+
>+  function create3DContext(canvas)
>+  {
>+    setEnabledForAllSites(true);
>+    setForceOSMesa(forceOSMesaInThisTestRun());
>+    setOSMesaLib("libOSMesa.so.6");
If you can give me the set of three files (windows, linux, and mac) for this library, I can create the bit of harness logic that will automatically place this file into the profile for you and set the path to it in the "osmesalib" preference.  That way we won't have to hard code this.  Once we have that, you likely won't need the "setOSMesaLib" function, as you'll be able to depend on the harness setting that preference for you.

>+  Page.prototype.finishPage = function(success) {
>+    var msg = ' (' + this.totalSuccessful + ' of ' +
>+              this.totalTests + ' passed)';
>+    if (success === undefined) {
>+      var css = 'testpagetimeout';
>+      msg = '(*timeout*)';
>+      ++this.totalTests;
>+      ++this.totalTimeouts;
>+      if (this.isExpectedToFullyPass()) {
>+        ok(false, this.errormsg('Unexpected timeout in this test page'));
>+      }
>+    } else if (this.totalSuccessful != this.totalTests) {
>+      var css = 'testpagefail';
>+      // failures have already been reported for the sub-tests
>+    } else {
>+      var css = 'testpagesuccess';
>+      testsSuccesses.push(this.url);
>+      if (this.isExpectedToFullyPass()) {
>+        ok(true, this.errormsg('Successful test page, so you should never see this message'));
Actually, you will see all the TEST-PASS messages when this is running via buildbot in the log output. Unlike xpcshell, mochitest will print out the passing message, so let's change this to just 'Successful test page' so as not to confuse people reading the logs later.
(In reply to comment #20)
> Comment on attachment 461626 [details] [diff] [review]
> Integrate WebGL conformance suite as a mochitest
> 
> This looks really good.  I have just a couple of nits.  I'm going to run it
> myself, so I might have more to say after I do that, but this looks great so
> far.  I'll complete the review after I run it in case I have more to say.
> 
> ># HG changeset patch
> ># Parent db921e36971596d72f3607190ce96b59f280e07d
> >
> >diff --git a/content/canvas/src/WebGLContext.cpp b/content/canvas/src/WebGLContext.cpp
> >--- a/content/canvas/src/WebGLContext.cpp
> >+++ b/content/canvas/src/WebGLContext.cpp
> >diff --git a/content/canvas/test/webgl/test_webgl_conformance_test_suite.html b/content/canvas/test/webgl/test_webgl_conformance_test_suite.html
> >new file mode 100644
> >+function start() {
> >+
> >+  function setOSMesaLib(value) {
> >+    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
> >+    webglBranch().setCharPref("osmesalib", value);
> >+  }
> 
> >+
> >+  function getOSMesaLib() {
> >+    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
> >+    return webglBranch().getCharPref("osmesalib");
> >+  }
> >+
> >+  function restoreOldPrefs() {
> >+    setEnabledForAllSites(saved_enabled_for_all_sites);
> >+    setForceOSMesa(saved_force_osmesa);
> >+    setOSMesaLib(saved_osmesalib);
> >+  }
> >+
> >+  function create3DContext(canvas)
> >+  {
> >+    setEnabledForAllSites(true);
> >+    setForceOSMesa(forceOSMesaInThisTestRun());
> >+    setOSMesaLib("libOSMesa.so.6");
> If you can give me the set of three files (windows, linux, and mac) for this
> library, I can create the bit of harness logic that will automatically place
> this file into the profile for you and set the path to it in the "osmesalib"
> preference.  That way we won't have to hard code this.  Once we have that, you
> likely won't need the "setOSMesaLib" function, as you'll be able to depend on
> the harness setting that preference for you.

On linux distros, it's libOSMesa.so.[1-9]. A Google search finds at least the numbers 3,4,6,7,8. Here on fedora 13, it's 6.

Many distros, including Fedora here, don't add a libOSMesa.so symlink, so appending the number is really needed.

On Windows, Vlad's build has the filename osmesa32.dll.

On Mac, it seems to be libOSMesa.7.dylib, but I would also try other numbers, as on Linux.
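The platform names above could be turned into an ordered candidate list to probe. This is a sketch: the ordering and the exact set of Linux version numbers are guesses based on the distributions mentioned (the bug itself notes 3, 4, 6, 7 and 8 have been seen in the wild).

```javascript
// Candidate OSMesa library file names per platform, most-likely first.
// Linux distros ship libOSMesa.so.N with no unversioned symlink, so several
// version numbers are tried.
function osmesaLibNames(platform) {
  if (platform === "linux")
    return [6, 7, 8, 4, 3].map(function (n) { return "libOSMesa.so." + n; });
  if (platform === "win32")
    return ["osmesa32.dll"];
  if (platform === "darwin")
    return ["libOSMesa.7.dylib"];
  return [];
}
```

This kind of name-guessing is exactly what Clint's later proposal (checking a known library into the tree and shipping it with the packaged tests) is meant to avoid.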

> 
> >+  Page.prototype.finishPage = function(success) {
> >+    var msg = ' (' + this.totalSuccessful + ' of ' +
> >+              this.totalTests + ' passed)';
> >+    if (success === undefined) {
> >+      var css = 'testpagetimeout';
> >+      msg = '(*timeout*)';
> >+      ++this.totalTests;
> >+      ++this.totalTimeouts;
> >+      if (this.isExpectedToFullyPass()) {
> >+        ok(false, this.errormsg('Unexpected timeout in this test page'));
> >+      }
> >+    } else if (this.totalSuccessful != this.totalTests) {
> >+      var css = 'testpagefail';
> >+      // failures have already been reported for the sub-tests
> >+    } else {
> >+      var css = 'testpagesuccess';
> >+      testsSuccesses.push(this.url);
> >+      if (this.isExpectedToFullyPass()) {
> >+        ok(true, this.errormsg('Successful test page, so you should never see this message'));
> Actually, you will see all the TEST-PASS messages when this is running via
> buildbot in the log output. Unlike xpcshell, mochitest will print out the
> passing message, so let's change this to just 'Successful test page' so as not
> to confuse people reading the logs later.

ah ok. uploading a new patch.
this new version only has the fixed message, 'Successful test page', as discussed in previous comment.
Attachment #461626 - Attachment is obsolete: true
Attachment #462432 - Flags: review?(vladimir)
Attachment #462432 - Flags: review?(ctalbert)
Attachment #461626 - Flags: review?(vladimir)
Attachment #461626 - Flags: review?(ctalbert)

Comment 23

9 years ago
Comment on attachment 462432 [details] [diff] [review]
Integrate WebGL conformance suite as a mochitest

Ran it on mochitest, and it works great, so r+.  It also exposes a bug in mochitest, which I've filed as bug 584611.  I think we're good to go with this patch, thanks for your patience, it's been a crazy week.  

I have several more questions regarding the osmesalib library detection.

Since this is a library that we have to install on all the testing systems can we:
1. take the current versions for our testing systems and check it in to m-c
2. At package test time, that library will be bundled with mochitest
3. The file will be available in the profile directory that mochitest runs with.

Your code in the test harness would then:
1. If the osmesalib preference does not exist, the code checks the OS; if the OS is in our list of test-OS directories in the profile (see below), then we use that osmesalib library. If not, go to step 2.
2. If we can't find it on the system and it's not in our list, we fail to run that set of tests.

So the profile would have a structure that looks something like:
profile/
|....fedora/
|........osmesalibfedora.so
|....win7/
|........osmesalibwin7.dll
|....winxp/
|........osmesalibwinxp.dll
and so on for each different combination of os and library that we have test machines for.
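A rough sketch of what the harness-side lookup proposed above could look like; note this is not actual runtests.py code, and the map keys, subdirectory names, and library filenames are illustrative assumptions only:

```python
import os
import sys

# Illustrative per-OS lookup table, mirroring the proposed profile layout.
# The keys and filenames here are assumptions for the sketch.
OS_LIBRARY_MAP = {
    "linux": ("fedora", "osmesalibfedora.so"),
    "win32": ("win7", "osmesalibwin7.dll"),
}

def find_bundled_osmesa(profile_dir):
    """Step 1: return the bundled OSMesa path for this OS if it exists
    in the profile; otherwise return None, meaning step 2 applies
    (fail to run that set of tests, with a pointer to --set-pref)."""
    entry = OS_LIBRARY_MAP.get(sys.platform)
    if entry is None:
        return None
    subdir, libname = entry
    candidate = os.path.join(profile_dir, subdir, libname)
    return candidate if os.path.exists(candidate) else None
```

A developer on an unsupported OS would get None here, and the error path could then suggest setting the webgl.osmesalib pref manually.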


Now, this is not so good if you're a developer working on an OS that we don't have automated test boxes for.  If a developer falls into that case, they will see an error message as per step 2, and that error message could direct them to use the --set-pref option (on runtests.py) to set their own webgl.osmesalib preference to point to their own installation of the osmesalib library. 

If we do that, then the automation boxes are happy and the developers are largely happy too.  The largest set of developers who will likely be affected by this are those running Ubuntu. 

I think this will be more robust than trying to guess the name of the library on the version/distribution of the operating system you happen to find yourself on.
Attachment #462432 - Flags: review?(ctalbert) → review+
(In reply to comment #23)
> Comment on attachment 462432 [details] [diff] [review]
> Integrate WebGL conformance suite as a mochitest
> 
> Ran it on mochitest, and it works great, so r+.  It also exposes a bug in
> mochitest, which I've filed as bug 584611.  I think we're good to go with this
> patch, thanks for your patience, it's been a crazy week.  

It has! But you're welcome and I'm happy to have learned stuff, too.

> 
> I have several more questions regarding the osmesalib library detection.
> 
> Since this is a library that we have to install on all the testing systems can
> we:
> 1. take the current versions for our testing systems and check it in to m-c
> 2. At package test time, that library will be bundled with mochitest
> 3. The file will be available in the profile directory that mochitest runs
> with.
> 
> Your code in the test harness would then:
> 1. If the osmesalib does not exist, then the code checks the os and if the os
> is in our list of test os directories in the profile (see below) then we use
> that osmesalib library. If not, go to step 2
> 2. If we can't find it on the system and it's not in our list, we fail to run
> that set of tests.
> 
> So the profile would have a structure that looks something like:
> profile/
> |....fedora/
> |........osmesalibfedora.so
> |....win7/
> |........osmesalibwin7.dll
> |....winxp/
> |........osmesalibwinxp.dll
> and so on for each different combination of os and library that we have test
> machines for.
> 
> 
> Now, this is not so good if you're a developer working on an OS that we don't
> have automated test boxes for.  If a developer falls into that case, they will
> see an error message as per step 2, and that error message could direct them to
> use the --set-pref option (on runtests.py) to set their own webgl.osmesalib
> preference to point to their own installation of the osmesalib library. 
> 
> If we do that, then the automation boxes are happy and the developers are
> largely happy too.  The largest set of developers who will likely be affected
> by this are those running Ubuntu. 
> 
> I think this will be more robust than trying to guess the name of the library
> on the version/distribution of the operating system you happen to find yourself
> on.

OK, it's for you to decide; but in your proposal, when I want to run just my test with

TEST_PATH=content/canvas/test/webgl/test_webgl_conformance_test_suite.html make mochitest-plain

how do I have to modify this command line to get it to find OSMesa?

Since I am used to linux, I am used to having software automatically find libraries for me without me having to do anything, but I agree that just having to pass a command-line argument is not too bad.
ah wait, I replied too fast:

(In reply to comment #23)
> Since this is a library that we have to install on all the testing systems can
> we:
> 1. take the current versions for our testing systems and check it in to m-c

Here, libOSMesa.so.6 weighs 2 megabytes. If you multiply by the number of testing platforms we have, you get something around 20 MB. Every time we later update these binary files, that'll be another 20 MB. I think that's the kind of reason why people frown upon checking generated files into VCSs...

This is only needed for our own test servers (other people like me just want the mochitest to pick up the OSMesa they already have on their system, or else just skip or fail that test, you decide), so I am not convinced that this needs to be checked into m-c.
If you're going to write Mochitests that expect to be able to run with OSMesa, and you expect them to be run anywhere other than tinderbox, then I think you have no choice but to check the binaries into mozilla-central.

We can make the Mochitest harness set prefs or env vars or whatever is easiest to give you the full path to the binary. We could, for example, set a dummy pref to the value, and then your tests would just need to copy its value to the correct pref in order to use it.

We could also make the harness search the system for a pre-installed libOSMesa, so that if the user already has it installed we can just use their copy which is more likely to work. I think we probably need to do both in order to make this work for most people.
(In reply to comment #26)
> If you're going to write Mochitests that expect to be able to run with OSMesa,
> and you expect them to be run anywhere other than tinderbox, then I think you
> have no choice but to check the binaries into mozilla-central.

That's true: IF we want it to work _seamlessly_ for everybody, THEN we need to check binaries into m-c; BUT let's emphasize that we probably don't want to check binaries into m-c, THUS we must not aim to make it work _seamlessly_ for everybody.

This means that if random people like me want to run this mochitest on our own machines, we have to be ready to:
1. install OSMesa by ourselves
2. either install it to a system-recognized path or specify in some way the path where it's installed
3. there remains the question of the filename: if we want to auto-detect it, there are several filenames to try on each platform, see comment 21 above.

> We can make the Mochitest harness set prefs or env vars or whatever is easiest
> to give you the full path to the binary. We could, for example, set a dummy
> pref to the value, and then your tests would just need to copy its value to the
> correct pref in order to use it.
> 
> We could also make the harness search the system for a pre-installed libOSMesa,
> so that if the user already has it installed we can just use their copy which
> is more likely to work. I think we probably need to do both in order to make
> this work for most people.

Yes, I agree, having both would be optimal. Some auto-detection is good because on many systems it's not hard to detect at all; e.g. on linux you just need to check libOSMesa.so.[1-9] in decreasing order, and there's no need to worry about full paths at all. But then it's also good to give a way to force a given path.

Do you want me to provide an updated version of the mochitest doing the auto-detection on linux + honoring some dummy pref if it's set ?
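The auto-detection described above (probe versioned sonames in decreasing order, with an optional forced path taking precedence) could be sketched like this; the soname range and the forced-path convention are assumptions for the example, not the actual mochitest code:

```python
import ctypes
import ctypes.util

def probe_osmesa(forced_path=None):
    """Try a forced path first (e.g. taken from a webgl.osmesalib pref),
    then probe libOSMesa.so.9 down to libOSMesa.so.1 on the default
    library search path. Returns the first name that loads, or None."""
    candidates = []
    if forced_path:
        candidates.append(forced_path)
    candidates += ["libOSMesa.so.%d" % v for v in range(9, 0, -1)]
    for name in candidates:
        try:
            ctypes.CDLL(name)  # dlopen succeeds only if the library exists
            return name
        except OSError:
            continue
    return None
```

On a system without OSMesa this returns None, which would map to skipping the OSMesa part of the test run rather than failing it.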
I'm not wild about having Mochitests that won't run on developer machines. How are people supposed to test things locally if it requires extra setup? It seems like a recipe for failure. "I ran the Mochitests locally and they were fine, but then I checked in and broke the tree."
If people want to run tests to make sure that their changes don't break the tree, they have to use the Try server anyway, no? How else can they make sure that they don't break things for other platforms than theirs?

Also, even if you want this mochitest to run in the same way for everybody without any custom step to follow, there remains the possibility of declaring that OSMesa is a dependency for building firefox's tests. This also opens the door to detecting OSMesa at configure-time when tests are enabled. One could then add the OSMesa DLL to the archive of stuff needed to build firefox on windows, etc.
That would be acceptable. We could check in configure or when Mochitest starts and error out (with a useful error message indicating how to find the library).
OK, great. So a big prerequisite is to have OSMesa binaries for
 - Windows 32 bit
 - Windows 64 bit
 - Mac OS X

On linux, this is handled by the distros, though it won't hurt if we can tell the user the name of the package to install, at least for the most important distros; I'm taking care of that.

I don't have a big preference between doing this at configure time and at mochitest start time. Perhaps it's just a bit more logical to do it at configure time (again, only when building with tests enabled) like the rest; what's the point anyway of building tests that will refuse to run?
Package names on linux distros (note that since we don't use headers, we don't need -devel packages):

Debian (all versions): libosmesa6
Ubuntu (all versions): libosmesa6
Fedora (all versions): mesa-libOSMesa
openSUSE (all versions): Mesa
Two benefits to doing it in the test harness instead of configure:
1) people who just want to build and not run the tests don't get bothered
2) people running tests via the test package instead of building their own will get notified (we run tests this way on tinderbox, it's something we could reasonably hit and it would be good to get a sane error message early)
(In reply to comment #33)
> Two benefits to doing it in the test harness instead of configure:
> 1) people who just want to build and not run the tests don't get bothered

Do you mean "who just want to build firefox" or "who just want to build the tests" ?

If they just want to build firefox, they should not pass --enable-tests to configure, and then, they won't get bothered (because we'd only require OSMesa if tests are enabled).

If they want to build the tests and not run them... I hadn't thought of that case, I didn't know people would want such a thing. You decide :-)

> 2) people running tests via the test package instead of building their own will
> get notified (we run tests this way on tinderbox, it's something we could
> reasonably hit and it would be good to get a sane error message early)

Ah OK, I didn't know.
(In reply to comment #34)
> (In reply to comment #33)
> If they just want to build firefox, they should not pass --enable-tests to
> configure, and then, they won't get bothered (because we'd only require OSMesa
> if tests are enabled).
> 
> If they want to build the tests and not run them... I hadn't thought of that
> case, I didn't know people would want such a thing. You decide :-)

--enable-tests is the default, so that would mean that everyone who just wants a build would have to --disable-tests. Let's just put this in runtests.py.
(In reply to comment #35)
> (In reply to comment #34)
> > (In reply to comment #33)
> > If they just want to build firefox, they should not pass --enable-tests to
> > configure, and then, they won't get bothered (because we'd only require OSMesa
> > if tests are enabled).
> > 
> > If they want to build the tests and not run them... I hadn't thought of that
> > case, I didn't know people would want such a thing. You decide :-)
> 
> --enable-tests is the default, so that would mean that everyone who just wants
> a build would have to --disable-tests. Let's just put this in runtests.py.

Ah OK.
Er wait, no -- we need to check in the OSMesa binaries into the tree.  I would like to have an explicit known version of OSMesa that everyone is using for baseline tests; we can upgrade those as needed, and people should be able to test using their own OSMesa/GL/whatever if they want, but that should be an override pref or something similar -- perhaps by having the test harness not override the pref if one is already set or something.  But for the tests we need to have consistent versions, which means we must have binaries checked in (for all platforms even, including linux).
(In reply to comment #37)
> Er wait, no -- we need to check in the OSMesa binaries into the tree.  I would
> like to have an explicit known version of OSMesa that everyone is using for
> baseline tests; we can upgrade those as needed, and people should be able to
> test using their own OSMesa/GL/whatever if they want, but that should be an
> override pref or something similar -- perhaps by having the test harness not
> override the pref if one is already set or something.  But for the tests we
> need to have consistent versions, which means we must have binaries checked in
> (for all platforms even, including linux).

But if we check OSMesa binaries into m-c, we make m-c heavier for everybody! And every time we update these binaries, the weight adds up. Also, this is irreversible.

And on linux, shipping binaries is nontrivial (it might well be feasible to build OSMesa binaries that work well across distros, due to the nature of OSMesa, but in general that's not always possible) and often not wanted (people are serious about only getting binaries through their distro's package manager).

If I understand well, the only reason to do this would be in case of inconsistencies between different versions of OSMesa. Couldn't we wait until a concrete problem happens in this area, before we go that route?
(In reply to comment #38)
> But if we check OSMesa binaries into m-c, we make m-c heavier for everybody!
> And everytime we update these binaries, the weight adds up. Also, this is
> irreversible.

An m-c checkout + working dir is ~1GB on disk.  A single debug build is a few GB.  OSMesa for win32 is 2MB, and I'd expect the others to be similar, so let's say 6MB total.  Maybe updated once every 6 months to a year; that seems pretty minor in the long run.  Maybe growing to 10MB if we add x86-64 or ARM or something.

> And on linux, shipping binaries is nontrivial (it might well be feasible to
> build OSMesa binaries that work well across distros, due to the nature of
> OSMesa, but in general that's not always possible) and often not wanted (people
> are serious about only getting binaries through their distro's package manager).

It should be possible to build a pretty generic OSMesa that doesn't depend on anything other than libc.  We're not trying to solve a generic problem.  As for the second part, we're talking about developers developing software here.  It's not like we're going to install a .so in /usr/lib (or anywhere).

> If I understand well, the only reason to do this would be in case of
> inconsistencies between different versions of OSMesa. Couldn't we wait until a
> concrete problem happens in this area, before we go that route?

No, that's a side benefit -- the reason to do this would be so that the WebGL tests can run by default on every developer's machine.  Our Windows and MacOSX developers will neither have OSMesa nor will they have an easy way to get it, and those are our most important platforms.  So we'll end up disabling the tests on those platforms if OSMesa isn't found/set up, which means that no developer will ever bother to install it.

There is no way to "require OSMesa" on win32 or MacOS X.  Those developers can't "apt-get install osmesa" (maybe OSX users can port install?  but enforcing a full mesa build on all OSX developers whenever they happen to get this changeset will not be a popular move at all; it doesn't help windows developers at all, where we'd have to at best ship osmesa with mozilla-build or something).

The only two options are: 1) put osmesa binaries in m-c; 2) don't run webgl tests by default (that is, skip them if osmesa isn't found).
OK, fine!

(In reply to comment #39)
> The only two options are: 1) put osmesa binaries in m-c; 2) don't run webgl
> tests by default (that is, skip them if osmesa isn't found).

OK, I don't have a preference between 1 and 2, let others choose.
Ok, bjacob and I talked -- probably the best option for right now is #2; basically, skip them if OSMesa is not found.

Given that -- do you guys think we can get this in this week?  Clint, what's needed on your side to get things deployed so that we can have an OSMesa library wherever we need it?  And what do you need from us?  (I assume binaries, at least)
Clint's out this week, I'm on point for helping you out. If you need to get binaries onto the test machines, you'll need to provide binaries and file a RelEng bug to get them deployed. Making runtests.py find the binaries should be easy enough, we can either put them in a known location, in the library search path, or require that an environment variable is set pointing to their location.
This new version should be ready to be checked in!

Changes since previous version:

* The OSMesa filename pref is no longer set by the mochitest, it is now the responsibility of the user to set it, by setting the EXTRA_TEST_ARGS environment variable, e.g.

   EXTRA_TEST_ARGS='--setpref=webgl.osmesalib=\"libOSMesa.so.6\"'

* OSMesa is now optional. If trying to run WebGL with OSMesa somehow fails (for example because the OSMesa prefs are not properly set), the OSMesa part is completely skipped. Thus you can still run the tests successfully, it will only test with your native GL.

* However, when checking which tests should be removed from the list of known-to-fail tests, only those tests that succeeded twice (with and without OSMesa) are reported. This means that if you want to be able to update the list of known-to-fail tests, you do need OSMesa.

* a test that was consuming stupid amounts of memory (potentially 4 GB) was disabled (subtest of quickCheckAPIBadArgs.html for bufferData())

* updated README.mozilla

* Also a change that will interest only Vlad:  in WebGLContext.cpp, better (more platform-specific) error messages when WebGL context creation fails.
Attachment #467890 - Flags: review?(vladimir)
Attachment #467890 - Flags: review?(ctalbert)
Note: the only reason why I'm not marking the previous patch as 'obsolete' is that it carries a r+ from Clint.

Comment 46

9 years ago
Comment on attachment 467890 [details] [diff] [review]
Integrate WebGL conformance suite as a mochitest, updated

In general for such minor changes to a patch, it is usually OK to carry an r+ forward to the new version.

I tried to land the related "test detection" bug (bug 584611) this morning, but it bounced, unfortunately.  I'll re-land it on Monday as I'll be afk all weekend.  

Since we're going the --EXTRA_TEST_ARG route, we need to file a RelEng bug to have them install the osmesa binaries on their slaves.  Have you done that yet? (Mozilla.org::Release Engineering, be as specific as possible, provide a link to the binaries for Fedora, Ubuntu, Win7, Win2k3, Win XP, Mac 10.5, and Mac 10.6 and reference this bug)

It would be great to land this now and then turn on the rest of the OSMesa stuff once the binaries land on the slaves; that way WebGL support in Firefox 4 is not gated on the buildslave upgrade, which can take a little time (there are something like 500 buildslaves).  We just have to be certain that running without OSMesa will not cause unexpected failures in the tests so that it can run in the meantime.
Attachment #467890 - Flags: review?(ctalbert) → review+

Updated

9 years ago
Depends on: 584611
Depends on: 589814
The rel-eng bug is there: bug 589814

Notice that the latest version of my Mochitest patch allows the Mochitest to run (even succeed!) without OSMesa, provided of course that some other OpenGL is available. But of course, the test coverage is more extensive/reliable with OSMesa, so it is very desirable to have OSMesa everywhere.
Final version of the patch, carrying forward r+ from Clint and Vlad.

The only difference from the previous version is that now, when WebGL context creation fails (i.e. no OpenGL is available anyhow), we record that as a todo() in the mochitest, instead of as a failure. When WebGL succeeds but not with force_osmesa, we also record that as a separate todo().

The point of this change is to be able to land this now, and take care of the rel-eng side of things later (making sure that the test machines are WebGL-capable).
Attachment #468381 - Flags: review+
... updated once again because of a typo.
Attachment #468381 - Attachment is obsolete: true
Attachment #468384 - Flags: review+
... this is starting to look like a joke: forgot to save my file before doing 'hg qref'.
Attachment #468384 - Attachment is obsolete: true
Attachment #468386 - Flags: review+
Clint: Hope you can land your fix soon!

Comment 52

9 years ago
(In reply to comment #51)
> Clint: Hope you can land your fix soon!
It caused an orange on mochitest 5.  I'm running it on try server again now, and I think I should have it resolved today.
Carrying forward r+.

This new version no longer relies on nsinstall copying directories recursively, as that fails on Windows.

Using tar c  | tar x instead.
Attachment #468782 - Flags: review+
also, this new version has a workaround for bug 584611.
This new version disables the running of the tests on native GL, unless the new pref webgl.mochitest_native_gl is set to true.

It turns out that differences between GL hardware make it hard to run on native GL without causing a few test failures for now.

Thus OSMesa becomes the 'main' way of running this mochitest. A corollary of this change is that if you don't have OSMesa AND you didn't set mochitest_native_gl to true, then this mochitest does nothing at all.
Attachment #468807 - Flags: review+
Landed: http://hg.mozilla.org/mozilla-central/rev/7017c3afc797

Let's wait a bit to make sure it doesn't bounce....
This was marked as blocking beta5. Removed that flag. Can someone mark it as... whatever is appropriate? This should be done before the feature freeze, but potentially after beta5.
blocking2.0: beta5+ → ?
I had another quick look at it... since the last time, we realized that OSMesa was too buggy and unmaintained to be very useful, so we now aim at running the mochitest only once per machine, on actual GL (or ANGLE).

This patch simplifies the mochitest a lot (no need to pass any pref anymore) and updates the list of failing tests. As a temporary measure it also turns 'unexpected successes' into todo's (instead of failures), as that is absolutely necessary in order to pass on different hardware with different failures. Of course, we want to eventually pass 100% everywhere, so we'll eventually be able to make unexpected successes be failures again.

This patch is meant to be applied on top of the patch in bug 593496, as it fixes a bunch of failures.
Attachment #474023 - Flags: review?(vladimir)
Great success on Linux opt. It's nice!    http://tinderbox.mozilla.org/showlog.cgi?log=MozillaTry/1284130456.1284131348.19639.gz&fulltext=1

On the other hand, on Mac, the CGL context creation is crashing: http://tinderbox.mozilla.org/showlog.cgi?log=MozillaTry/1284130180.1284130571.16454.gz

Here are the last lines of output before the crash:

34282 INFO TEST-START | /tests/content/canvas/test/webgl/test_webgl_conformance_test_suite.html
colorbits: 24 alpha: 8 depth: 16 stencil: 0
2010-09-10 07:55:31.396 firefox-bin[442:903] invalid pixel format
2010-09-10 07:55:31.396 firefox-bin[442:903] invalid context
TEST-UNEXPECTED-FAIL | /tests/content/canvas/test/webgl/test_webgl_conformance_test_suite.html | Exited with code 1 during test run
INFO | automation.py | Application ran for: 0:04:14.276568
INFO | automation.py | Reading PID log: /var/folders/H5/H5TD8hgwEqKq9hgKlayjWU+++TM/-Tmp-/tmpguBCIwpidlog
PROCESS-CRASH | /tests/content/canvas/test/webgl/test_webgl_conformance_test_suite.html | application crashed (minidump found)


Here's the relevant crash info:

Crash reason:  EXC_BAD_ACCESS / KERN_INVALID_ADDRESS
Crash address: 0x1f

Thread 0 (crashed)
 0  XUL!mozilla::gl::GLContextProviderCGL::GetGlobalContext [GLContext.h:c7ab78580846 : 407 + 0x0]
    rbx = 0x00000000   r12 = 0x00000000   r13 = 0x5fbfa460   r14 = 0x1ecba670
    r15 = 0x5fbfa530   rip = 0x00db165e   rsp = 0x5fbfa3a0   rbp = 0x00000000
 1  XUL!mozilla::gl::CreateOffscreenFBOContext [GLContextProviderCGL.mm:c7ab78580846 : 326 + 0x4]
    rbx = 0x00000001   r12 = 0x00000000   r13 = 0x5fbfa460   r14 = 0x1ecba670
    r15 = 0x5fbfa530   rip = 0x00db1449   rsp = 0x5fbfa400   rbp = 0x5fbfa4e0
 2  XUL!mozilla::gl::GLContextProviderCGL::CreateOffscreen [GLContextProviderCGL.mm:c7ab78580846 : 483 + 0x11]
    rbx = 0x00000000   r12 = 0x00000000   r13 = 0x5fbfa460   r14 = 0x1ecba670
    r15 = 0x5fbfa530   rip = 0x00db2149   rsp = 0x5fbfa440   rbp = 0x5fbfa4e0
 3  XUL!mozilla::WebGLContext::SetDimensions [WebGLContext.cpp:c7ab78580846 : 342 + 0xc]
    rbx = 0x1ece6570   r12 = 0x0000012c   r13 = 0x00000096   r14 = 0x1ecba670
    r15 = 0x5fbfa530   rip = 0x00443d87   rsp = 0x5fbfa4e0   rbp = 0x80004005
 4  XUL!nsHTMLCanvasElement::GetContext [nsHTMLCanvasElement.cpp:c7ab78580846 : 449 + 0xc]
    rbx = 0x1ece6578   r12 = 0x004456c0   r13 = 0x5fbfa640   r14 = 0x1ecba670
    r15 = 0x5fbfa670   rip = 0x004a2c72   rsp = 0x5fbfa590   rbp = 0x1ecba5f0


So there are a few questions:
 - where do these messages, "invalid pixel format" and "invalid context" come from?
 - why are we still continuing doing GL stuff after such messages have been printed?
 - why are we jumping to bogus address 0x1F ?

In any case this looks like an OpenGL/Mac bug, not a WebGL bug. We should at least be allowed to turn WebGL on by default on platforms on which tests are successful, and that includes Linux!
Depends on: 595194
Filed the Mac crash separately: bug 595194.
Little update:
 - disable 1 test that was failing on Windows tryserver
 - disable 1 test that was failing on Mac tryserver
 - print success % at the end

Perhaps I'm optimistic, but right now it's looking like this might just be ready to land and enable WebGL by default. Just need to get a fully green tryserver run first.
Attachment #474023 - Attachment is obsolete: true
Attachment #474103 - Flags: review?(vladimir)
Attachment #474023 - Flags: review?(vladimir)
Updated:
 - implement actual commenting-out syntax (# at start of line) for failing_tests.txt. The count of failing tests was wrong otherwise.
 - since it appears that we don't have any ATI hardware in the test machines, I re-enabled the gl_PointSize test. Not yet tested on tryserver though.

Also, good news: the tryservers seem to be completely happy now, and we're still only at 59 files listed in failing_tests.txt which is not bad. Only 2 tests have a different status between different test machines. That may be thanks to having very homogeneous hardware across test machines (all mac mini with nvidia).
Attachment #474103 - Attachment is obsolete: true
Attachment #474188 - Flags: review?(vladimir)
Attachment #474103 - Flags: review?(vladimir)
Attachment #474188 - Attachment is patch: true
Attachment #474188 - Attachment mime type: application/octet-stream → text/plain
This does the same thing as the previous patch, but also switches us to using ANGLE on win32.
Attachment #475916 - Flags: review?(bjacob)
And this temporarily disables the test suite unless a pref is set; this way we can get the code in and work from a known point to fix the last issues with timeouts and similar.
Attachment #475917 - Flags: review?(bjacob)

Updated

9 years ago
Blocks: 596720

Updated

9 years ago
No longer blocks: 575738
Comment on attachment 475916 [details] [diff] [review]
switch to checking and using ANGLE

-    PRBool forceOSMesa = PR_FALSE;
+    PRBool forceOSMesa = PR_FALSE, preferEGL = PR_FALSE;
     prefService->GetBoolPref("webgl.force_osmesa", &forceOSMesa);
+    prefService->GetBoolPref("webgl.prefer_egl", &preferEGL);


no need anymore to do anything explicitly about osmesa.


+    if (!gl && forceOSMesa) {
+        gl = gl::GLContextProviderOSMesa::CreateOffscreen(gfxIntSize(width, height), format);
+        if (gl && !InitAndValidateGL()) {
+            LogMessage("WebGL Warning: You set the webgl.force_osmesa preference to true, "
+                       "but OSMesa can't be found or initialized.  Did you set webgl.osmesalib?");
+            gl = nsnull;
         }
+    }


same.



+          ok(false, 'Test expected to fail, but passed: ' + testsExpectedToFail[i]);


as we discussed, make that just a todo(false,...) so it doesn't cause test failures, as in the short term we have different tests failing on different platforms
Attachment #475916 - Flags: review?(bjacob) → review+
Ah! There is one more little part of my patch that didn't make it into yours: the support for commenting out lines in failing_tests.txt. It's really useful as otherwise, if you just count on free-form lines not having any effect, they still affect the count of tests-expected-to-fail, which is useful info to display. Here was the relevant part of my patch:

-  var testsExpectedToFail = loadTextFileSynchronous('failing_tests.txt').split('\n');
-  var testsSuccessesWithOSMesa = [];
+  var testsExpectedToFailRaw = loadTextFileSynchronous('failing_tests.txt').split('\n');
+  var testsExpectedToFail = [];
+  for (i = 0; i < testsExpectedToFailRaw.length; ++i) {
+    var str = testsExpectedToFailRaw[i];
+    if (str.length > 4 && str[0] != '#')
+      testsExpectedToFail.push(str);
+  }
+  var testsSuccesses = [];
Ah and yet one more part of my patch: displaying how many % tests passed at the end:


+        statusTextNode.textContent = 'Finished, ' +
+                                     Math.round(100 * this.totalSuccessful / this.totalTests) +
+                                     '% passed';
Attachment #475917 - Flags: review?(bjacob) → review+
--> betaN+ so we get this in a beta, but not holding B7 on this.
blocking2.0: beta7+ → betaN+
OK thanks. The only nontrivial remaining issue is figuring out how to shut up the intermittent timeout in bug 595352.
Landed disabled, there's probably still some things that I missed -- but let's close off this bug and open a new one for fixing those, this is getting too long already.  Tests are disabled unless an env var is set (for local running).

http://hg.mozilla.org/mozilla-central/rev/085538a08eb2

    - Vlad
Depends on: 608526
Bad news: bug 608526 makes the JS engine crash when running this mochitest.
Posted patch fix directX SDK detection (obsolete) — Splinter Review
3 things here:

 * cut -f 4 was failing when the path contains spaces; replaced 4 with 4-.
 * also look for the June 2010 SDK; let's make it easy for people to build with ANGLE, i.e. let's not require them to set env vars explicitly. Let me know if 'reg' allows using wildcards on a key name.
 * fix message output, which was referencing the undefined variable MOZ_DIRECTX_SDK
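The `cut -f 4` vs `cut -f 4-` difference can be illustrated as follows. This is only an approximation of cut's semantics (cut's default delimiter is a tab; here fields are split on whitespace runs), and the sample reg-query-style line is hypothetical, so the real field layout may differ:

```python
def field(line, n, to_end=False):
    """Approximate cut's field selection on whitespace-separated input:
    to_end=False mimics `cut -f N`, to_end=True mimics `cut -f N-`."""
    parts = line.split()
    return " ".join(parts[n - 1:]) if to_end else parts[n - 1]

# Hypothetical registry-query-style output line (for illustration only).
line = r"HKLM\SOFTWARE\Microsoft\DirectX    DXSDK_DIR    REG_SZ    " \
       r"C:\Program Files\Microsoft DirectX SDK (February 2010)"
```

With field 4 alone, a path containing spaces gets truncated at the first space; field 4 onward recovers the whole path, which is the point of the `4-` fix.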
Attachment #498806 - Flags: review?(vladimir)
Comment on attachment 498806 [details] [diff] [review]
fix directX SDK detection

Don't add the explicit June 2010 check -- the Feb 2010 check is only there because we don't want to depend on a new mozilla-build version for getting this going on tinderboxes.  Otherwise, users can just set INCLUDE/LIB in their environments for now until we update m-b to find the SDK.  We can also fix this check here to look in the DirectX registry key and enumerate all installed SDKs, possibly with a --with-directx-sdk option.  But none of that needs to happen for now, so I'd just fix the cut issue you were running into.
Attachment #498806 - Flags: review?(vladimir) → review-
ok, updated: no longer looking for June 2010
Attachment #498806 - Attachment is obsolete: true
Attachment #498813 - Flags: review?(vladimir)
Attachment #498813 - Attachment is patch: true
Attachment #498813 - Attachment mime type: application/octet-stream → text/plain
Several changes here:
 * extract d3dx9_42.dll from the .cab archive and install it
 * in configure.in, no longer clear the values of MOZ_ANGLE and MOZ_DIRECTX_SDK_PATH: I needed to pass them as environment variables.
 * use separate failing-tests lists for windows, mac, linux. There are differences even between linux and mac. This is actually good news, as it means that some failures are either driver bugs or us relying on undefined behavior (e.g. to set a particular GL error).
 * when loading these failing-tests files, convert to unix line breaks (the windows list at least needs to support windows line endings)
 * several improvements/simplifications to the mochitest
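The line-ending normalization amounts to stripping carriage returns. The mochitest does this in JS, so the shell command below is only an illustration of the same transformation (the file name is hypothetical):

```shell
# A failing-tests list saved with Windows (CRLF) line endings:
printf 'conformance/test-a.html\r\nconformance/test-b.html\r\n' > failing_tests_windows.txt

# Deleting '\r' yields Unix (LF) line endings; LF-only input passes through unchanged:
tr -d '\r' < failing_tests_windows.txt
```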
Attachment #500622 - Flags: review?(vladimir)
Bad news, tryserver still reports Windows failures:

 * on windows XP, it still can't load EGL, need to check the build log.

 * on windows 7, it crashes in the JS engine, in EnterMethodJIT():

http://tinderbox.mozilla.org/showlog.cgi?log=MozillaTry/1293930880.1293931937.13357.gz&fulltext=1#err0

I'll try to reproduce locally.
Comment on attachment 500622 [details] [diff] [review]
Final patch, should be green on all platforms...

>diff --git a/content/canvas/test/webgl/Makefile.in b/content/canvas/test/webgl/Makefile.in
>--- a/content/canvas/test/webgl/Makefile.in
>+++ b/content/canvas/test/webgl/Makefile.in
>@@ -41,18 +41,19 @@ srcdir		= @srcdir@
> VPATH		= @srcdir@
> relativesrcdir  = content/canvas/test/webgl
> 
> include $(DEPTH)/config/autoconf.mk
> include $(topsrcdir)/config/rules.mk
> _TEST_FILES = \
> 	test_webgl_conformance_test_suite.html \
> 	00_testFIXME_list.txt \
>-	failing_tests.txt \
>+	failing_tests_linux.txt \
>+	failing_tests_windows.txt \
>+        failing_tests_mac.txt \
> 	$(NULL)

Nit: don't use tabs here, use two-space indents for continuations in Makefiles.
>diff --git a/gfx/angle/Makefile.in b/gfx/angle/Makefile.in
>--- a/gfx/angle/Makefile.in
>+++ b/gfx/angle/Makefile.in
>@@ -143,42 +143,49 @@ endif
> ifdef BUILD_ANGLE
> ifdef MOZ_DEBUG
> ANGLE_DIR = Debug
> else
> ANGLE_DIR = Release
> endif
> 
> ifdef MOZ_DIRECTX_SDK_PATH
>-INCLUDE := $(INCLUDE);$(MOZ_DIRECTX_SDK_PATH)\include
>-LIB := $(LIB);$(MOZ_DIRECTX_SDK_PATH)\lib\x86
>+INCLUDE := $(INCLUDE);$(MOZ_DIRECTX_SDK_PATH)/include
>+LIB := $(LIB);$(MOZ_DIRECTX_SDK_PATH)/lib/x86
>+D3DX9_DLL := d3dx9_42.dll

Are we redistributing this file with Firefox? Is this a redistributable file? We should double-check the licensing if we are going to distribute it. (If this is test-only it might not matter.)
Ted:
 * applied your indentation comments
 * Vlad already asked Legal to look at this, I'll let him reply about that.

This new patch:
 * fixes a bug whereby we were not extracting and installing d3dx9_42.dll when the DirectX SDK was found in the registry
 * adds d3dx9_42.dll to the manifest so it's really packaged.

Unfortunately:
 * this still doesn't fix the issue on the WinXP tryserver. The build log confirms that d3dx9_42.dll is now packaged, and it still can't load libGLESv2.dll. Will try to work with releng people tomorrow on that.
 * there is a strange amount of variation between test runs on Windows 7; every time, a different set of tests fails (using ANGLE).

Only good news:
 * Windows 7 debug didn't crash this time (there used to be a JS engine crash).
Attachment #500622 - Attachment is obsolete: true
Attachment #500821 - Flags: review?(vladimir)
Attachment #500622 - Flags: review?(vladimir)
At gfx/angle/src/build_angle.gyp:189 we have:

  'AdditionalDependencies': ['d3dx9.lib'],

isn't that .lib a static library? Doesn't that mean that we link statically and hence don't need the DLL? Also, nowhere in the ANGLE sources is a d3dx9*.dll file mentioned.
(In reply to comment #78)
>  * this still doesn't fix the issue on the WinXP tryserver. The build log
> confirms that d3dx9_42.dll is now packaged, and it still can't load
> libGLESv2.dll. Will try to work with releng people tomorrow on that.

You can ask to borrow a WinXP Talos slave. Running the tests while viewing the display might be informative. You might also be able to run Firefox under procmon or dependency walker or something to find out why it's failing to load.
(In reply to comment #79)
> At gfx/angle/src/build_angle.gyp:189 we have:
> 
>   'AdditionalDependencies': ['d3dx9.lib'],
> 
> isn't that .lib a static library? Doesn't that mean that we link statically
> hence don't need the DLL? Also, nowhere in the ANGLE sources is a d3dx9*.dll
> file mentioned.


d3dx9.lib is an import library.
These builds seem to work fine (and _42 is the correct lib version, I checked); do the WinXP machines have hardware acceleration of any kind?  Are they VMs or physical machines?
Though I don't think XP ever shipped with even the base d3d9 runtime -- are these machines able to run our d3d9 accel?  (Viewing the gfx section of about:support on these machines would be useful)
Depends on: 622676
Filed JS engine bug about the crash with debug build on win7 tryserver: bug 622676
Comment on attachment 500821 [details] [diff] [review]
Updated yet again, still orange on windows...


>diff --git a/configure.in b/configure.in
>--- a/configure.in
>+++ b/configure.in
>@@ -6314,18 +6314,17 @@ if test -n "${MOZ_JAVAXPCOM}"; then
>   fi
> fi
> 
> dnl ========================================================
> dnl = ANGLE OpenGL->D3D translator for WebGL
> dnl = * only applies to win32
> dnl = * enabled by default (shipping build); requires explicit --disable to disable
> dnl ========================================================

Replace this with:

MOZ_ANGLE=${MOZ_ANGLE=}
MOZ_DIRECTX_SDK_PATH=${MOZ_DIRECTX_SDK_PATH=}

to leave the variables defined, but pulling in the env vars as necessary

>diff --git a/gfx/angle/Makefile.in b/gfx/angle/Makefile.in
>--- a/gfx/angle/Makefile.in
>+++ b/gfx/angle/Makefile.in
>@@ -143,42 +143,49 @@ endif
> ifdef BUILD_ANGLE
> ifdef MOZ_DEBUG
> ANGLE_DIR = Debug
> else
> ANGLE_DIR = Release
> endif
> 
> ifdef MOZ_DIRECTX_SDK_PATH
>-INCLUDE := $(INCLUDE);$(MOZ_DIRECTX_SDK_PATH)\include
>-LIB := $(LIB);$(MOZ_DIRECTX_SDK_PATH)\lib\x86
>+INCLUDE := $(INCLUDE);$(MOZ_DIRECTX_SDK_PATH)/include
>+LIB := $(LIB);$(MOZ_DIRECTX_SDK_PATH)/lib/x86
>+D3DX9_DLL := d3dx9_42.dll
> endif
> 
> ANGLE_DEP_PATTERNS = \
> 	src/common/*.cpp src/common/*.h \
> 	src/compiler/*.cpp src/compiler/*.h \
> 	src/compiler/preprocessor/*.cpp src/compiler/preprocessor/*.h \
> 	src/libEGL/*.cpp src/libEGL/*.h \
> 	src/libGLESv2/*.cpp src/libGLESv2/*.h \
> 	$(NULL)
> 
> ANGLE_DEPS = $(filter-out Gen_glslang.cpp Gen_glslang_tab.cpp glslang_tab.h,$(wildcard $(ANGLE_DEP_PATTERNS)))
> 
> libs:: libGLESv2.dll libEGL.dll
>-	$(INSTALL) $(IFLAGS2) libGLESv2.dll libEGL.dll $(DIST)/bin
>+	$(INSTALL) $(IFLAGS2) libGLESv2.dll libEGL.dll $(D3DX9_DLL) $(DIST)/bin
> 
> # we don't want this to attempt to parallel-build these dlls;
> # building one will build both.
> libGLESv2.dll: libEGL.dll
> 
> libEGL.dll: $(GLOBAL_DEPS) $(ANGLE_DEPS)
> 	@(echo "=== Building ANGLE via devenv.exe ===" \
> 	&& rm -rf angle-build && mkdir angle-build \
> 	&& cp -r $(srcdir)/src $(srcdir)/include angle-build \
> 	&& cd angle-build/src \
> 	&& echo "Upgrading solution..." \
> 	&& devenv angle.sln //upgrade \
> 	&& echo "Building solution, target $(ANGLE_DIR)|Win32..." \
> 	&& devenv angle.sln //useenv //build "$(ANGLE_DIR)|Win32" \
> 	&& echo "Copying dlls..." \
>-	&& cp $(ANGLE_DIR)/*.dll ../..)
>-
>+	&& cp $(ANGLE_DIR)/*.dll ../.. \
>+        && cd ../.. \
>+        && ( test ! $(D3DX9_DLL) \
>+             || ( echo "Extracting D3DX9 dll from cab file..." \
>+                  && expand "${MOZ_DIRECTX_SDK_PATH}"/Redist/Aug2009_d3dx9_42_x86.cab . -F:$(D3DX9_DLL) \
>+                ) \
>+           ) \
>+        )
> endif

So this is fine, but it will only be correct with the February 2010 SDK.  Newer SDKs might use a newer version of the d3dx9 dll.  In particular, this will likely break if someone has a newer SDK installed for local builds (such as one that they'd need for 2010 builds).  A better way would probably be to figure out the right library in configure.in, and use a #define for it.  You can get the right lib name by doing something like:

vladimir@lightning[661]$ dumpbin //headers /c/dxsdk/Lib/x86/d3dx9.lib | grep 'DLL name' | grep d3dx9_ | head -1  | sed 's,^.*: ,,'
d3dx9_42.dll

Also, this hardcodes x86, but it really shouldn't; we'll turn ANGLE on for x64 at some point, so may as well write this correctly the first time.
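A configure-time sketch of that suggestion. dumpbin only exists on Windows, so its output is simulated below; the variable name D3DX9_DLL is just illustrative, and the sed/grep pipeline is the one from the review comment:

```shell
# Simulated 'dumpbin //headers d3dx9.lib' output; on a real Windows build
# machine this string would come from running the actual command.
dumpbin_output='    DLL name : d3dx9_42.dll
    DLL name : dwrite.dll'

# Keep the first d3dx9_* import and strip everything up to the ': ' separator:
D3DX9_DLL=$(printf '%s\n' "$dumpbin_output" | grep 'DLL name' | grep d3dx9_ | head -1 | sed 's,^.*: ,,')
printf '%s\n' "$D3DX9_DLL"
```

This keeps the DLL version in one place, so a newer SDK (with, say, d3dx9_43.dll) would be picked up automatically instead of breaking the packaging step.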
Whiteboard: [hard blocker]
Whiteboard: [hard blocker] → [hardblocker]
Depends on: 624044
So, the last thing blocking this, namely the strange failures on the Win7 test slaves, is now understood: it was a bug in the old DirectX libraries installed on those test slaves. Upgrading DirectX and the NVIDIA driver fixed it (the NVIDIA driver alone didn't). So we're now just waiting on bug 624044.
No longer depends on: 624044
Whiteboard: [hardblocker] → [january 19] [hardblocker]
Whiteboard: [january 19] [hardblocker] → [january 14] [hardblocker]
This new version no longer tries to install DirectX runtime stuff, and forces usage of OpenGL. The goal is to land it ASAP and file a followup about running ANGLE on test slaves; that is blocked by bug 624044.
Attachment #500821 - Attachment is obsolete: true
Attachment #503273 - Flags: review?(vladimir)
Attachment #500821 - Flags: review?(vladimir)
http://hg.mozilla.org/mozilla-central/rev/2793d141b019

Filed followup to remove the prefer_gl: bug 625849
Status: NEW → RESOLVED
Last Resolved: 9 years ago
Resolution: --- → FIXED
Depends on: 629595
Attachment #462432 - Flags: review?(vladimir)

Updated 7 years ago
Depends on: 745880