Closed Bug 467988 Opened 16 years ago Closed 15 years ago

Speed up canvas mochitests

Categories

(Core :: Graphics: Canvas2D, enhancement)


Tracking

RESOLVED FIXED
mozilla1.9.2a1
Tracking Status
status1.9.1 --- ?

People

(Reporter: roc, Assigned: wesongathedeveloper)

References

Details

Attachments

(2 files, 6 obsolete files)

I was running mochitests and noticed we spend tons of time in the canvas mochitests. There are a huge number of files each with just one or a few tests. Seems like we could speed things up a lot by putting all the tests in one big file. (This is all auto-generated anyway.) We'd still want to create a new canvas object for each test, but that should be pretty cheap.
Alternately, this could live in whatever component the tests actually belong to. (somewhere in content?)
Component: Testing → Mochitest
Product: Core → Testing
QA Contact: testing → mochitest
Version: Trunk → unspecified
Component: Mochitest → Layout: Canvas
Product: Testing → Core
QA Contact: mochitest → layout.canvas
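The "new canvas object for each test" idea from comment 0 can be sketched as below. The helper name is an assumption, and the DOM stub exists only so the sketch runs outside a browser; in the real mochitest page, `document` is simply the page's document.

```javascript
// Stand-in for the DOM when run outside a browser (assumption: in the
// actual test page, the real `document` is used instead).
var document = (typeof window !== "undefined") ? window.document : {
  createElement: function (tag) {
    return {
      tagName: tag, width: 0, height: 0,
      getContext: function () { return {}; }
    };
  }
};

function withNewCanvas(testFn) {
  // A fresh canvas keeps each test independent, and element creation is
  // cheap compared to the per-file page-load overhead being removed.
  var canvas = document.createElement("canvas");
  canvas.width = 100;
  canvas.height = 50;
  testFn(canvas, canvas.getContext("2d"));
}

var canvases = [];
withNewCanvas(function (canvas, ctx) { canvases.push(canvas); });
withNewCanvas(function (canvas, ctx) { canvases.push(canvas); });
console.log(canvases[0] !== canvases[1]); // each test gets its own canvas
```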
Attached file Unified Canvas Mochitests (obsolete) —
This HTML file contains all the tests in the content/canvas/tests path except test_bug397524.html. For some reason, this unified file executes about 2400 tests in total, while running the mochitests as part of the build process reports about 2800. What could the reason for this be? (The excluded file doesn't appear to account for the difference.)
It looks like there are some syntax errors ("if (!_deferred) });") which may prevent some tests from being run.
Attached file Unified Canvas Mochitests (obsolete) —
All the /content/canvas/test canvas test scripts are in the attached file. Running this test file currently yields:
Passed: 2854
Failed: 0
Todo: 516

In comparison, running the tests through "make" results in:
Passed: 2870
Failed: 0
Todo:   520

I am not sure what the reason for the discrepancy is - the code from all the files seems to be running.
If you run with the right logging level (run mochitests.py with --help to see the various options), you should get log output for all tests, pass or fail. Diffing those logs should show you what the differences are.
Attached file Unified Canvas Mochitests (obsolete) —
This attachment currently runs (and passes) all the canvas mochitests enabled for the Windows build. The Makefile in /content/canvas/test has some rules for including/excluding tests based on the platform. Does it make sense to comment out (from the unified file) all the tests that receive special treatment in the Makefile, so that the unified file runs only the tests that should run on ALL platforms? (In that case the platform-specific test files would still be needed.)
Attachment #370807 - Attachment is obsolete: true
Attachment #372420 - Attachment is obsolete: true
You're generating this combined file offline and checking the whole thing in, right? Given that, I think that makes sense. In the future, it might be nice to be able to make them todo(). We could add a reftest-like fails-if construct using the nsIXULRuntime.widgetToolkit attribute I added a while ago.
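The reftest-like fails-if idea might look something like the sketch below. The helper names and stubbed values are assumptions; only the nsIXULRuntime.widgetToolkit attribute is from the comment above, and in-product the toolkit string would come from that interface rather than a local variable.

```javascript
// Stubbed for the sketch; in-product this would be read from
// nsIXULRuntime.widgetToolkit (via the @mozilla.org/xre/app-info;1
// service), as mentioned in the comment above.
var widgetToolkit = "windows";

// Minimal stand-ins for SimpleTest's ok()/todo(), recording results so
// the sketch is self-contained.
var results = [];
function ok(cond, name)   { results.push((cond ? "PASS " : "FAIL ") + name); }
function todo(cond, name) { results.push((cond ? "UNEXPECTED-PASS " : "TODO ") + name); }

// fails-if: when the condition matches this platform, report the result
// through todo() instead of ok(), so a known failure doesn't go orange.
function check(failsIf, cond, name) {
  (failsIf ? todo : ok)(cond, name);
}

check(widgetToolkit == "windows", false, "test_known_windows_failure");
check(widgetToolkit == "cocoa", true, "test_everywhere_pass");
console.log(results.join("\n"));
// TODO test_known_windows_failure
// PASS test_everywhere_pass
```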
This looks great, though I think we need to wrap each test invocation function call in a try/catch -- otherwise if one throws for some reason, nothing after it will get run.

Also, would be good to have just one copy of the isPixel function, and to remove the bogus stringified location/color arguments (and just print the values directly), but that's not crucial.
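A single shared isPixel along those lines might look like this sketch: one copy of the helper, printing the actual and expected values directly rather than taking pre-stringified arguments. The `ok` stub and the fake 2d context are assumptions included only so the sketch runs standalone; in the harness, `ok` is SimpleTest's assertion and `ctx` is a real CanvasRenderingContext2D.

```javascript
// Stand-in for SimpleTest's ok(); throws so a mismatch is visible here.
function ok(cond, msg) { if (!cond) throw new Error(msg); }

// One shared pixel check: read the pixel via getImageData and build the
// failure message from the live values, not stringified arguments.
function isPixel(ctx, x, y, r, g, b, a, tolerance) {
  var px = ctx.getImageData(x, y, 1, 1).data;
  var pass = Math.abs(px[0] - r) <= tolerance &&
             Math.abs(px[1] - g) <= tolerance &&
             Math.abs(px[2] - b) <= tolerance &&
             Math.abs(px[3] - a) <= tolerance;
  ok(pass, "pixel at (" + x + "," + y + ") is [" +
           px[0] + "," + px[1] + "," + px[2] + "," + px[3] +
           "], expected [" + r + "," + g + "," + b + "," + a +
           "] +/- " + tolerance);
}

// Fake 2d context so the sketch runs outside a browser (assumption).
var fakeCtx = {
  getImageData: function () { return { data: [255, 0, 0, 255] }; }
};
isPixel(fakeCtx, 0, 0, 255, 0, 0, 255, 0); // matches: no failure reported
```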
Attached patch Unified Canvas Mochitests (obsolete) — Splinter Review
Each of the individual tests is now enclosed in a try-catch block to ensure that all tests run
Attachment #375446 - Attachment is obsolete: true
Attachment #376531 - Flags: review?(roc)
Here's the makefile that would accompany the unified tests file
Attachment #376532 - Flags: review?(roc)
Attachment #376531 - Flags: review?(roc) → review+
Comment on attachment 376531 [details] [diff] [review]
Unified Canvas Mochitests

I'm not sure if I'm the right reviewer here. However, this looks fine to me, Vlad's looked at it, and I want this checked in, so here we go...
Attachment #376532 - Flags: review?(roc) → review+
Attachment #376531 - Flags: superreview?(vladimir)
Attachment #376532 - Flags: superreview?(vladimir)
Attached patch Unified Canvas Mochitests (obsolete) — Splinter Review
Updated unified tests patch with changes from trunk
Attachment #376531 - Attachment is obsolete: true
Attachment #376532 - Attachment is obsolete: true
Attachment #388093 - Flags: superreview?(vladimir)
Updated unified tests patch with changes from trunk, and replaced MochiKit.js with packed.js (although not all the tests used packed.js).
Attachment #388093 - Attachment is obsolete: true
Attachment #388099 - Flags: superreview?(vladimir)
Attachment #388093 - Flags: superreview?(vladimir)
Attachment #388099 - Flags: superreview?(vladimir) → review?(roc)
Attachment #388099 - Flags: review?(roc) → review+
Keywords: checkin-needed
Whiteboard: [needs landing]
(In reply to comment #0)

> Seems like we could speed things up a lot by putting all the tests in one big
> file.

Do you have figures to compare?

> (This is all auto-generated anyway.)

I was going to say that a 25k-line test file is not great to maintain.
But it seems fine if it's "auto-generated": where/how are these tests maintained?

I think the try+catch should explicitly report/include the exception.

> We'd still want to create a new
> canvas object for each test, but that should be pretty cheap.

Fwiw, the reftests cache canvases now...
Assignee: nobody → wesongathedeveloper
Severity: normal → enhancement
Status: NEW → ASSIGNED
Version: unspecified → Trunk
Reftest canvas caching is completely unrelated.
I realized we need a followup patch to delete all the test files that are no longer used.
http://hg.mozilla.org/mozilla-central/rev/5d5ef9c4b2c1

Thanks!!!
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Whiteboard: [needs landing]
> though I think we need to wrap each test invocation function call in a try/catch

Doesn't that mean that we won't catch it when one of the tests fails due to throwing an exception?  Or are we making sure that the catch blocks do something that would trigger a test failure?
It looks like this was implemented sanely:
+ try {
+  test_2d_imagedata_coercion();
+ } catch (e) {
+  ok(false, "unexpected exception thrown in: test_2d_imagedata_coercion");
+ }
(In reply to comment #14)
> I think the try+catch should explicitly report/include the exception.

(In reply to comment #19)
> It looks like this was implemented sanely:

Yes, except it could use the 'diag' parameter to forward 'e'...

> + } catch (e) {
> +  ok(false, "unexpected exception thrown in: test_2d_imagedata_coercion");
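Forwarding the exception through ok()'s diagnostic argument, as suggested, might look like the sketch below. The `runTest` wrapper name is hypothetical, and the `ok` stub (recording results to an array) stands in for SimpleTest's ok(condition, name, diag) so the sketch is self-contained.

```javascript
// Stand-in for SimpleTest's ok(cond, name, diag), recording each call.
var log = [];
function ok(cond, name, diag) {
  log.push({ pass: !!cond, name: name, diag: diag });
}

// Hypothetical wrapper for each generated test invocation: catch the
// exception and forward its text as the diagnostic, so the failure log
// says *why* the test blew up, not just that it did.
function runTest(testFn) {
  try {
    testFn();
  } catch (e) {
    ok(false, "unexpected exception thrown in: " + testFn.name, String(e));
  }
}

function test_2d_imagedata_coercion() {
  throw new TypeError("boom");
}
runTest(test_2d_imagedata_coercion);
console.log(log[0].diag); // "TypeError: boom"
```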
Attachment #394992 - Flags: review?(roc)
Attachment #394992 - Flags: review?(roc) → review+
Attachment #388099 - Attachment description: Unified Canvas Mochitests → Unified Canvas Mochitests [Checkin: Comment 17]
status1.9.1: --- → ?
Flags: in-testsuite+
Target Milestone: --- → mozilla1.9.2a1
Shouldn't the unused test files be deleted from mozilla-central as well?
Yes, we need to check your patch in.
Keywords: checkin-needed
Whiteboard: [needs landing]
Comment on attachment 394992 [details] [diff] [review]
Delete unused test files [checked in]

http://hg.mozilla.org/mozilla-central/rev/e5761c66aa50
Attachment #394992 - Attachment description: Delete unused test files → Delete unused test files [checked in]
Keywords: checkin-needed
Whiteboard: [needs landing]
Blocks: 572581