Closed Bug 457753 (Opened 16 years ago, Closed 14 years ago)

Run unit tests on pre-existing release builds

Categories

(Release Engineering :: General, defect, P3)

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: tchung, Assigned: catlee)

References

Details

(Whiteboard: [Q4 goal] [1.9.2-fixed])

Attachments

(9 files, 5 obsolete files)

708 bytes, patch (ted: review+)
1.58 KB, patch (ted: review+; bhearsum: review+)
1.17 KB, patch (ted: review+)
3.80 KB, patch (bhearsum: review+; catlee: checked-in+)
1.21 KB, patch (ted: review+)
2.17 KB, patch (coop: review+; nthomas: checked-in+)
9.93 KB, patch (bhearsum: review+; ted: review+; catlee: checked-in+)
4.70 KB, patch (bhearsum: review+; catlee: checked-in+)
19.28 KB, patch (bhearsum: review+; catlee: checked-in+)
From my understanding, we currently do NOT run our tens of thousands of mochitests against the release candidates for branch and trunk builds. They are run on check-ins and nightly builds, but not on the RCs. Can we please get a system up so that running these is part of the release process before handing off to QA?
What's the difference between running these as part of the release process and running them after the final checkin (which is already done)? The code is the same, so the results should be the same.

We normally even do a clobber build with tests before officially starting the branching/tagging of a release.
(In reply to comment #1)
> What's the difference between running these as part of the release process and
> running them after the final checkin (which is already done)? The code is the
> same, so the results should be the same.
> 
> We normally even do a clobber build with tests before officially starting the
> branching/tagging of a release.

Hmm, I guess my real question is: do any of the configurations made in the release bits impact any of the clobber build bits? If so, it just seems right to automate testing against the build that we are finalizing for the public.

If not, then I guess it's a moot point to bring this up.
One way would be running this manually with the existing debug VM infrastructure that some of us have.

As an example, it would be doable for me to run these tests in my existing VM environment.
OS: Mac OS X → All
Hardware: PC → All
It would be better if it was set up with the other automation instead of having to special case this and have someone do it by hand every time.
The bits produced by the constant build process can be different from the milestone build process.  For the milestone build, different machines are used and somewhat different scripts are used IIRC.  We had a problem one time where the toolchain was modified unknowingly.  If there is some file access issue or a toolchain mod happens, the bits could be different and could fail an automated test.  I don't think we need to hold up smoke tests for a full automated run, but a full automated run seems like cheap insurance that _really_ nothing we could easily test was broken.  We should be thinking of preventing the worst-case scenario.
(In reply to comment #5)
> milestone build process.  For the milestone build, different machines are used
> and somewhat different scripts are used IIRC.  We had a problem one time where

You mean different VMs? John can correct me if I'm wrong, but I think they use the same toolchain now, just with different VMs. Given the problems in the past, there was a push to make the release 'machines' the same as the nightly 'machines' (in this case, VMs).

I can't speak to the scripts being different (I'm sure they are, but I'm not sure at what level).
We use the same scripts and processes for building releases as we do for nightlies. However, we maintain separate configuration files and mozconfigs. Tim is referring to one of the Firefox 3 Betas where we forgot to turn on PGO for win32 in the 'release' mozconfig.

It's hard to get around something like this because we do need extra/different things for releases, like --enable-official-branding.

Firefox 2/3 releases are built on different machines than the nightlies are. Going forward (3.1+) this will not be the case. Release builds use the exact same VMs as nightlies and dep builds.

I don't think we've ever had toolchain mismatch problems during my tenure here. GCC and friends get upgraded so rarely there just isn't the chance for that to happen.

Running unit tests and Talos on the actual bits we ship is tough in the Talos case, and impossible in the unittest case unless we can run tests on arbitrary builds. (https://bugzilla.mozilla.org/show_bug.cgi?id=421611)
The valid request here is that we automatically run some test suites on nightly builds, yet we don't run those same test suites on release builds.

Don't have time for this right now, so triaging to Future.
Component: Release Engineering → Release Engineering: Future
Summary: Run mochitests against branch and trunk releases → Run mochitests against branch and trunk release builds
Tweaking summary to cover all unittests, not just mochitest.

Noting that this is a Q3 goal.
Summary: Run mochitests against branch and trunk release builds → Run unit tests against branch and trunk release builds
From discussion with ted in irc, we should be able to do linux, win32 easily enough, but ted needs to do some work before we can run unittests on mac osx release builds.
(In reply to comment #10)
> From discussion with ted in irc, we should be able to do linux, win32 easily
> enough, but ted needs to do some work before we can run unittests on mac osx
> release builds.

Mac OS X packaging work is in bug 463605.
Depends on: 463605
Summary: Run unit tests against branch and trunk release builds → Run unit tests on pre-existing release builds
(In reply to comment #9)
> Noting that this is a Q3 goal.

Should this be un-futured then?
Lukas said she would take this on last week, as it is similar to the work she is doing for running unittests on nightly builds and on debug builds, but neither of us updated the bug with that info.

Moved from Future and assigned to Lukas, to match reality.
Assignee: nobody → lsblakk
Component: Release Engineering: Future → Release Engineering
Assignee: lsblakk → catlee
Priority: -- → P3
Blocks: 517212
'make package-tests' was failing on release builds because we set MOZ_PKG_PRETTYNAMES=1

This should fix that problem.
Attachment #403325 - Flags: review?(ted.mielczarek)
Comment on attachment 403325 [details] [diff] [review]
Make sure package directory exists before trying to package tests into it.

I guess I overlooked the MOZ_PKG_PRETTYNAMES case, despite putting PKG_PATH in there.
Attachment #403325 - Flags: review?(ted.mielczarek) → review+
Comment on attachment 403325 [details] [diff] [review]
Make sure package directory exists before trying to package tests into it.

Pushed to m-c:
http://hg.mozilla.org/mozilla-central/rev/bb2505a80730
Attachment #403325 - Flags: checked-in+
Attachment #403599 - Flags: review?(ted.mielczarek)
Attachment #403599 - Flags: review?(bhearsum)
We need to print out the URLs of the files so that the builder can capture them and include them in a sendchange, so the test builders know where to download them from.

Also, we want to keep the crash symbols around; they're also used when running tests to give stack traces with symbols.
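
For illustration, here's a minimal Python sketch of the handoff being described: the upload step prints one URL per uploaded file with a recognizable prefix, and the builder scrapes those lines out of the step's log to feed a sendchange to the test masters. This is not the actual post_upload.py or factory code; the host, directory layout, and prefix below are made up.

# Hypothetical sketch of the upload/sendchange handoff; not the real post_upload.py.
import posixpath

BASE_URL = "http://example-stage.mozilla.org/pub"  # illustrative host only

def print_upload_urls(uploaded_files, remote_dir):
    """Print one URL per uploaded file so the builder can parse them from the
    step's log and include them in a sendchange for the test builders."""
    for path in uploaded_files:
        url = posixpath.join(BASE_URL, remote_dir, posixpath.basename(path))
        # A fixed prefix makes the URLs easy to scrape out of the log.
        print("POST_UPLOAD_URL: %s" % url)

if __name__ == "__main__":
    print_upload_urls(
        ["dist/firefox-3.6b3.tar.bz2",
         "dist/firefox-3.6b3.crashreporter-symbols.zip"],  # keep crash symbols too
        "firefox/candidates/3.6b3-candidates/build1/linux-i686/en-US")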
Attachment #403600 - Flags: review?(bhearsum)
Attachment #403599 - Flags: review?(ted.mielczarek) → review+
Comment on attachment 403599 [details] [diff] [review]
Upload crash symbols to the right place in release builds

This seems fine. Can we hold off on landing it until after 3.5.4/3.6b1? The last thing we need is more complications with those releases =\
Attachment #403599 - Flags: review?(bhearsum) → review+
Comment on attachment 403600 [details] [diff] [review]
post_upload fixes for release builds

Looks fine to me
Attachment #403600 - Flags: review?(bhearsum) → review+
Comment on attachment 403325 [details] [diff] [review]
Make sure package directory exists before trying to package tests into it.

I think this patch busted up the try server mac builds with errors like:
make -C /builds/slave/sendchange-macosx-hg/build/objdir/ppc UNIVERSAL_BINARY= package-tests
make[2]: Nothing to be done for `package-tests'.
make -C /builds/slave/sendchange-macosx-hg/build/objdir/i386 UNIVERSAL_BINARY= package-tests
make[2]: Nothing to be done for `package-tests'.
cp /builds/slave/sendchange-macosx-hg/build/objdir/ppc/dist/test-package-stage/mochitest/automation.py \
           /builds/slave/sendchange-macosx-hg/build/objdir/i386/dist/test-package-stage/mochitest/
cp: directory /builds/slave/sendchange-macosx-hg/build/objdir/i386/dist/test-package-stage/mochitest does not exist

I'm going to push a backout of this to try as a test.
That was bug 518641, FWIW. I'm pushing a fix.
Attachment #403325 - Flags: approval1.9.2+
Attachment #403599 - Flags: approval1.9.2+
Attachment #403600 - Attachment is obsolete: true
Attachment #408593 - Flags: review?(ted.mielczarek)
Attachment #408593 - Flags: approval1.9.2?
installdmg.sh needs to use quotes around the dmg filename since it has spaces in it for release builds.

post_upload.py needs to know how to output URLs for release builds.
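
To illustrate the quoting pitfall in Python terms (this is not the actual installdmg.sh fix, and the filename below is made up): release DMGs use pretty names containing spaces, so interpolating the name into a shell command without quoting splits it into several words.

# Hypothetical illustration of the space-in-filename problem; not installdmg.sh itself.
import pipes  # use shlex.quote on Python 3

dmg = "Firefox 3.6 Beta 3.dmg"  # made-up pretty-named release DMG

# Broken: the spaces split the name into four separate shell words.
broken = "hdiutil attach -mountpoint /tmp/mnt %s" % dmg
# Safe: quote the interpolated filename before handing it to the shell.
safe = "hdiutil attach -mountpoint /tmp/mnt %s" % pipes.quote(dmg)

print(broken)
print(safe)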
Attachment #408594 - Flags: review?(bhearsum)
Attachment #408594 - Flags: review?(bhearsum) → review+
Attachment #408593 - Flags: review?(ted.mielczarek)
Attachment #408593 - Flags: review+
Attachment #408593 - Flags: approval1.9.2?
Attachment #408593 - Flags: approval1.9.2+
Whiteboard: Q4 goal
Attachment #409078 - Flags: review?(bhearsum)
Attachment #409128 - Flags: review?(bhearsum)
Comment on attachment 403599 [details] [diff] [review]
Upload crash symbols to the right place in release builds

Pushed to m-c:
http://hg.mozilla.org/mozilla-central/rev/0c58b4c74e3f
Attachment #403599 - Flags: checked-in+
Comment on attachment 408593 [details] [diff] [review]
Need quotes around test package name, since it has spaces on OSX

Pushed to m-c:
http://hg.mozilla.org/mozilla-central/rev/1e9a2f7d1577
Attachment #408593 - Flags: checked-in+
Attachment #409128 - Flags: review?(bhearsum) → review+
Comment on attachment 409128 [details] [diff] [review]
Configs for enabling tests on release builds

This all looks good to me!
Attachment #409078 - Flags: review?(bhearsum) → review+
Attachment #410237 - Flags: review?(ted.mielczarek)
Attachment #410237 - Flags: checked-in?
Attachment #410237 - Flags: approval1.9.2?
Attachment #410237 - Flags: review?(ted.mielczarek)
Attachment #410237 - Flags: review+
Attachment #410237 - Flags: approval1.9.2?
Attachment #410237 - Flags: approval1.9.2+
Comment on attachment 410237 [details] [diff] [review]
Need to make sure $(DIST)/$(PKG_PATH) exists before trying to zip files into it.

http://hg.mozilla.org/mozilla-central/rev/c2f1caae085f
Attachment #410237 - Flags: checked-in? → checked-in+
Comment on attachment 408594 [details] [diff] [review]
tools updates for tests on release builds

changeset: 418:5dc265010f4a
Attachment #408594 - Flags: checked-in+
Running l10n verify for 3.6b3 we hit
  Unknown package type for file: firefox-3.6b3-build1/linux-i686/en-US/firefox-3.6b3.crashreporter-symbols.zip
This patch excludes the crash symbols using 'rsync -n' so it'll do the job. Could you land it, reconfig, and force l10n verify on pm? Or pass it to coop if he's on the hook for this release too.
Attachment #412539 - Flags: review?(catlee)
Attachment #412539 - Flags: review?(catlee) → review?(ccooper)
Attachment #412539 - Flags: review?(ccooper) → review+
Comment on attachment 412539 [details] [diff] [review]
Exclude crash symbols when downloading for l10n verify

http://hg.mozilla.org/build/buildbotcustom/rev/d541e1126582
Attachment #412539 - Flags: checked-in+
Attachment #409078 - Attachment is obsolete: true
Attachment #415621 - Flags: review?(bhearsum)
Comment on attachment 415621 [details] [diff] [review]
ReleaseFactory changes for doing tests


>+        for master, warn in self.unittestMasters:
>+            self.addStep(SendChangeStep(
>+             name='sendchange_%s' % master,
>+             warnOnFailure=warn,
>+             master=master,
>+             branch=self.unittestBranch,
>+             revision=WithProperties("%(got_revision)s"),
>+             files=[WithProperties('%(packageUrl)s')],
>+             user="sendchange-unittest")
>+            )
>+

You'll need to do 'for master, warn, retries in self.unittestMasters' since the patches from bug 532228 landed yesterday.

Looks fine otherwise.
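
Roughly, the requested change amounts to unpacking a three-tuple per master instead of a pair. A minimal sketch, with stand-in master names rather than the real buildbotcustom config:

# Stand-in data; the real unittestMasters entries live in the buildbot configs.
unittestMasters = [
    ("test-master1.example.com:9009", True, 2),   # (master, warnOnFailure, retries)
    ("test-master2.example.com:9009", False, 2),
]

for master, warn, retries in unittestMasters:
    # The real code builds a SendChangeStep here; printing keeps the sketch runnable.
    print("sendchange to %s (warnOnFailure=%s, retries=%d)" % (master, warn, retries))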
Attachment #415621 - Flags: review?(bhearsum) → review-
Attachment #415621 - Attachment is obsolete: true
Attachment #415624 - Flags: review?(bhearsum)
Attachment #415624 - Attachment is patch: true
Attachment #415624 - Attachment mime type: application/octet-stream → text/plain
Attachment #409128 - Attachment is obsolete: true
Attachment #415625 - Flags: review?(ted.mielczarek)
Attachment #415625 - Flags: review?(bhearsum)
Comment on attachment 415625 [details] [diff] [review]
mozconfigs for turning on --enable-tests on moz-central and moz-1.9.2 release builds

I really do wish you'd leave --enable-tests out of the mozconfigs, since it's the default, and people will copy your mozconfigs and leave it in as cruft until the end of time, but I'm not going to r- you for that.
Attachment #415625 - Flags: review?(ted.mielczarek) → review+
Attachment #415624 - Flags: review?(bhearsum) → review+
Attachment #415625 - Flags: review?(bhearsum) → review+
Last iteration, I hope.

This one just excludes the *.tests.tar.bz2 files in the l10n_verification step.  It doesn't seem to hurt anything, but I'd rather be safe.
Attachment #415624 - Attachment is obsolete: true
Attachment #415855 - Flags: review?(bhearsum)
Comment on attachment 415625 [details] [diff] [review]
mozconfigs for turning on --enable-tests on moz-central and moz-1.9.2 release builds

changeset:   1815:3d69b07fbf34
Attachment #415625 - Flags: checked-in+
Attachment #415855 - Flags: review?(bhearsum) → review+
Comment on attachment 415876 [details] [diff] [review]
Release master changes to do unittests on release builds

I'm wondering if it makes more sense to combine talosTestPlatforms and unittestPlatforms. Is there a time when we want one but not the other?
(In reply to comment #44)
> (From update of attachment 415876 [details] [diff] [review])
> I'm wondering if it makes more sense to combine talosTestPlatforms and
> unittestPlatforms. Is there a time when we want one but not the other?

I don't know if we'd ever want to run 1.9.1 builds on Talos... we definitely won't be running unittests on those builds though; too much to backport.

Also, various mobile platforms or future release platforms (like 64-bit linux) may have mismatched unittest / talos support.
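
A minimal sketch of why keeping the two lists separate helps; the branch settings below are illustrative, not the real release master config:

# Made-up per-branch settings showing mismatched Talos/unittest coverage.
BRANCHES = {
    "mozilla-1.9.1": {"talosTestPlatforms": ["linux", "win32", "macosx"],
                      "unittestPlatforms": []},   # unittest support not backported
    "mozilla-1.9.2": {"talosTestPlatforms": ["linux", "win32", "macosx"],
                      "unittestPlatforms": ["linux", "win32", "macosx"]},
}

for branch, cfg in sorted(BRANCHES.items()):
    print(branch, cfg["talosTestPlatforms"], cfg["unittestPlatforms"])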
Comment on attachment 415876 [details] [diff] [review]
Release master changes to do unittests on release builds

ok, looks good to me!
Attachment #415876 - Flags: review?(bhearsum) → review+
So this will work for trunk and 1.9.2 release builds or are those already active?
(In reply to comment #47)
> So this will work for trunk and 1.9.2 release builds or are those already
> active?

Once we turn this on, it will be for releases done off of the 1.9.2 and trunk branches.
Comment on attachment 415855 [details] [diff] [review]
ReleaseFactory changes for doing tests

changeset:   552:ad98ad43d718
Attachment #415855 - Flags: checked-in+
Comment on attachment 415876 [details] [diff] [review]
Release master changes to do unittests on release builds

changeset:   1862:8b06758c28b8
Attachment #415876 - Flags: checked-in+
Successfully run on 3.6rc1 build!

All results green except for a hang on win32 mochitest-browser-chrome:

Running chrome://mochikit/content/browser/browser/components/sessionstore/test/browser/browser_394759_privatebrowsing.js...
TEST-PASS | chrome://mochikit/content/browser/browser/components/sessionstore/test/browser/browser_394759_privatebrowsing.js | sessionstore.js was removed

command timed out: 1200 seconds without output
program finished with exit code 1
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
A 2nd run of this went green.  The timeout looks like bug 518970.
Should we ask mstange to watch the Firefox3.6-Release tree with tbpl? That should make the results more visible to tree watchers.
Depends on: 538457
Whiteboard: Q4 goal → [Q4 goal] [1.9.2-fixed]
Product: mozilla.org → Release Engineering