
Generate nightly and per-checkin AddressSanitizer [ASAN] builds on mozilla-central (and try?)

RESOLVED FIXED

Status

Release Engineering
General Automation
P4
normal
RESOLVED FIXED
5 years ago
4 years ago

People

(Reporter: decoder, Assigned: joduinn)

Tracking

(Blocks: 1 bug, {sec-want})

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [sg:want][asan-build-blocker])

Attachments

(3 attachments)

(Reporter)

Description

5 years ago
One of our goals in security this quarter is to work towards builds of Firefox that are compiled with AddressSanitizer (a compile-time instrumentation that detects memory-safety bugs much like Valgrind does, but is a lot faster).

With the configurations being added in bug 753135, these builds already work on the try servers. We would like to have these built and uploaded regularly. Having this for Linux 64-bit debug+opt would be a good start for now (we would like to have Mac builds later too, but there are still other problems that need to be tackled first).
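For context, enabling ASan in a Firefox build boils down to a few mozconfig options along these lines (a sketch from memory; the exact flags for the era are on the MDN page linked later in this bug and may differ):

```sh
# Hypothetical mozconfig sketch for a Linux64 ASan build.
# Requires a sufficiently recent Clang toolchain.
ac_add_options --enable-address-sanitizer
# jemalloc must be disabled so ASan can intercept all allocations:
ac_add_options --disable-jemalloc
# The in-process crash reporter interferes with ASan's own reports:
ac_add_options --disable-crashreporter
```

The same options apply to both opt and debug variants; a debug ASan build would add `--enable-debug` on top.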

As for our test suites, the builds are not green yet, but running the tests is not one of our immediate goals; it is one of the next steps once we have builds (ideally, at some point all tests are green and we can turn this on for developers, so we immediately see when we have regressed memory safety somewhere). Joel Maher mentioned this should be possible by hiding the builds on tbpl for now.

I'm not an expert on the whole build infrastructure thing, so I've cc'ed Joel who might be able to correct me in case I mentioned something wrong or forgot something :)
I'm not sure if we're going to be able to get to this this quarter.

Do you need to run unittests and talos as part of this? Or are there tests in-tree that run as part of 'make check'?
Priority: -- → P4
(Reporter)

Comment 2

5 years ago
(In reply to Chris AtLee [:catlee] from comment #1)

> Do you need to run unittests and talos as part of this? Or there are tests
> in-tree that run as part of 'make check'?

Right now, we don't need any tests, as that is not part of our goal this quarter (though make check will probably run anyway). Later, we would like to have all tests running, except performance tests like Talos, because I believe there is little value in running those on ASan builds. The main reason for running these tests later is to find regressions in memory safety.
per mtg w/abillings just now:

1) "regularly" in this context means *not* per checkin. Whether these are needed per-night or per-week is open for discussion, depending on the amount of time needed for fuzz testing. Unclear how long these ASAN builds need to be retained.

2) these ASAN (Address Sanitizer) builds are already available on try, but per discussion w/abillings, some people are still building these ASAN builds by hand anyway. Unknown why. If there are any problems with the ASAN builds on try, can you please file bugs so we can fix before we use same process for (1)?

3) the only reason to have these ASAN builds is to do this additional ASAN-type fuzzing. There is no need to run unittest/talos on these ASAN builds.

4) once these ASAN builds are available, we can schedule these ASAN-type-fuzzer jobs as idle-time jobs in our production RelEng pools... just like we already do for jesse's other fuzzer jobs.

5) unclear which OSes it makes sense to do ASAN builds on. Linux64?/linux32? seems most likely, but unclear if other OSes are workable at this time, based on reported bugs with the ASAN tools.
> 3) the only reason to have these ASAN builds is to do these additional
> ASAN-type-fuzzing. There is no need to run unittest/talos on these ASAN
> builds.

Are we sure this is the case? I think there have been situations where unittests run on ASAN builds have turned up problems before, and having the unittests run occasionally as well would be of benefit.
I said that we may wish to run mochitests (and possibly unittests) but I will defer to Christian and others here. I don't think we'll be running Talos, no matter what.
(Reporter)

Comment 6

5 years ago
(In reply to John O'Duinn [:joduinn] from comment #3)
> per mtg w/abillings just now:
> 
> 1) "regularly" in this context means *not* per checkin. Whether these are
> needed per-night or per-week is open for discussion, depending on the amount
> of time needed for fuzz testing. Unclear how long these ASAN builds need to
> be retained.

This is correct and we can discuss on the build cycle. It would be nice to retain the builds at least as long as try does right now.

> 
> 2) these ASAN (Address Sanitizer) builds are already available on try, but
> per discussion w/abillings, some people are still building these ASAN builds
> by hand anyway. Unknown why. If there are any problems with the ASAN builds
> on try, can you please file bugs so we can fix before we use same process
> for (1)?

No, the try builds we're doing are fine. The reasons why some people are still building are twofold. First of all, the necessary documentation was missing until the MDN wiki move completed but I updated it now to include try build information: https://developer.mozilla.org/en/Building_Firefox_with_Address_Sanitizer


> 
> 3) the only reason to have these ASAN builds is to do these additional
> ASAN-type-fuzzing. There is no need to run unittest/talos on these ASAN
> builds.
> 

This is definitely wrong; we want to run all of the unit tests we have, excluding the performance tests (that is Talos, I think, correct?). We have had several regressions so far in Mochitests and xpcshell tests, so we want continuous testing using these builds. The performance here is a lot better compared to Valgrind, btw.

> 4) once these ASAN builds are available, we can schedule these
> ASAN-type-fuzzer jobs as idle-time jobs in our production RelEng pools...
> just like we already do for jesse's other fuzzer jobs.
> 

For Jesse's tools we can and should do that, correct :) Especially domfuzzer is a primary target of interest on these builds. But I'm not sure if this is dependent on getting the builds from you, since Jesse can pull builds from anywhere, including the try builds I publish online (he added support for that).


> 5) unclear what OS it makes sense to do ASAN builds on. Linux64?/linux32?
> seems most likely, but unclear if other OS are workable at this time based
> on reported bugs with ASAN tools.

OSes we would like to target now are Linux 64 and Mac OSX 64. 32 bit has less priority right now because there are several problems with stack space that prevent it from working properly on all tests. Windows is entirely unsupported to my knowledge.
(In reply to Christian Holler (:decoder) from comment #6)
> (In reply to John O'Duinn [:joduinn] from comment #3)
> > 1) "regularly" in this context means *not* per checkin. Whether these are
> > needed per-night or per-week is open for discussion, depending on the amount
> > of time needed for fuzz testing. Unclear how long these ASAN builds need to
> > be retained.
> 
> This is correct and we can discuss on the build cycle. It would be nice to
> retain the builds at least as long as try does right now.

 I think a number of us, including QA, would like to see "nightly" mozilla central ASAN builds so we don't need to generate a try build (or actually two of them) every time we want to verify an ASAN bug fix. I told John that I'd expect keeping them for either 60 or 90 days would be enough, given their large size.
 
> > 2) these ASAN (Address Sanitizer) builds are already available on try, but
> > per discussion w/abillings, some people are still building these ASAN builds
> > by hand anyway. Unknown why. If there are any problems with the ASAN builds
> > on try, can you please file bugs so we can fix before we use same process
> > for (1)?
> 
> No, the try builds we're doing are fine. The reasons why some people are
> still building are twofold. First of all, the necessary documentation was
> missing until the MDN wiki move completed but I updated it now to include
> try build information:
> https://developer.mozilla.org/en/Building_Firefox_with_Address_Sanitizer

 I think people have been unclear about making sure the symbols match up with downloaded builds since, without these, ASAN builds are useless.

> > 4) once these ASAN builds are available, we can schedule these
> > ASAN-type-fuzzer jobs as idle-time jobs in our production RelEng pools...
> > just like we already do for jesse's other fuzzer jobs.
> > 
> 
> For Jesse's tools we can and should do that, correct :) Especially domfuzzer
> is a primary target of interest on these builds. But I'm not sure if this is
> dependent on getting the builds from you, since Jesse can pull off builds
> from anywhere, including the try builds I publish online (he added support
> for that).

 John is arguing here that we may be able to avoid building a fuzzing cluster of our own if ALL of our fuzzing work (including under ASAN) could run during idle periods on existing infrastructure, as Jesse's fuzzer jobs already do.
(In reply to Christian Holler (:decoder) from comment #6)
> We have had several regressions so far on Mochitests and xpcshell tests
> so we want continuous testing using these builds.

Continuous testing will catch both newly introduced security bugs and bugs that would make the builds useless for fuzzing.
(In reply to John O'Duinn [:joduinn] from comment #3)
> 1) "regularly" in this context means *not* per checkin. Whether these are
> needed per-night or per-week is open for discussion, depending on the amount
> of time needed for fuzz testing. Unclear how long these ASAN builds need to
> be retained.

Ultimately we DO want these run per check-in with tests and published on tbpl. The builds run fast enough (unlike Valgrind) and will catch regressions. I've already got Christian running our full test suites manually on ASan builds so we know it's possible. Better to catch these on check-in than to let Abhishek or someone else find it in their fuzzer the next day and earn an easy $3000 per bug off us.

Keeping a month of builds for each branch as we do for other tinderbox builds should be fine. We won't have all platforms, and if space is a problem having both -pgo and non is pretty much wasted space so maybe we could trim there to make room.

> 2) these ASAN (Address Sanitizer) builds are already available on try, but
> per discussion w/abillings, some people are still building these ASAN builds
> by hand anyway. Unknown why.

Because the try builds go away after two weeks, and several of the people using these builds don't have try-push permission (some QA, several community members).

> 3) the only reason to have these ASAN builds is to do these additional
> ASAN-type-fuzzing. There is no need to run unittest/talos on these ASAN
> builds.

There's no need to run talos (just as I hope we don't run debug builds on talos perf tests), but running the full regression suite will catch introduced bugs.
I wanted to chime in here for QA. Regressing and verifying ASan bugs is a rather large investment of time for QA right now. In theory, we should be able to test on either side of a fixed changeset, but this doesn't always work out. My personal experience has been that reproducing the issue is most successful when using a build matching the reported changeset. Decoder's builds are handy from time to time, but since they only exist for the last 30 days of Nightly and Aurora builds, that leaves some considerable gaps.

1. If the regression range is older than a month, we need to generate our own builds using try-server.
2. Building against try-server requires access, which generally narrows the scope of people capable of testing.
3. Building against try-server also adds a couple of hours to each verification.
4. Even with try-server, we cannot generate builds against Beta, Release, or ESR. This means we can't truly verify a fix against the builds we are releasing soonest.

The amount of time and effort it takes for us to verify ASan fixes right now means we have to deprioritize verification of many of these bugs. In turn, this means releasing a product with untested fixes. I'm okay making this trade-off because I have a lot of trust in the ability of our development team. However, it is an unnecessary trade-off to be making if we just had builds.

I'm CCing Matt Wobensmith on this bug, who recently joined the QA team in a Security/Privacy role. Matt can elaborate further with his own concerns.
Pretty much just what Anthony said.

To truly verify a bug was correctly fixed, we'd like access to both ASan builds - an affected build and a fixed one. That's currently either not feasible or not possible. 

In a perfect world, we'd have ASan builds for all supported configs dating back to the beginning of time (Moz time) but clearly we've been resource-constrained.

If people are happy with our current rate and method of bug verification, nothing needs to change. However, with a full range of regular ASan builds, archived perpetually, QA can do much, much better. 

Personally, I'm not comfortable with the amount of bugs we simply can't/don't verify, so any ideas here would be welcomed.

Comment 13

4 years ago
>  Regressing and verifying ASan bugs is a rather large investment of time for QA 
> right now.

Our understanding of the way other browser development teams are doing this involves not using humans to do the regression testing. The regression testing is done by saving the test case when the bug first appears; verification is considered complete once that test case stops crashing or asserting.
Assuming every ASan bug comes with a testcase and that testcase will eventually be checked in to mozilla-central, should QA be spending *any* time on these bugs?
(Reporter)

Comment 15

4 years ago
(In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #14)
> Assuming every ASan bug comes with a testcase and that testcase will
> eventually be checked in to mozilla-central, should QA be spending *any*
> time on these bugs?

The (general) problem with our approach of testing is that we add a test after we fixed the issue. There is no guarantee that in our test system, the test will actually trigger the failure if it remains unfixed.

I already proposed in another meeting to do this fully automatically. If we have the regression range, the necessary builds (which this bug is about), and the test available in the bug, verification can be fully automatic (just like Google is doing it). I am already doing this for all of our JS bugs that have crashtests in them; extending it to the browser is just one level further, no black magic. It requires work from all sides though, technical as well as organizational, to make it easier for automation to work with these bugs.

My overall opinion is that for bugs that have automated tests, it should not be necessary to waste human resources to verify them.

We should take further discussion of this to email though, except if we're really talking about the builds :)
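The automatic verification decoder describes could be sketched roughly as follows (purely illustrative pseudologic, not an existing tool; `pre_fix_build`, `post_fix_build`, and `testcase` are hypothetical paths standing in for the archived builds and the testcase attached to a bug):

```python
# Hypothetical sketch of automated fix verification for ASan bugs:
# run the attached testcase on a build from before the fix (expect a
# crash) and on a build from after the fix (expect no crash).
import subprocess

def crashes(build_path, testcase):
    """Run the testcase in the given build; True if it exited abnormally."""
    result = subprocess.run([build_path, testcase], capture_output=True)
    # An ASan-detected error aborts the process with a nonzero status.
    return result.returncode != 0

def verify_fix(pre_fix_build, post_fix_build, testcase):
    # The bug is verified fixed only if the testcase reproduces the
    # crash before the fix and no longer does afterwards.
    return crashes(pre_fix_build, testcase) and not crashes(post_fix_build, testcase)
```

This is the step that requires nightly (or per-checkin) archived ASan builds: without a build matching the regression range, there is nothing to run the "before" half against.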
(In reply to chris hofmann from comment #13)
> >  Regressing and verifying ASan bugs is a rather large investment of time for QA 
> > right now.
> 
> Our understanding of the way other brower development teams are doing this
> involves not using humans to do the regression testing.  The regression
> testing is done by saving the test case when the bug first appears, then
> when that test case stops crashing or asserting the verification is
> considered complete.

That would require automated tests, which we don't get from reporters normally, except maybe internal ones.
Summary: Getting builds with AddressSanitizer support → Regular RelEng AddressSanitizer builds (at least nightly, if not per-build like other branches)
This bug is a platform security priority.
I think we want two things.

1. Build and keep around Asan nightly FF builds for quicker triage/bisecting.
2. Build hourly-ish (or based on checkins) to catch issues early.

Should those be separate bugs?
Assigning to joduinn to resolve any prioritisation issues.
Assignee: nobody → joduinn
Just a drive by link drop on Asan building:
https://developer.mozilla.org/en-US/docs/Building_Firefox_with_Address_Sanitizer
Summary of several email threads, and some in-person chat w/dbolter just now.


0) Plenty of advocacy comments here in this bug. Now it's a prioritization question.

1) This is a request for builds only, no tests. At this time, there are no requests for tests, because they don't work yet. When tests do work, a new separate followup bug will be filed.

2) Builds would be nightly on mozilla-central. Both opt and debug. If we can do more frequently, great. These builds would be retained for the same duration as our usual nightly builds, which is forever. Unclear if we can do release-ASAN builds; that needs investigation, and will spin out as a separate bug if needed. Release-ASAN builds would be kept forever, alongside normal release builds.

2) It's unclear how long these builds take to run, or how much space they consume.

3) :decoder: Has anyone actually tried an ASAN build on a RelEng-specific machine, with the RelEng-specific toolchain? Before we can start, we need specific instructions on how to do an ASAN build using the toolchain pre-installed on RelEng machines. Have either of you ever done these ASAN builds on a loaner production RelEng machine? If not, could one of you please file a dep bug asking for a loaner linux64 machine to do this investigation. We would also like to know the duration of builds on these exact machines. If, as part of these ASAN builds, you need to install anything on the loaner machine, please let us know the specific details so we can roll out changes to production.

4) Are there any security concerns about these generated builds, or can they simply be placed on ftp, alongside the other builds for the same changeset?

5) ASAN-ified fuzzing can be done on the same pool as our existing fuzzing idle-jobs. Separate bug to be filed after we get builds running.

Did I miss anything?
(Reporter)

Comment 22

4 years ago
(In reply to John O'Duinn [:joduinn] from comment #21)

> 
> 1) This is a request for build only, no tests. At this time, there is no
> requests for tests, because they dont work yet. When tests do work, a new
> separate followup bug will be filed.

No, the tests work fine and we need them as well. I already clarified that in comment 6.

> 
> 2) Builds would be nightly on mozilla-central. Both opt and debug. If we can
> do more frequently, great. These builds would be retained for the same
> duration as our usual nightly builds, which is forever. Unclear if we can do
> release-ASAN builds, needs investigation, and will spin out as separate bug
> if needed. Release-ASAN builds, would be kept for ever, along side normal
> release builds.
> 
> 2) Its unclear how long these builds take to run, or how much space they
> consume.

Neither of these is actually unknown, since we already do try builds on a regular basis. I can get you all the necessary information in the blink of an eye. Size is pretty easy: each of these takes up ~320 MB for the main tarball. For speed, we'd have to evaluate several try pushes, I guess.

> 
> 3) :decoder: Has anyone actually tried an ASAN build on a RelEng specific
> machine, with RelEng specific toolchain?  Before we can start, we need
> specific instructions on how to do an ASAN build using toolchain
> specifically pre-installed on RelEng machines. Have either of you ever done
> these ASAN builds on a loaner production RelEng machine? If not, could one
> of you please file a dep bug asking for loaner linux64 bit machine to do
> this investigation. We would also like to know duration of builds on these
> exact machines. If, as part of these ASAN builds, you need to install
> anything on the loaner machine, please let us know specific details so we
> can rollout changes to production. 

As stated earlier (also in this bug), we already do our builds using Try servers. These use the RelEng machines that you are talking about, is that right?


> 
> 4) Are there any security concerns about these generated builds, or can they
> simply be placed on ftp, alongside the other builds for the same changeset?

There are no immediate concerns, but they should just not be used for production (especially since they won't update themselves, obviously).


> 5) ASAN-ified fuzzing can be done on the same pool as our existing fuzzing
> idle-jobs. Separate bug to be filed after we get builds running.

Jesse will likely be able to answer this :)
> Did I miss anything?

Having ASAN-ified js shells placed in the same directory as the ASAN-ified nightly builds will be useful for jsfunfuzz. (For starters, the same schedule is fine.)

Regular js shells are already being built with the nightly builds and placed in the same ftp directory.

Updated

4 years ago
Flags: needinfo?

(In reply to Christian Holler (:decoder) from comment #22)
> (In reply to John O'Duinn [:joduinn] from comment #21)

>I can get you all the necessary information in the blink of an eye.

Sounds good. I think John needs all the info possible so he can estimate resources before squeezing this in among other top priorities. Anything you can supply to make it easier/clearer is great.
Created attachment 702472 [details] [diff] [review]
dep and nightly asan builds for m-c and try
Attachment #702472 - Flags: review?(bhearsum)
Comment on attachment 702472 [details] [diff] [review]
dep and nightly asan builds for m-c and try

Good candidates for try-nondefault? (bug 691177)
Attachment #702472 - Flags: review?(bhearsum) → review+

Updated

4 years ago
Blocks: 831294
(In reply to Ed Morley [:edmorley UTC+0] from comment #27)
> Comment on attachment 702472 [details] [diff] [review]
> dep and nightly asan builds for m-c and try
> 
> Good candidates for try-nondefault? (bug 691177)

yes, for sure. how does that work?
Comment on attachment 702472 [details] [diff] [review]
dep and nightly asan builds for m-c and try

this will get deployed in the next reconfig
Attachment #702472 - Flags: checked-in+
(In reply to Chris AtLee [:catlee] from comment #28)
> (In reply to Ed Morley [:edmorley UTC+0] from comment #27)
> > Good candidates for try-nondefault? (bug 691177)
> 
> yes, for sure. how does that work?

Steve, can you explain, so I don't end up saying the wrong thing? :-)
Flags: needinfo?(sphink)
(In reply to Ed Morley [:edmorley UTC+0] from comment #30)
> (In reply to Chris AtLee [:catlee] from comment #28)
> > (In reply to Ed Morley [:edmorley UTC+0] from comment #27)
> > > Good candidates for try-nondefault? (bug 691177)
> > 
> > yes, for sure. how does that work?
> 
> Steve, can you explain, so I don't end up saying the wrong thing? :-)

If you add 'try-by-default': False to the *-asan platform configuration sections, then that platform won't be included with something like |try: -p all|. To get those builds, you'd need to explicitly request |try: -p linux64-asan| or |try: -p all,linux64-asan|.

Oh, darn. That's what Callek meant about using underscores instead of dashes. He's right; I should have made the option try_by_default. Don't be surprised if it changes.

Individual tests can also be targeted for 'try-by-default': False, but it doesn't sound like you need that granularity here.
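The mechanics sfink describes could be sketched like this (a simplified, hypothetical stand-in for the real buildbot-configs logic; the actual platform dictionaries live in build/buildbot-configs and look different):

```python
# Hypothetical sketch of how a 'try-by-default' flag gates which
# platforms a "try: -p all" push expands to.
PLATFORMS = {
    "linux64": {},
    # Must be requested explicitly; excluded from "-p all":
    "linux64-asan": {"try-by-default": False},
}

def expand_platforms(requested):
    """Expand a 'try: -p ...' platform list against PLATFORMS."""
    result = []
    for name in requested:
        if name == "all":
            # 'all' only picks up platforms that are on by default.
            result.extend(p for p, cfg in PLATFORMS.items()
                          if cfg.get("try-by-default", True))
        elif name in PLATFORMS:
            result.append(name)
    return result
```

Under this sketch, `expand_platforms(["all"])` yields only `linux64`, while `expand_platforms(["all", "linux64-asan"])` also includes the ASan platform, matching the |try: -p all,linux64-asan| syntax described above.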
Flags: needinfo?(sphink)
disabled on try for now until we can make sure it's off by default
http://hg.mozilla.org/build/buildbot-configs/rev/7b6345c14c69
Blocks: 831483
Created attachment 703020 [details] [diff] [review]
Do not generate ASan builds by default on try

In concrete terms, here's the patch.
Attachment #703020 - Flags: review?(catlee)
Assignee: joduinn → sphink
Blocks: 831491
Blocks: 831500
summary of meetings w/:decoder, dbolter and catlee:

There are actually several different asks here, with different timelines, so I'm separating them out to reduce confusion:

1) generate ASAN builds on mozilla-central (and maybe try)
1a) generate ASAN builds per-checkin and also nightly. 
1b) These would be both opt and debug builds. 
1c) There is no need for updates for these ASAN nightly builds. 
1d) These ASAN builds would be kept for the same duration as other builds on mozilla-central (nightly: forever!, per-checkin: 30 days).
1e) Because of the massive size of each of these ASAN builds (>300MB!), we'll watch how ftp.m.o handles the load from mozilla-central before we enable them on try, or roll out to other project branches. (Typically mozilla-central is ~3% of overall RelEng load, while try is typically ~50%.) It's possible that with builds on mozilla-central, :decoder would no longer need to push to try, so there'd be no need for ASAN builds on try. If we *do* decide to enable ASAN builds on try, ASAN would be not-by-default.

We're rolling this portion out quickly, reconfigs in progress as I type. Expect to see these ASAN builds live in production today.

2) generating ASAN builds on release branch
Whether we should also generate ASAN builds on aurora and beta is unclear. :decoder would not be using aurora/beta/release builds, but he thinks that QA would. Either way, discussion of what is wanted, and any related work to make it happen, is being moved to bug 831483.


3) run tests on ASAN builds
Per decoder, we need to run tests on ASAN builds. We need to run different test suites on debug/opt and per-checkin/nightly builds. Spinning out separate bug 831491 to track.


4) run idle-time fuzzer on ASAN builds
Track in bug#831500.
Summary: Regular RelEng AddressSanitizer builds (at least nightly, if not per-build like other branches) → Generate nightly and per-checkin AddressSanitizer [ASAN] builds on mozilla-central (and try?)

Comment 35

4 years ago
In production.
(In reply to John O'Duinn [:joduinn] from comment #34)
> Its possible that with builds on mozilla-central, :decoder would no longer
> need to push to try, so no need for ASAN builds on try. If we *do* decide to
> enable ASAN builds on try, ASAN would be not-by-default.

decoder might not need to push to try, but a Windows developer who is trying to fix an ASan bug would very much want to be able to push to try.
(In reply to Steve Fink [:sfink] from comment #36)
> (In reply to John O'Duinn [:joduinn] from comment #34)
> > Its possible that with builds on mozilla-central, :decoder would no longer
> > need to push to try, so no need for ASAN builds on try. If we *do* decide to
> > enable ASAN builds on try, ASAN would be not-by-default.
> 
> decoder might not need to push to try, but a Windows developer who is trying
> to fix an ASan bug would very much want to be able to push to try.

Agreed. This would have been very useful when I had a number of different ASAN reports to investigate a couple months ago.
Depends on: 831712
ASan builds are currently orange, nightlies are red.
I've just hidden both on mozilla-central until green.

To see them:
https://tbpl.mozilla.org/?noignore=1&jobname=asan

Build test failures:
{
TEST-UNEXPECTED-FAIL | jit_test.py --no-jm        | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js:4:4 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --ion-eager    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js:4:4 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm --no-ti| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js:4:4 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js:4:4 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion       | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js:4:4 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion -d    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/basic/test-apply-many-args.js:4:4 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-jm        | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm --no-ti| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-ti| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-ti -a -d| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --ion-eager    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion       | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion -a    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion -d    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion -a -d | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/Environment-identity-03.js:93:0 InternalError: too much recursion
TEST-UNEXPECTED-FAIL | jit_test.py --no-jm        | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js:15:0 TypeError: frame.eval(...) is null
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm --no-ti| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js:15:0 TypeError: frame.eval(...) is null
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-ti| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js:15:0 TypeError: frame.eval(...) is null
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js:15:0 TypeError: frame.eval(...) is null
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion       | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js:15:0 TypeError: frame.eval(...) is null
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion -d    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js: /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/debug/onEnterFrame-03.js:15:0 TypeError: frame.eval(...) is null
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion -d    | /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/jaeger/recompile/bug661859.js
TEST-UNEXPECTED-FAIL | jit_test.py --no-ion --no-jm| /builds/slave/m-cen-lnx64-dbg-asan/build/js/src/jit-test/tests/v8-v5/check-crypto.js
}

Nightly:
{
DEBUG: doshell: chrootPath:/builds/mock_mozilla/mozilla-centos6-x86_64/root/, uid:500, gid:496
DEBUG: doshell environment: {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'TZ': 'EST5EDT', 'HOSTNAME': 'mock', 'HOME': '/builds', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'TMPDIR': '/tmp'}
DEBUG: doshell: command: /usr/bin/env HG_SHARE_BASE_DIR="/builds/hg-shared" LC_ALL="C" CCACHE_COMPRESS="1" IS_NIGHTLY="yes" SYMBOL_SERVER_HOST="symbols1.dmz.phx1.mozilla.com" MOZ_SYMBOLS_EXTRA_BUILDID="linux64-dbg-asan" POST_SYMBOL_UPLOAD_CMD="/usr/local/bin/post-symbol-upload.py" CCACHE_DIR="/builds/ccache" SYMBOL_SERVER_SSH_KEY="/home/mock_mozilla/.ssh/ffxbld_dsa" PATH="/tools/buildbot/bin:/usr/local/bin:/usr/lib64/ccache:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/tools/git/bin:/tools/python27/bin:/tools/python27-mercurial/bin:/home/cltbld/bin" MOZ_UPDATE_CHANNEL="nightly" CCACHE_BASEDIR="/builds/slave/m-cen-lnx64-dbg-asan-ntly" TINDERBOX_OUTPUT="1" SYMBOL_SERVER_PATH="/mnt/netapp/breakpad/symbols_ffx/" MOZ_OBJDIR="obj-firefox" MOZ_CRASHREPORTER_NO_REPORT="1" SYMBOL_SERVER_USER="ffxbld" DISPLAY=":2" CCACHE_UMASK="002" make -C obj-firefox/tools/update-packaging
DEBUG: child environment: {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'TZ': 'EST5EDT', 'HOSTNAME': 'mock', 'HOME': '/builds', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'TMPDIR': '/tmp'}
make: *** obj-firefox/tools/update-packaging: No such file or directory.  Stop.
}
Depends on: 831721
Depends on: 799494
Created attachment 703284 [details] [diff] [review]
Disable snippets to fix Nightlies

I think this should fix the issue in comment 38 (updates aren't wanted for ASan nightlies).
Attachment #703284 - Flags: review?(catlee)
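
The patch itself isn't reproduced in this bug, but the idea is to mark the ASan nightly platform so no update snippets are generated. A minimal sketch, assuming the usual dict-based buildbot-configs layout — the key names and the `ASAN_NIGHTLY` variable here are illustrative, not the actual patch:

```python
# Hypothetical buildbot-configs fragment (names are illustrative).
# ASan nightlies are for fuzzing and analysis only, so update snippets
# are unwanted; skipping them avoids the failing
# "make -C obj-firefox/tools/update-packaging" step from comment 38.
ASAN_NIGHTLY = {
    'platform': 'linux64-asan-debug',
    'nightly_build': True,
    'create_snippet': False,   # do not generate complete update snippets
    'create_partial': False,   # no partial MARs either
}
```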

Updated

4 years ago
Attachment #703284 - Flags: review?(catlee) → review+
Comment on attachment 703284 [details] [diff] [review]
Disable snippets to fix Nightlies

https://hg.mozilla.org/build/buildbot-configs/rev/ef04076299a5
Attachment #703284 - Flags: checked-in+
(In reply to Ed Morley [:edmorley UTC+0] from comment #39)
> Created attachment 703284 [details] [diff] [review]
> Disable snippets to fix Nightlies
> 
> I think this should fix the issue in comment 38 (updates aren't wanted for
> ASan nightlies).

I don't question that this is right, but I am hoping we have verified/tested it?
(In reply to David Bolter [:davidb] from comment #41)
> I don't question that this is right, but I am hoping we have verified/tested
> it?

I don't have any way to test buildbot changes locally :-(

That said, releng runs sanity checks prior to reconfigs (AIUI), which normally catch any issues.
(And I'll retrigger the ASan Nightlies once we get a reconfig for this)

Updated

4 years ago
Attachment #703020 - Flags: review?(catlee) → review+
Oops, I should make bzexport not auto-take a bug when there's already a patch there, or something. I didn't mean to assign this bug to myself.
Assignee: sphink → joduinn
Attachment #703020 - Flags: checked-in+
http://hg.mozilla.org/build/buildbot-configs/rev/c3f24e992e83
(In reply to Ed Morley [:edmorley UTC+0] from comment #43)
> (And I'll retrigger the ASan Nightlies once we get a reconfig for this)

The retriggered nightlies will appear on:
https://tbpl.mozilla.org/?noignore=1&rev=712eca11a04e&jobname=asan
(In reply to Ed Morley [:edmorley UTC+0] from comment #46)
> The retriggered nightlies will appear on:
> https://tbpl.mozilla.org/?noignore=1&rev=712eca11a04e&jobname=asan

Due to the change in capitalisation of the buildername, these retriggers (of an old-buildername job) didn't work - so I guess it's easiest to just wait for the new set in a couple of hours:
https://tbpl.mozilla.org/?noignore=1&jobname=asan
(Reporter)

Comment 48

4 years ago
(In reply to Ed Morley [:edmorley UTC+0] from comment #47)

> Due to the change in capitalisation of buildername, these retriggers (of an
> old buildername job) didn't work - so guess just easiest to wait for the new
> set in a couple of hours:

Thanks Ed! A fix for the orange is also already on the way, terrence is working on it.

Comment 49

4 years ago
This is in production.

Updated

4 years ago
Blocks: 831712
No longer depends on: 831712
Depends on: 834761
(Reporter)

Comment 50

4 years ago
I think we can call this one fixed since the builds are running now for central. Further changes/additional builds should go to separate bugs. Thanks to all who helped with this! :)
Status: NEW → RESOLVED
Last Resolved: 4 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering