Closed Bug 577431 Opened 14 years ago Closed 14 years ago

10.6 boxes hanging in 'make buildsymbols' for opt and debug builds

Categories

(Release Engineering :: General, defect, P2)

x86
macOS
defect

Tracking

(blocking2.0 beta4+)

RESOLVED FIXED
Tracking Status
blocking2.0 --- beta4+

People

(Reporter: nthomas, Assigned: nthomas)

References

Details

Attachments

(11 files, 1 obsolete file)

37.86 KB, application/octet-stream
Details
325.11 KB, image/jpeg
Details
470.62 KB, image/jpeg
Details
383.38 KB, image/jpeg
Details
1.67 MB, image/jpeg
Details
1.37 MB, image/jpeg
Details
532 bytes, text/plain
Details
2.51 KB, text/plain
Details
21.98 KB, patch
nthomas
: review+
nthomas
: checked-in-
Details | Diff | Splinter Review
1.01 KB, patch
jaas
: review+
nthomas
: checked-in+
Details | Diff | Splinter Review
1.22 KB, patch
catlee
: review+
nthomas
: checked-in-
Details | Diff | Splinter Review
Appears to correspond to the first revision on which opt and debug builds could complete a compile after the changes in bug 567424 (breakpad update) and bug 576053 (enabling Breakpad on Mac OS X/x86-64) landed. 1G of RAM may not be enough to extract symbols from XUL.
12 boxes down for the count so far, and this also affects try now that people are pushing recent m-c there.
I watched this happening for a build on 304db4815de4. We get to 
 Processing file: ./dist/bin/XUL.dSYM
It doesn't write that to the log, but I checked the process list. It grabs 350M of 'Real Mem' according to Activity Monitor, swap usage increases by 200MB, and then the box stops responding.

I disabled symbols on mac64 again
  http://hg.mozilla.org/mozilla-central/rev/e83325810760
because we don't have anyone in MV to recover these boxes. There's a large enough pool that we can live with it until next week.
Fun. It worked fine on my macbook pro.
Could you grab a memory usage profile so we can see how far off we are with 1G ?
I'll see if I can do that. Doesn't seem unreasonable that it ought to be able to run in 1GB.
Instruments tells me that the peak heap usage is ~450MB. If there was nothing else running on the machine that might be ok, but that's a pretty big chunk if you've only got 1GB.
Actually I'm not sure if that's exactly right (first time using Instruments). I think 450MB is the sum total of all allocations from the process.
Ok, I wasn't running the exact same commands that "make buildsymbols" does. If I follow that more closely, I get:
1) running dsymutil on libxul takes ~1 minute, chews through CPU like crazy, and hits a peak heap usage of almost 500MB.
2) running dump_syms on the resulting .dSYM file takes a while and has a peak heap usage of about 540MB. 

That's a lot of memory churn if you only have 1GB.
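
A rough way to reproduce those numbers (a sketch only; the dump_syms location and objdir layout are assumptions from a typical local build, not something stated in this bug) is to wrap each step in /usr/bin/time -l, which prints the peak resident set size on OS X:

 cd obj-firefox/dist/bin
 /usr/bin/time -l dsymutil XUL                              # step 1: writes XUL.dSYM, prints "maximum resident set size"
 /usr/bin/time -l ../host/bin/dump_syms XUL.dSYM > XUL.sym  # step 2: peak RSS of the symbol dump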
(In reply to comment #8)
> Ok, I wasn't running the exact same commands that "make buildsymbols" does. If
> I follow that more closely, I get:
> 1) running dsymutil on libxul takes ~1 minute, chews through CPU like crazy,
> and hits a peak heap usage of almost 500MB.
> 2) running dump_syms on the resulting .dSYM file takes a while and has a peak
> heap usage of about 540MB. 
> 
> That's a lot of memory churn if you only have 1GB.

Lot of churn, which would make things slower, but this should still work?

(We can get you access to a staging machine if it would help debug this)
It's not clear what the failure mode here is. If we're just stuck thrashing virtual memory, it might make the system unresponsive long enough that we lose the buildbot connection and can't complete the build.
Ted was hoping these machines would finish eventually but AFAICT they didn't recover until IT rebooted them.

moz2-darwin10-slave01/02/40-49 are in MV. I'll set up 40 with a patched build and ask someone to hook up a monitor to watch for the failure mode.
I'll take this for now.
Assignee: nobody → nrthomas
Status: NEW → ASSIGNED
Priority: -- → P2
Ok, moz2-darwin10-slave40 is set up with a MOZ_CRASHREPORTER=1 build (using rev 304db4815de4, the one before I disabled symbols on mac64 again), and is ready for someone in MV to attach a monitor and see what happens.

We're running 
 cd /builds/slave/mozilla-central/macosx64/build/obj-firefox
 export MOZ_OBJDIR=obj-firefox
 make buildsymbols 2>&1 | tee make_buildsymbols.log

I've left the Activity Monitor open so you can track RAM and swap usage. Via ssh/vnc the box stops responding when it gets to XUL (see comment #2), and the aim here is to figure out if the network is dying or the whole box, and to gather any additional information we can.
Assignee: nrthomas → lsblakk
Attached file make buildsymbols log
Ran the make buildsymbols on moz2-darwin10-slave40 as per above comment.  Here's the log.  What I observed while running it:

When processing the symbol files the symbol utility uses up nearly all of the memory, leaving as little as 7MB free at some points.  It does indeed take a very long time (and lots of RAM) at the XUL symbol but when running it on the actual machine it does eventually come out of the apparent 'hang' and continue on to zipping up the symbols.  bsdtar and zip each take their turn using up a ton of CPU 90-120% at peaks, then it's complete.  The Network graph in Activity Monitor shows a spike in the network but then calms down at the end and the machine is apparently still connected to the network.

Screenshots forthcoming.
(In reply to comment #14)
> When processing the symbol files the symbol utility uses up nearly all of the
> memory, leaving as little as 7MB free at some points.  It does indeed take a
> very long time (and lots of RAM) at the XUL symbol but when running it on the
> actual machine it does eventually come out of the apparent 'hang' and continue
> on to zipping up the symbols.  

Hmm, how long is 'a very long time' ?

> bsdtar and zip each take their turn using up a
> ton of CPU 90-120% at peaks, then it's complete.  The Network graph in Activity
> Monitor shows a spike in the network but then calms down at the end and the
> machine is apparently still connected to the network.

I don't see nagios complaining that the machine is dead, so somehow it mostly-worked when it was failing before.

How come the uptime on slave40 is only 3hrs 45 mins ? 'make buildsymbols' shouldn't have uploaded anything or caused a reboot.
(In reply to comment #18)
> (In reply to comment #14)
> >  It does indeed take a
> > very long time (and lots of RAM) at the XUL symbol 
> 
> Hmm, how long is 'a very long time' ?

Actually, it wasn't that long now that you ask; it was just paused on that symbol longer than any other. I would say no more than 10 minutes altogether.

> 
> > bsdtar and zip each take their turn using up a
> > ton of CPU 90-120% at peaks, then it's complete.  The Network graph in Activity
> > Monitor shows a spike in the network but then calms down at the end and the
> > machine is apparently still connected to the network.
> 
> I don't see nagios complaining that the machine is dead, so somehow it
> mostly-worked when it was failing before.
> 
> How come the uptime on slave40 is only a 3hrs 45 mins ? 'make buildsymbols'
> shouldn't have uploaded anything or caused a reboot.

When I connected the monitor I couldn't wake the computer up, so I had to reboot before running this test.
Ran twice more. The second attempt crashed, most likely because I was trying to take screenshots during the run, and I had to reboot again.

Third run I got time:
real   9m44.481s
user   4m5.314s
sys    0m21.329s

Also, screenshots attached for packets in/out from start to finish.
I wonder if we can maintain a TCP session at all while linking. Could we use nc on the mac and nc -l on another machine?
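
A minimal sketch of that check (hostname and port made up): run a listener on another machine and stream timestamps from the mac while the memory-heavy phase runs; a gap or dropped connection tells us whether the TCP session survives.

 # on another machine:
 nc -l 4242
 # on the mac, while the build / buildsymbols step runs:
 while true; do date; sleep 5; done | nc other-host 4242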
I can reproduce this (dump_syms growing >450MiB resident) on my Linux box. There's no legitimate reason for that process to be that big; the output file is only 81MiB.

valgrind won't run the program at the moment, but if I can get around that, I can check for leaks or stupidity.
Trivial change to dump_syms allows valgrind to run.  Will have leak detection results, data from valgrind's "Massif" heap profiler in a bit.
Valgrind says dump_syms isn't leaking. (Damned straight it don't leak.) Heap profiler next.
Passing back to nthomas to supervise this now that physical testing is done.
Assignee: lsblakk → nrthomas
I've posted a patch for Google's review upstream which reduces dump_syms' memory consumption by 20% on the libxul.so that I mentioned in comment 24. I don't know if that's enough to avoid this problem. Further savings might be more work, and I don't have the time right now.

http://breakpad.appspot.com/131001
Landed upstream as r626.
Next step, if we believe that the patch would fix the problem, is to re-import upstream Breakpad into our tree.
Thanks for working on this Jim. Can you sync breakpad to our tree in time for b2, or should we wait for Ted's return?
If we can wait for Ted, I think we should.

Ted got the fixes he needed for the last update applied upstream, but we still have other local changes, and I can't afford the time to follow up if things don't go smoothly. I was only able to do the two patches linked above because they were both straight shots.
(In reply to comment #30)
> Next step, if we believe that the patch would fix the problem, is to re-import
> upstream Breakpad into our tree.

I've filed bug#580375 to track updating breakpad


(In reply to comment #32)
> If we can wait for Ted, I think we should.
> 
> Ted got the fixes he needed for the last update applied upstream, but we still
> have other local changes, and I can't afford the time to follow up if things
> don't go smoothly. I was only able to do the two patches linked above because
> they were both straight shots.

Not sure how much time we have for 10.6 buildsymbols.
Over in bug 580375 josh reports trying Jim's fix on the try server (rev 5016010876be). Both the opt and debug builds failed in 'make buildsymbols', the machines going dark to ssh connections and nagios checks. So no improvement from when we first enabled symbols on mac64.

We have some 10.6 slaves with 2G of RAM (e.g. moz2-darwin10-slave02 in staging), so I've done a try build on the same revision as josh (under buildbot, just like a normal build). buildsymbols succeeded there, and the whole build ran through to completion. While processing XUL.dSYM we hit a max 'Real Mem' of 790MB, as reported by the Activity Monitor. I don't know why this is larger than what Ted and Jim report above, apples and oranges probably. There was still ~400M physical RAM marked Inactive, which OSX seems to like to hold on to rather than give to needy apps; didn't catch if it was swapping or not.

I'm completing a memory survey to see if we can shuffle slaves around.
Depends on: 580375
Short version
-------------

We have 1G on most of our minis, 13 out of a total of 183 have 2G instead. So reshuffling isn't going to give us enough coverage. :-(


Detailed survey (mostly for future reference):
----------------------------------------------

For the 10.5 minis we have
* moz2-darwin9-slaveNN  
 * 1G: 02-26, 38-72
 * 2G: 01, 29-37
 * there's no 28
* try-mac-slaveNN
 * 1G: 01-04,06-19, 21-47;
 * 2G: 20
 * there's no 05

For the 10.6 minis we have
* moz2-darwin10-slaveNN 
 * 1G: 01, 03-29, 40-49, 51-53; 
 * 2G: 02, 50
 * there's no 30-39
* try-mac64-slaveNN
 * 1G: 01-06, 09-26
 * 07,08 are down, but will be 1G too

For comparison we have 
* 4G for the win32 & linux ix hardware, and the xserves
* 2G for linux & win32 VMs
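
(For future reference, a survey like this can be scripted rather than collected by hand; a sketch, assuming ssh access to the slaves — hw.memsize reports bytes:)

 for h in moz2-darwin9-slave01 moz2-darwin10-slave01 try-mac64-slave01; do   # extend to the full slave list
   printf '%s: ' "$h"; ssh "$h" sysctl -n hw.memsize
 done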
So the obvious question is why 1G minis cope just fine with buildsymbols on dual-arch builds, but not with single-arch 64bit builds?
FWIW, on dual arch builds we run dsymutil/dump_syms twice, once per architecture. I'm not sure why dump_syms doesn't have a problem on x86 while it does on x86-64, though.
jimb and I chatted about this, and we think that the main problem is just that dump_syms is being built and run as a 64-bit binary on the 10.6 builds. The majority of the memory in use is pointers, so we effectively double our memory usage when running it as a 64-bit binary. I think we can work around this by telling configure that our host system/compiler is x86, which will make us produce a 32-bit dump_syms (which should dump 64-bit symbols just fine and use less memory).

This would mean adding to the 64-bit mozconfigs something like:
ac_add_options --host=i386-apple-darwin
export HOST_CC="gcc-4.2 -arch i386"
export HOST_CXX="g++-4.2 -arch i386"

(I haven't tested to see if this works yet.)
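
(A quick sanity check once such a build finishes — path assumed from the usual objdir layout — is to confirm the host tools really came out 32-bit:)

 file obj-firefox/dist/host/bin/dump_syms
 # want: Mach-O executable i386, not: Mach-O 64-bit executable x86_64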
Attached file mozconfig
I successfully built a 64-bit Firefox with this mozconfig on 10.6. "make buildsymbols" is erroring out on me, but only because it's trying to dump nsinstall and failing because there are no debug symbols (dsymutil just errors). If we hit that on tinderbox I can patch symbolstore.py pretty easily.

This build gave me a 32-bit dump_syms, which should hopefully work on the minis.
I'm running opt and debug builds in our staging using a mozconfig that incorporates the new goo in attachment 456685 [details]:
http://hg.mozilla.org/users/nthomas_mozilla.com/buildbot-configs/file/a8d379f35509/mozilla2-staging/macosx64/mozilla-central/debug/mozconfig
http://hg.mozilla.org/users/nthomas_mozilla.com/buildbot-configs/file/a8d379f35509/mozilla2-staging/macosx64/mozilla-central/nightly/mozconfig

I left off the -gfull in --enable-debug-symbols since we only have -gdwarf-2 right now, and the only reference to -gfull I could find was in mxr for PSM where MOZ_DEBUG_SYMBOLS was going to set that anyway.
Attached file Build failure
Clobbered to avoid configure in ctypes/libffi failing.
I hit the failure in 'make buildsymbols' at nsinstall too; worked around it by using the exists guard from http://hg.mozilla.org/mozilla-central/file/default/toolkit/crashreporter/tools/symbolstore.py#l708 before line 715.
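
(The logic of that guard, sketched as shell for illustration only — the real change is a couple of lines of Python in symbolstore.py, and nsinstall is just the file that trips it:)

 dsymutil nsinstall
 if [ -e nsinstall.dSYM ]; then
   dump_syms nsinstall.dSYM > nsinstall.sym
 else
   echo "no .dSYM produced (no debug symbols), skipping"
 fi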

With that, both opt and debug ran buildsymbols to completion. Got up to 790MB 'Real Mem' and up to 500M in swap usage, and took 8 minutes to do the whole operation, but made it through. Huzzah!

I'll work up a patch for our mozconfigs.
Updates every branch that's mozilla-central or derived, including mozilla-2.0. There are symlinks to cover electrolysis, jaegermonkey, shadow-central, tracemonkey. With this, and a Ted patch that re-enables crashreporter and fixes symbolstore.py, we should be good.
Attachment #459764 - Flags: review?(bhearsum)
With this patch I'm able to run "make buildsymbols" on my build successfully.
Attachment #459785 - Flags: review?(joshmoz)
No longer depends on: 580375
Once we land the mozconfig changes and my patch here, we can re-land the patch in bug 576053 and we should be in business.
Attachment #459785 - Flags: review?(joshmoz) → review+
Needed for a b3 blocker.
blocking2.0: --- → beta3+
Comment on attachment 459764 [details] [diff] [review]
[buildbot-configs] mozconfig changes

Spinning the wheel of reviewers, since bhearsum is stuck on release work.
Attachment #459764 - Flags: review?(bhearsum) → review?(catlee)
Attachment #459764 - Flags: review?(catlee) → review+
Comment on attachment 459785 [details] [diff] [review]
Skip files when dsymutil doesn't produce a .dSYM

http://hg.mozilla.org/mozilla-central/rev/ce8c5b97125c
Attachment #459785 - Flags: checked-in+
Burnt due to a lack of clobber, now set on m-c and m-2.0. Bit of a pain because we'll get bustage on each branch as they sync mozilla-central.
From bug 576053 comment #6, after enabling symbols again:
Unfortunately the slave for the opt build froze in buildsymbols, so I had to
disable it again: http://hg.mozilla.org/mozilla-central/rev/dc4f4587d548

The debug build did succeed, so there's some other factor here - free
memory/disk/something random. I'll grab a few slaves and put them in a loop
building with the patch from bug 580375 to see if it helps. Other suggestion
was to try upgrading to 10.6.4, which is known to make building mozilla more
reliable.
Comment on attachment 459764 [details] [diff] [review]
[buildbot-configs] mozconfig changes

Backed out:
 http://hg.mozilla.org/build/buildbot-configs/rev/f847295b6788

Might have caused bug 581888.
Attachment #459764 - Flags: review+ → review-
http://hg.mozilla.org/try/pushloghtml?fromchange=758170aa8c5a&tochange=2deea0a2675a
enables symbols with Breakpad r626 included. If the opt and debug builds succeed overnight I'll run several more builds on the same revision to test reliability across slaves.
(In reply to comment #53)
The opt build has hung a try slave, so r626 + 32bit dump_syms isn't enough either. Debug was green, and seems consistently so when opt is flaky; any idea why that is ?

We're back to more RAM or trying 10.6.4.
It goes without saying that we appreciate all the efforts going into this from the code side. 

For one of the hanging boxes (moz2-darwin10-slave20) nothing is obvious in the system logs after it died, just this on reboot:
Jul 26 14:49:36 localhost DirectoryService[11]: Improper shutdown detected
'last' knows the system crashed, and gets the time right. I can't find any CrashReporter log files though.

We did have this during the compile, presumably when linking gklayout, XUL etc:
Jul 25 19:33:08 moz2-darwin10-slave20 kernel[0]: (default pager): [KERNEL]: Switching ON Emergency paging segment
Jul 25 19:33:32 moz2-darwin10-slave20 kernel[0]: (default pager): [KERNEL]: Recovered emergency paging segment
That's from /var/log/kernel.log, but there's nothing to indicate the same happened during buildsymbols.

Is there some additional logging I can turn on to get more information ? Does using CrashReporterPrefs and setting it to Developer mode give me system crash info ? It seems to only cover user apps.
try-mac64-slave17, moz2-darwin10-slave45, moz2-darwin10-slave46 have been taken off their buildbot masters and updated to 10.6.4. These machines have all failed to compile 64bit w/ symbols at some time. I'm building the try code from comment #53 on them.
moz2-darwin10-slave46 died part way through the OS update so flaky hardware could be a factor here.
moz2-darwin10-slave46 update installed at the third attempt.

First build from try-mac64-slave17 (w/ 10.6.4)
* buildsymbols worked right after compiling - YAY
* running buildsymbols again crashed the box - BOO

More data to come.
Comment on attachment 459764 [details] [diff] [review]
[buildbot-configs] mozconfig changes

Oops, fixing these flags. r+ should have been left alone, and checkin- on the backout.
Attachment #459764 - Flags: review-
Attachment #459764 - Flags: review+
Attachment #459764 - Flags: checked-in-
Attachment #459764 - Flags: checked-in+
On moz2-darwin10-slave45 I compiled and then watched 'make buildsymbols' run several times. The first three runs were OK. On the fourth run it staggered through dsymutil, the machine hanging for a few tens of seconds while deep in swap (700-800MB) with RAM exhausted, but it did recover. I ran 'purge' to force OSX to empty the disk cache, since there's no need to duplicate memory usage by both caching and mapping XUL (perhaps this was futile). There was still ~350MB of swap in use, and the box became completely unresponsive on the fifth run. If it wasn't for try-mac64-slave17 failing the first time (comment #58) it might be OK to die on the 5th. Conclusion: 10.6.4 isn't the magic bullet.

Now that I think back, I have seen other failures running dsymutil rather than dump_syms, i.e. we get
  Processing file: ./dist/bin/xpt_link.dSYM
in the log but not the equivalent line for XUL, so we didn't get to Dumper.ProcessFile() yet. I tried modifying symbolstore.py to use
   arch -i386 dsymutil
to force it not to use the 64bit arch of that util. That failed on the 8th cycle on moz2-darwin10-slave46, after 3 cycles using 64bit dump_syms. So we can probably say
  Breakpad r626 + 32bit dump_syms + 10.6.4 + 32bit dsymutil 
is better. I want to see if we can remove the 10.6.4 part of that.
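
(The modification boils down to prefixing the dsymutil invocation; as a standalone command the idea looks like this — the path is illustrative:)

 arch -i386 dsymutil ./dist/bin/XUL    # run the i386 slice of dsymutil instead of the x86_64 one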
Sigh, 'Breakpad r626 + 32bit dump_syms + 32bit dsymutil' died in the 5th cycle on moz2-darwin10-slave47 (with build transferred from slave45). I wasn't watching VNC at the critical moment.
We have XCode 3.2.1 on the 10.6 boxes, which reports '@(#)PROGRAM:dsymutil  PROJECT:dwarfutils-70' for --version. I had XCode 3.2.2 and it has dwarfutils-72, and the new XCode 3.2.3 has dwarfutils-78. So it's possible that there might be helpful fixes in those, but googling doesn't turn up a changelog. Apple internal project ?  (CCing LegNeato)

I took try-mac64-slave17 (which crashed at the second buildsymbols with Breakpad r626 + 32bit dump_syms + 10.6.4 + 64bit dsymutil@3.2.1) and substituted the 3.2.3 dsymutil (running 64bit). It completed without hanging but with some problems running dump_syms for XUL, libssl3.dylib, libnss3.dylib:
  Couldn't read DWARF symbols in: ./dist/bin/XUL.dSYM

I did the same run with the 3.2.1 dsymutil so I could compare dist/bin/crashreporter-symbols, and found that the 3.2.3 XUL.sym file misses lots of FILE declarations and references to hg.m.o, and XUL.dSYM.tar.bz2 was missing.
Running the 3.2.3 dsymutil a second time hung the box, so it's no magic bullet either.

I don't know where to go from here. The point of running buildsymbols several times is to try to deal with the randomness of the hang. I could get more data (using more machines) to try to determine the usefulness of each option. How we use that data is an open question, e.g. how stable does it need to be in testing for prod to be OK? IMO updating OS + XCode + Kitchen Sink and hoping for the best isn't really a viable strategy, given we've still seen failures. Help!
Thanks for all the work investigating this Nick. It seems like the bottom line might be that 1 GB RAM is not enough for this task; is it still true that we haven't seen a failure on a 2 GB machine?

I'm worried about the schedule for our 64-bit builds, if 2 GB RAM is the only solution we know of then we should strongly consider making that painful move ASAP.
I'll grab the two 10.6 slaves with 2G and do a bunch of runs on them.
I've run 'make buildsymbols' ten times in a row on the 2G slaves (moz2-darwin10-slave02 and 50), using the 
  Breakpad r626 + 32bit dump_syms + 10.6.2 + 64bit dsymutil
configuration. They both handled that without any trouble at all, retaining free physical memory at all times and not touching swap. (I was using a tarball of the source and objdir; they both had approximately 1G of free memory at the start of each cycle.)
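
(For anyone repeating the experiment, a loop along these lines is enough — log names and the iteration count are arbitrary, and the objdir path is the one from comment #12:)

 cd /builds/slave/mozilla-central/macosx64/build/obj-firefox
 for i in 1 2 3 4 5 6 7 8 9 10; do
   make buildsymbols > buildsymbols-run$i.log 2>&1 || break
 done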

I want to try a 64bit dump_syms now, since that would mean we don't have to use the cross-compile fix for the host utilities, and therefore avoid bug 581888.
Ten iterations on 
  Breakpad r626 + 64bit dump_syms + 10.6.2 + 64bit dsymutil
also went fine. dump_syms peaks at about 820MB of 'Real Mem' in Activity Monitor, which means we're just about dipping into swap. Not having Breakpad r626 might be pushing it a bit close to the wind. Run time is 5 mins; it was about 7 and a half on a 1G box when it worked.

So, the question is how do we get 2G machines in enough quantity to keep up with mozilla-central, try, and so on ? Also, fast. Someone suggested packing up all the bits we need and sending them off to a 2G box to run buildsymbols, which could be OK as an interim fix. We really ought to get some better build slaves in the longer term though.
blocking2.0: beta3+ → beta4+
(In reply to comment #66)
> Someone suggested packing up
> all the bits we need and sending them off to a 2G box to run buildsymbols,
> which could be OK as an interim fix.

I don't think this is feasible. Since we're failing to run dsymutil, that means that we would ostensibly need to ship the entire objdir to another machine. Apple's GCC leaves the debug info in the object files, and dsymutil's whole purpose in life is to gather it out of the object files and stick it in one place, the .dSYM file. Thus, we'd need every single object file and binary shipped to another machine to run dsymutil + dump_syms.
Depends on: 583968
To summarize:
* for 64bit mac builds, enabling symbols and then calling 'make buildsymbols' reliably kills minis with only 1G of RAM
* we have tried several techniques to reduce the memory requirement, none of them were individually, or collectively, sufficient for 'make buildsymbols' to succeed reliably on 1G minis. They were
 * making dump_syms more memory efficient ('Breakpad r626')
 * running dump_syms as a 32bit application instead of 64bit
 * updating to OS X 10.6.4 for memory allocation fixes
 * running dsymutil as a 32bit application instead of 64bit
* minis with 2G will reliably build symbols with only the 'Breakpad r626' fix. It is a sufficient amount of memory now, but if we need further memory reductions in future we can run dsymutil and dump_syms as 32bit apps
* we have 13 minis with 2G of RAM and 170 minis with 1G, out of machines currently assigned to 32bit universal and 64bit builds
Depends on: 584289
My plan is
* use only 3G slaves for moz2 builds today, and enable symbols
* use this patch to disable symbols on try until we finish upgrading the machines there
Attachment #464952 - Flags: review?(catlee)
Attachment #464952 - Flags: review?(catlee) → review+
Attached patch Partial slave set for moz2 (obsolete) — Splinter Review
jabba says he's going to do moz2-darwin10-slave13 through 21, then five more. This will get reverted once all the moz2-darwin10 slaves are done.
Attachment #464956 - Flags: review?(catlee)
Comment on attachment 464956 [details] [diff] [review]
Partial slave set for moz2

catlee points out that it's easier to just disable buildbot on those slaves which haven't been upgraded yet.
Attachment #464956 - Attachment is obsolete: true
Attachment #464956 - Flags: review?(catlee)
Comment on attachment 464952 [details] [diff] [review]
Disable crash reporter on try

Only some of the try slaves are upgraded to 3GB - Bug 584820 comment #24

http://hg.mozilla.org/build/buildbot-configs/rev/a7d2d6b0d479
Attachment #464952 - Flags: checked-in+
I watched the first symbol-enabled m-c build on a slave with 3GB go by. dump_syms got up to 1.1G RAM usage, with still more than a gig of inactive memory for the OS to give it and no swap usage. That's without the breakpad r626 change (unless I didn't notice it landing in the meantime). So we're in good shape.
We never did land r626, since your testing showed it wasn't going to fix the problem. We'll pick it up in another Breakpad sync in the near future. Sounds good, thanks for all the work!
This looks fixed.
Status: ASSIGNED → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Comment on attachment 464952 [details] [diff] [review]
Disable crash reporter on try

Reverted the mozconfig change for try:
  http://hg.mozilla.org/build/buildbot-configs/rev/3fdb74fcaf2f

All but one slave has 3G now (and that one is disconnected until upgraded), and having builds with symbols is a precursor to making the changes to the test configs in bug 558947.
Attachment #464952 - Flags: checked-in+ → checked-in-
Product: mozilla.org → Release Engineering