Closed Bug 585365 Opened 14 years ago Closed 14 years ago

Some of the L10n win repack build slaves are failing

Categories

(Release Engineering :: General, defect, P2)

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: wladow, Assigned: armenzg)

References

Details

Attachments

(2 files, 1 obsolete file)

Some of the L10n win repack build slaves are failing with:

...
./allmakefiles.sh: line 114: ./toolkit/toolkit-makefiles.sh: No such file or directory
program finished with exit code 1

See http://tinderbox.mozilla.org/showlog.cgi?log=Mozilla-l10n/1281187619.1281187752.22270.gz and many more reports at http://tinderbox.mozilla.org/Mozilla-l10n/

Reading through today's repack runs, these slaves seem to be affected:
win32-slave12
win32-slave13
win32-slave19
Assignee: server-ops → nobody
Component: Server Operations → Release Engineering
QA Contact: mrz → release
I've set these machines to clobber. coop/armen, could we try to figure out why this is happening? Or figure out a way to clobber on the first nightly build a slave does? Maybe just '-t 18' instead of '-t 168' for the periodic clobber timeout.
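For illustration, a minimal sketch of the periodic-clobber timeout check being discussed, assuming a tool that records the last clobber time in a marker file inside the build directory (the file name and function are hypothetical, not the real clobberer code):

```python
# Minimal sketch of a periodic-clobber timeout check; names are hypothetical.
import os
import time

CLOBBER_STAMP = "last-clobber"  # hypothetical marker file in the build directory

def needs_periodic_clobber(builddir, timeout_hours):
    """Return True if builddir has not been clobbered within timeout_hours."""
    stamp = os.path.join(builddir, CLOBBER_STAMP)
    if not os.path.exists(stamp):
        # No marker at all: treat it like a slave's first build and clobber.
        return True
    age_hours = (time.time() - os.path.getmtime(stamp)) / 3600.0
    return age_hours > timeout_hours

# '-t 168' would clobber at most once a week; '-t 18' would clobber a stale
# build directory after 18 hours, catching a slave's first nightly much sooner.
```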
I will check what is going on in the morning.
Assignee: nobody → armenzg
Priority: -- → P2
This seems to be happening with the slaves you mention. I have put them aside until I look further into it. I can't reach moz2-win32-slave13 at the moment, but let's leave it in the pool and see what happens.
OK, so we are having trouble with moz2-win32-slave[12,13,16,19].

I'm not sure if clobberer works properly with L10n nightly builds (I don't know the system myself). It might not work because the slave returns None as the last clobbered date:
mozilla-central-win32-nightly:Our last clobber date: None
mozilla-central-win32-nightly:Server clobber date: 2010-08-09 12:54:58

We also ran out of space on one of the slaves while checking out, even though it had 2.06 GB available. I will post a couple of patches to address that.

I will manually clobber the mentioned slaves.

NOTE: This is the issue we are hitting:
./allmakefiles.sh: line 114: ./toolkit/toolkit-makefiles.sh: No such file or directory
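For reference, one way the date comparison quoted above could work, shown as a minimal sketch (names are hypothetical; whether the real clobberer handles a missing local date this way is exactly the open question in this comment):

```python
# Minimal sketch of a clobberer-style date comparison; not the real tool.
from datetime import datetime

DATE_FMT = "%Y-%m-%d %H:%M:%S"

def should_clobber(our_last_clobber, server_clobber):
    """Clobber when there is no local record or the server date is newer."""
    if our_last_clobber is None:
        # "Our last clobber date: None" -- no local record; assume a clobber is due.
        return True
    return (datetime.strptime(server_clobber, DATE_FMT)
            > datetime.strptime(our_last_clobber, DATE_FMT))

print(should_clobber(None, "2010-08-09 12:54:58"))                   # True
print(should_clobber("2010-08-09 12:54:58", "2010-08-11 12:37:10"))  # True
```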
Attachment #464411 - Flags: review?(ccooper)
Attachment #464416 - Flags: review?(ccooper)
This patch is the correct one.
Attachment #464416 - Attachment is obsolete: true
Attachment #464417 - Flags: review?(ccooper)
Attachment #464416 - Flags: review?(ccooper)
(In reply to comment #6)
> Created attachment 464411 [details] [diff] [review]
> Default l10n space variable

Note that I am increasing it from 2 GB to 3 GB.
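For context, a minimal sketch of the kind of free-space check such a space variable typically feeds into (the function name and the way the threshold is applied are hypothetical, not the actual patch):

```python
# Minimal sketch of a pre-checkout free-space check; names are hypothetical.
import shutil

def enough_space(path, required_gb):
    """Return True if 'path' has at least required_gb gigabytes free."""
    free_gb = shutil.disk_usage(path).free / (1024.0 ** 3)
    return free_gb >= required_gb

# A slave with only 2.06 GB free passes a 2 GB requirement but fails a 3 GB
# one, so older build directories would be purged before the checkout starts.
print(enough_space(".", 3))
```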
(In reply to comment #5)
> Not sure if clobberer works properly with L10n nightly builds (I don't know
> the system myself). It might not work because the slave returns None as the
> last clobbered date:
> mozilla-central-win32-nightly:Our last clobber date: None
> mozilla-central-win32-nightly:Server clobber date: 2010-08-09 12:54:58

This is invalid as it has nothing to do with L10n. Please ignore.
I have manually clobbered the mentioned machines and hopefully have not missed any. The slaves are back in the pool. I will check in the morning.
Status: NEW → ASSIGNED
Looking at today's tinderbox waterfall, slave05 might need a clobber as well.
Right. Done.
Attachment #464411 - Flags: review?(ccooper) → review?(nrthomas)
Attachment #464417 - Flags: review?(ccooper) → review?(nrthomas)
Changed the reviewer since coop is away until next week. There were only 2 repacks that failed today, and they seem to be valid failures (ca and rm). I am still not sure why clobberer did not work the first time. Let me give it a try.
Well, it seems that clobberer does work. So let's hope that requiring 3 GB of free space instead of 2 GB will prevent us from getting into this state again.
mozilla-central-win32-l10n-nightly:Our last clobber date: 2010-08-09 12:54:58
mozilla-central-win32-l10n-nightly:Server clobber date: 2010-08-11 12:37:10
mozilla-central-win32-l10n-nightly:Server is forcing a clobber, initiated by armenzg@mozilla.com
mozilla-central-win32-l10n-nightly:Clobbering...
Armen, I don't see anything here that indicates why increasing the allocated space to 3 GB will help. Could you fill me in? We've had concerns in the past about the bash code we use for pulling from mercurial not being robust enough; I figured that was the issue here.
Comment on attachment 464411 [details] [diff] [review]
Default l10n space variable

Ah, you did find one instance of running out of space.
Attachment #464411 - Flags: review?(nrthomas) → review+
Attachment #464417 - Flags: review?(nrthomas) → review+
Yeah, in comment #5. We should separately fix bug 554438.
Blocks: 586793
Attachment #464417 - Flags: checked-in+
This has also landed today, which should help avoid other L10n-broken scenarios: http://hg.mozilla.org/build/buildbotcustom/rev/91f1cc47614b
Status: ASSIGNED → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering