This changeset failed every build due to timeouts while cloning: http://hg.mozilla.org/try/pushloghtml?changeset=a10cabf0429a
This changeset looks like it's about to do the same: http://hg.mozilla.org/try/pushloghtml?changeset=5ee809208a79
It's the same patch, I know, but there's no way it should be causing clone timeouts. I've also noticed lately that pushing to try is extremely slow.
I wonder if the try repo is just too big now and needs to be recloned from m-c again? For comparison, m-c has some 61k changesets in it, while try has about 93k.
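For context, the clone timeouts above are what the build slaves hit when a pull from try stalls. A minimal sketch of a retry-with-a-per-attempt-timeout wrapper (this is an illustration, not the actual buildbot tooling; the function name and limits are made up):

```shell
#!/usr/bin/env bash
# Retry a command up to $1 times, killing each attempt after $2 seconds.
# A slave might wrap a clone this way, e.g.:
#   retry_with_timeout 3 3600 hg clone http://hg.mozilla.org/try try
retry_with_timeout() {
  local attempts=$1 limit=$2
  shift 2
  local i
  for i in $(seq 1 "$attempts"); do
    if timeout "$limit" "$@"; then
      return 0          # attempt succeeded
    fi
    echo "attempt $i failed, retrying" >&2
  done
  return 1              # all attempts timed out or failed
}

# Demo with a fast no-op command instead of a real clone:
retry_with_timeout 3 5 true && echo "clone ok"   # → prints "clone ok"
```

`timeout` here is the GNU coreutils utility; if every attempt exceeds the limit, the wrapper gives up and the build fails, which matches the "100% failure to clone" pattern described below.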
IT, please check on the health of dm-vcview04. We're approaching 100% failure to clone try now, with the first problem showing up at about 4:30 today.
Non-try repos seem fine. Try failures are independent of slave hardware.
While it's probably time to reset try again, I don't know the prerequisites for doing so, and we can't do it straight away. Besides, it would be strange if a relatively normal day's worth of pushes to try suddenly sent us over the edge.
Looks like there are about 12 try pulls in progress right now, and they seem to be going okay. It's a known issue that the try server can only handle so many concurrent pulls.
21:21 < nthomas> saw three clones happen in 20mins on linux64 slaves
21:21 < nthomas> finishing a few mins ago
21:24 < nthomas> yeah, so morph the bug into 'reset try repo again' ?
@john: Can you co-ordinate with folks and let us know when we can do this?
Haven't seen any failures since the accidental restart of Apache while bkero was adjusting the config earlier. And we're now in the window where we'd expect timeouts to appear, an hour after this fairly large batch of pushes.
(In reply to comment #5)
> @john: Can you co-ordinate with folks and let us know when we can do this?
1) How much downtime do you need for this?
2) This would only impact the try repo, and other branches could remain open throughout?
Would be great if this could be done at the same time as bug 614786 (both require tree closures).