At around 7:56 this morning we had a few slaves die during hg operations with errors like this:

requesting all changes
adding changesets
transaction abort!
rollback completed
abort: premature EOF reading chunk (got 47 bytes, expected 343)

And at 8:44 another slave tried to pull/update and got this error:

abort: data/mozilla2/master1.cfg.i@74eafc955fa9: no match found!
And another at 8:44:

requesting all changes
adding changesets
adding manifests
adding file changes
added 2462 changesets with 5455 changes to 1073 files (+1 heads)
updating working directory
abort: data/mozilla2/master1.cfg.i@74eafc955fa9: no match found!
On my way in, will start looking at this in like 20 minutes.
Was this limited to slaves from the castro office?
(In reply to comment #3)
> Was this limited to slaves from the castro office?

No, we had it happen on moz2-linux-slave* VMs, too, which are in MPT. It also started happening way before this morning's network problems.
Are you guys still noticing these issues, or can this bug fade away and die?
With the try server fixes, I think the rest of the backend servers are a lot happier, and you shouldn't be seeing any more hg timeouts, etc. Please re-open if that's not the case.