Our try repo accumulates a lot of heads over time, which eventually slows hg down. It would be nice to be able to prune the older heads.
Since we don't have any presumption of try being a stable-pullable repo, would it be possible to periodically just drop its current state on the floor and re-push a copy of our then-current m-c to the same location, instead of trying to do some unsupported operation in hg?
[Note I'm not sure if any of our infra needs the head information AFTER the fact]
That's what we currently do. We're looking for a solution we can enact without having to schedule a downtime while we nuke the entire repo.
(In reply to comment #1)
> Since we don't have any presumption of try being a stable-pullable repo, would
> it be possible to periodically just drop its current state on the floor, and
> re-push a copy from our then-current m-c to the same location.
> Instead of trying to do some unsupported function of hg?
> [Note I'm not sure if any of our infra needs the head information AFTER the
> fact]
Yeah, there's the risk of breaking builds if we drop the entire repo. There's always *some* delay between a change being pushed to try and the hg checkout being started; it can be several minutes, depending on whether a free build slave is available. If we reset the entire repo during that window, the subsequent hg clone will fail.
*** Bug 563340 has been marked as a duplicate of this bug. ***
Adding some people from the other bugs. See https://bugzilla.mozilla.org/show_bug.cgi?id=557325 (especially comment 6) for solutions here. Bug 563340 may also have useful information.
As per bug 563340, if we can determine why try is so slow, and fix the underlying issue, then we wouldn't have to prune the repo automatically. Pruning is just a workaround.
Is the issue here only the number of heads? If so, can we back out every branch that people push and merge it against the base mozilla-central changeset, to get rid of the added head?
(In reply to comment #7)
> Is the issue here only the number of heads? If so, can we back out every
> branch that people push and merge it against the base mozilla-central
> changeset, to get rid of the added head?
Yeah, the number of heads is the issue, AFAIK. If we did this directly on an hg host we'd also avoid getting these merges into pushlog, so we wouldn't get builds from them -- which is ideal.
This is bad again:
time wget http://hg.mozilla.org/try/rev/e61a1699f49c
Resolving hg.mozilla.org... 220.127.116.11
Connecting to hg.mozilla.org|18.104.22.168|:80... connected.
HTTP request sent, awaiting response... 200 Script output follows
Length: unspecified [text/html]
[ <=> ] 45,685 47.67K/s
14:21:29 (47.52 KB/s) - `e61a1699f49c' saved 
There are two issues here, I think:
1) Accessing the try repo itself (e.g. comment #9) becomes slow, which impacts people inspecting changesets and possibly build slaves pulling to build. Trimming heads may help.
2) Querying pushlog becomes slow, which impacts tbpl and buildbot.
Ted, any thoughts on whether 2) is fixed by trimming heads?
The pushlog queries that buildbot does should not touch the repo at all. (The feed simply hits the database.)
TBPL hits the HTML page, which does have to query the repo to get the changeset author and commit message.
*** Bug 529179 has been marked as a duplicate of this bug. ***
I'm not actively working on this, putting it back in the pool.
*** This bug has been marked as a duplicate of bug 1055298 ***