Closed Bug 1262029 Opened 5 years ago Closed 5 years ago

[shipping] json-changesets times out with mercurial 3.7.3, repo['default'] is evil

Categories

(Webtools Graveyard :: Elmo, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(firefox46+ fixed, firefox47+ fixed, firefox48+ fixed)

RESOLVED FIXED

People

(Reporter: Pike, Assigned: Pike)

Details

Attachments

(2 files)

Greg, after updating mercurial, one of my python code paths times out.

It seems that repo['default'] just takes VERY LONG. repo['tip'] is quick, as is repo['0334bcac4033'].

Judging from a Python shell, this is building a cache locally, as the second reference to it works like a charm.

There are no mercurial extensions involved.

My suspicion is that bug 1137668 would also help, as hg cat on the command line doesn't have a problem.

This is time-critical right now: either I go back to an earlier hg version, or we need to find something constructive. The current behavior is bad for releases; we have to manually enter the actual revision of mozilla-beta when going to build.
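
For illustration, a minimal sketch of what the slow lookup boils down to, assuming Mercurial 3.7's Python API and a hypothetical clone path (not elmo's actual code); the first 'default' lookup pays for the tags cache, the second one hits it:

```python
# Minimal sketch, assuming Mercurial 3.7's Python API (Python 2 era)
# and a hypothetical clone path; not elmo's actual code path.
import time
from mercurial import hg, ui as uimod

repo = hg.repository(uimod.ui(), '/path/to/mozilla-beta')  # hypothetical path

for rev in ('default', 'default', 'tip'):
    start = time.time()
    ctx = repo[rev]  # branch names go through full name resolution, including tags
    print('%s -> %s (%.1fs)' % (rev, ctx.hex()[:12], time.time() - start))
```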
oh, yeah.

My suspicion is that generating hgtagsfnodes1 or tags2-visible is slow. At least those two are new in .hg/cache once I'm through the expensive operation.

Greg, http://mercurial.808500.n3.nabble.com/PATCH-tags-extract-hgtags-filenodes-cache-to-a-standalone-file-td4021410.html says that you introduced that file? Is there a way to not pay the price?

In particular, I guess that with the web request timing out, I might actually never get to the completed cache file, so even reloading at a later time won't work.

Liz's later attempt probably worked because I ran the cache-generating command locally, and subsequent calls through the web didn't suffer.
Flags: needinfo?(gps)
....

Running `hg tags` from the command line does generate the cache. I could probably run that in automation, though I wonder if I need to run that each time .hgtags changes, or if it's good enough to just prime the cache once for each repo?
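
A hedged sketch of what such a priming step could look like, assuming a hypothetical REPO_ROOT directory that holds the dashboard's clones; this is not the automation that was actually set up:

```python
# Hedged sketch: prime the tags caches by running `hg tags` in every
# clone under a hypothetical REPO_ROOT; not elmo's actual automation.
import os
import subprocess

REPO_ROOT = '/var/hg/repos'  # hypothetical location of the clones

for name in sorted(os.listdir(REPO_ROOT)):
    path = os.path.join(REPO_ROOT, name)
    if not os.path.isdir(os.path.join(path, '.hg')):
        continue
    # `hg tags` forces .hg/cache/hgtagsfnodes1 and tags2-visible to be
    # (re)built, so later web requests don't pay the cost.
    with open(os.devnull, 'w') as devnull:
        subprocess.check_call(['hg', '-R', path, 'tags'], stdout=devnull)
```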
Yes, cache generation on first run can take a while. There are mitigations but I need to know exactly which cache is the problem.

rm every file in .hg/cache and then add "--profile" to the hg command exhibiting the problem so we can see which cache is at fault.

Also, if you do a regular clone (not a stream clone - which clone bundles will prefer if you are cloning from S3), the tags cache from the server is transferred to the client so the client doesn't have to recompute values.
Flags: needinfo?(gps)
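
As an aside, a hedged sketch of that measurement driven from Python, with a hypothetical repo path; the profiled `hg log` invocation is just one example of a command that resolves 'default':

```python
# Hedged sketch: clear .hg/cache and profile a command that resolves
# 'default'; the repo path is hypothetical.
import glob
import os
import subprocess

repo = '/path/to/mozilla-beta'  # hypothetical path

for cachefile in glob.glob(os.path.join(repo, '.hg', 'cache', '*')):
    os.remove(cachefile)

subprocess.check_call(['hg', '-R', repo, '--profile',
                       'log', '-r', 'default', '-T', '{node}\n'])
```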
Also, which repos are exhibiting the problem?
Flags: needinfo?(l10n)
I'm on a clone of mozilla-beta, without any extensions.

First, this is without hgtagsfnodes1 and tags2-visible
Flags: needinfo?(l10n)
Note: most of my clones are old; they're long-standing clones on the elmo infrastructure.

Note that the previous two commands took 36 seconds the first time, and then 0.3 seconds.
Flags: needinfo?(gps)
The slowness is definitely due to tags cache calculation.

As of Mercurial 3.4 or 3.5, `hg clone` will pull down tag cache entries that have been computed on the server, saving the client from recomputing them. The caveat is that this doesn't happen if a "stream/uncompressed clone" is used. And hg.mozilla.org is currently configured to serve stream/uncompressed clone bundles to clients in AWS, undermining this.

Anyway, the reason `repo['default']` takes a long time is that "default" isn't a reserved name, so Mercurial has to resolve all names (including tags) to determine what it means. If you resolve "tip", an integer, or a SHA-1 fragment, tags shouldn't need to be resolved. Furthermore, some commands (like `hg log` by default) print tags, which means they have to resolve tags anyway. You should be able to hack around this by using a template that doesn't print tags, e.g. `hg log -T '{node}\n'`.
Flags: needinfo?(gps)
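
A hedged sketch of one way a caller could sidestep the tags machinery entirely, assuming Mercurial 3.7's Python API and a hypothetical clone path: resolve the branch head through the branch map instead of through repo['default']. Whether elmo's eventual fix took this route is not shown here.

```python
# Hedged sketch: resolve the tip of 'default' without touching tags,
# assuming Mercurial 3.7's Python API and a hypothetical clone path.
from mercurial import hg, node, ui as uimod

repo = hg.repository(uimod.ui(), '/path/to/mozilla-beta')  # hypothetical path

tip_of_default = repo.branchtip('default')  # branch map lookup, no tags involved
ctx = repo[tip_of_default]                  # lookup by binary node is cheap
print('%s %s' % (ctx.branch(), node.hex(tip_of_default)))
```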
I'm not sure if the release repo is affected, but we should try to figure that out soon (i.e. before the next release, which is April 25/26).
This only comes up for the repos the l10n dashboard has, and those don't include the release repos. So this won't affect release migration.

It also probably works because we never update l10n between the last beta and the actual release, unless we manually edit stuff within ship-it, which happened once, IIRC.
We didn't time out for beta 10 milestones, so maybe we can resolve this bug. Is there anything left to do here?
Flags: needinfo?(l10n)
Flags: needinfo?(gps)
No, I think we're good here. Thanks.
Assignee: nobody → l10n
Status: NEW → RESOLVED
Closed: 5 years ago
Flags: needinfo?(l10n)
Flags: needinfo?(gps)
Resolution: --- → FIXED
Product: Webtools → Webtools Graveyard