Closed Bug 1279367 Opened 8 years ago Closed 8 years ago

Replication breaks for extremely large change sets (as in m-b clone to user repo)

Categories

Product: Developer Services
Component: Mercurial: hg.mozilla.org
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: hwine, Assigned: gps)

References

Details

(Whiteboard: [workaround])

See bug 1279311 for details; diagnosis from #vcs:
[15:33] <  hwine> | gps: hg pull on master yields: replication log not available; cannot close transaction
[15:34] <  hwine> | any tips - nothing in mana docs I see about it
[15:35] <    gps> | hwine: `hg pull` ?
[15:37] <  hwine> | gps: yes - was working on k.moir's busted clone, found empty repo at end, manually tried to populate
[15:38] <  hwine> | gps: see https://bugzil.la/1279311#c5 - I just updated that
[15:38] <firebot> | Bug 1279311 — ASSIGNED, hwine@mozilla.com — cloning mozilla-beta to my user repo doesn't do anything
[15:39] <  hwine> | atm it smells like the 'pash clone' might not be initializing the repo correctly.
[15:39] <    gps> | it's an issue with the replication system not liking to write 300k nodes in a single message
[15:40] <  hwine> | what's the workaround?
[15:42] <    gps> | i need to fix the replication system to not write the 40 character sha-1 for every node being replicated
[15:42] <    gps> | or tune kafka
[15:42] <  hwine> | okay - that's a "fix", is there a workaround?
[15:43] <    gps> | hg --config extensions.vcsreplicator=! pull; hg replicatesync
[15:43]           * hwine starts a screen session this time ;)
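The failure mode gps describes is easy to sanity-check with arithmetic: 300k hex-encoded 40-character SHA-1s comfortably exceed Kafka's historical ~1 MB default message size cap. A minimal sketch, assuming a 1 MB broker limit and a small per-node serialization overhead (both assumptions, not values from this bug):

```python
# Rough size of one replication message that inlines a 40-char hex
# SHA-1 for every node in the changegroup.
NODES = 300_000        # node count from the #vcs log above
SHA1_HEX_LEN = 40      # hex-encoded SHA-1
OVERHEAD = 3           # assumed per-node serialization overhead (quotes, comma)

payload = NODES * (SHA1_HEX_LEN + OVERHEAD)
kafka_default_max = 1_000_000  # assumed ~1 MB default message.max.bytes

print(payload)                      # 12,900,000 bytes, roughly 12.3 MiB
print(payload > kafka_default_max)  # the message cannot be produced as-is
```

This is why gps lists two possible fixes: stop writing every node into the message, or "tune kafka" (raise the broker's message size limit).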
This has popped up a couple more times: two user repos for mantaroh_gmail.com, plus I did the manual workaround to reset larch.
Assignee: nobody → gps
See Also: → 1281757
Whiteboard: [workaround]
Lemme try to crank this out today.
Status: NEW → ASSIGNED
Pushed by gszorc@mozilla.com:
https://hg.mozilla.org/hgcustom/version-control-tools/rev/9791517850fc
vcsreplicator: do not put all nodes in changegroup replication message
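The actual change is in the linked rev; as a hedged illustration of the approach the commit message names, a changegroup message can carry only the heads and a node count once the changegroup is large, letting consumers pull the full node set from the origin repo. Everything here (the `MAX_INLINE_NODES` threshold, the message name, the field names) is a hypothetical sketch, not the real vcsreplicator wire format:

```python
import json

MAX_INLINE_NODES = 1000  # hypothetical cutoff; the real value is in the patch


def changegroup_message(nodes, heads, source):
    """Build a replication message without inlining huge node lists.

    Small pushes keep the explicit node list; large changegroups (like a
    300k-node m-b clone) record only the heads and a count, so the
    message stays well under the broker's size limit.
    """
    msg = {
        "name": "hg-changegroup-2",  # hypothetical message name
        "source": source,
        "nodecount": len(nodes),
        "heads": heads,
    }
    if len(nodes) <= MAX_INLINE_NODES:
        msg["nodes"] = nodes
    return json.dumps(msg)
```

With this shape, message size is bounded by the (small) number of heads rather than the number of nodes replicated.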
Status: ASSIGNED → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
This is deployed to prod.