Closed Bug 1399916 Opened 7 years ago Closed 5 years ago

[merge-day] Can't push to mozilla-release: Symlinks aren't allowed in this repo


(Developer Services :: Mercurial:, defect)

Not set


(Not tracked)



(Reporter: jlorenzo, Unassigned)



(Whiteboard: [releaseduty])

Today is merge-day and we're blocked on the mozilla-beta to mozilla-release migration because of:

> 17:15:58     INFO -  pushing to ssh://
> 17:15:58     INFO -  searching for changes
> 17:15:58     INFO -  remote: adding changesets
> 17:15:58     INFO -  remote: adding manifests
> 17:15:58     INFO -  remote: adding file changes
> 17:15:58     INFO -  remote: added 9557 changesets with 95413 changes to 61664 files
> 17:15:58     INFO -  remote:
> 17:15:58     INFO -  remote: ********************************** ERROR **********************************
> 17:15:58     INFO -  remote: 080aa07d32df adds or modifies the following symlinks:
> 17:15:58     INFO -  remote:
> 17:15:58     INFO -  remote:   mobile/android/app/src/main/res/drawable-ldltr/as_contextmenu_divider.xml
> 17:15:58     INFO -  remote:
> 17:15:58     INFO -  remote: Symlinks aren't allowed in this repo. Convert these paths to regular
> 17:15:58     INFO -  remote: files and try your push again.
> 17:15:58     INFO -  remote: ***************************************************************************
> 17:15:58     INFO -  remote:
> 17:15:58     INFO -  remote: transaction abort!
> 17:15:58     INFO -  remote: rollback completed
> 17:15:58     INFO -  remote: pretxnchangegroup.mozhooks hook failed
> 17:15:58    ERROR -  abort: push failed on remote
> 17:15:58    ERROR -  Automation Error: hg not responding
> 17:15:58    ERROR - Return code: 255
> 17:15:58    FATAL - Push failed!  If there was a push race, try rerunning

This failure comes from the check added in Bug 985087.

`find mozilla-release -type l` shows these symlinks:
> mozilla-release/testing/mozharness/configs/single_locale/
> mozilla-release/testing/mozharness/configs/single_locale/
> mozilla-release/media/libav/README
> mozilla-release/mobile/android/app/src/main/res/drawable-ldltr/as_contextmenu_divider.xml

The first three still live in mozilla-central. That may hold up the mozilla-central to mozilla-beta migration.
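The hook's error message asks for exactly this fix: replace each symlink with a regular file holding the target's content. A minimal sketch of that conversion (hypothetical helper, not real Mozilla tooling):

```python
import os
import shutil
import tempfile

def materialize_symlink(path):
    """Convert `path` from a symlink into a regular file.

    Returns True if a conversion happened, False if `path` was
    not a symlink to begin with.
    """
    if not os.path.islink(path):
        return False
    target = os.path.realpath(path)   # resolve the link's target
    os.unlink(path)                   # remove the symlink itself
    shutil.copyfile(target, path)     # write a plain copy in its place
    return True

# Small demonstration against a throwaway directory.
demo = tempfile.mkdtemp()
real = os.path.join(demo, "as_contextmenu_divider.xml")
link = os.path.join(demo, "link.xml")
with open(real, "w") as fh:
    fh.write("<divider/>")
os.symlink(real, link)

converted = materialize_symlink(link)
```

After the conversion, `hg status` would show the path as a modified regular file, which is what the hook expects before the push is retried.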
Added the following to mozilla-release/.hg/hgrc on hgssh:

check.prevent_symlinks = false

to skip the symlink check. jlorenzo's push bombed on an intermittent vcsreplicator error:

18:19:35     INFO -  pushing to ssh://
18:19:35     INFO -  searching for changes
18:19:35     INFO -  remote: adding changesets
18:19:35     INFO -  remote: adding manifests
18:19:35     INFO -  remote: adding file changes
18:19:35     INFO -  remote: added 9557 changesets with 95413 changes to 61664 files
18:19:35     INFO -  remote: (prevent_symlinks check disabled per config override)
18:19:35     INFO -  remote: recorded push in pushlog
18:19:35     INFO -  remote: replication log not available; cannot close transaction
18:19:35     INFO -  remote: transaction abort!
18:19:35     INFO -  remote: rolling back pushlog
18:19:35     INFO -  remote: rollback completed
18:19:35     INFO -  remote: pretxnclose.vcsreplicator hook failed
18:19:35    ERROR -  abort: push failed on remote
18:19:35    ERROR -  Automation Error: hg not responding
18:19:36    ERROR - Return code: 255
18:19:36    FATAL - Push failed!  If there was a push race, try rerunning

which corresponds to the following on our side:

Sep 14 16:18:31 vcsreplicator: {moz}/releases/mozilla-release EXCEPTION Traceback (most recent call last):
  File "/var/hg/version-control-tools/pylib/vcsreplicator/vcsreplicator/", line 121, in pretxnclosehook
    repo.replicationpartition)
  File "/var/hg/version-control-tools/pylib/vcsreplicator/vcsreplicator/", line 56, in send_heartbeat
    }, partition=partition)
  File "/var/hg/version-control-tools/pylib/vcsreplicator/vcsreplicator/", line 49, in send_message
    self.topic, partition, msg)
  File "/var/hg/venv_pash/lib/python2.7/site-packages/kafka/producer/", line 349, in send_messages
    return self._send_messages(topic, partition, *msg)
  File "/var/hg/venv_pash/lib/python2.7/site-packages/kafka/producer/", line 390, in _send_messages
    fail_on_error=self.sync_fail_on_error
  File "/var/hg/venv_pash/lib/python2.7/site-packages/kafka/", line 480, in send_produce_request
    (not fail_on_error or not self._raise_on_response_error(resp))]
  File "/var/hg/venv_pash/lib/python2.7/site-packages/kafka/", line 247, in _raise_on_response_error
    raise resp
FailedPayloadsError

There were no other errors, so under the assumption that it was intermittent, I had jlorenzo push again and it worked.
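The mitigation here, and again on the later merges, is simply to retry the push once when the replication hook fails intermittently. A rough sketch of that pattern (function names are illustrative, not real tooling):

```python
import time

def push_with_retry(push, attempts=2, delay=0.0):
    """Call `push()` up to `attempts` times, retrying on failure.

    Re-raises the last error if every attempt fails.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return push()
        except RuntimeError as exc:  # e.g. "push failed on remote"
            last_error = exc
            if attempt < attempts:
                time.sleep(delay)
    raise last_error

# Simulate a push that fails once (intermittent replication error)
# and then succeeds, mirroring the behaviour seen on merge day.
state = {"calls": 0}

def flaky_push():
    state["calls"] += 1
    if state["calls"] == 1:
        raise RuntimeError("pretxnclose.vcsreplicator hook failed")
    return "pushed"

result = push_with_retry(flaky_push)
```

Note this is only safe because the failed transaction is fully rolled back on the server (the log shows "rollback completed"), so a second identical push starts from a clean state.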

I've re-enabled the symlink hook on mozilla-release.
The symlink hook grandfathered in existing exceptions. However, due to the way we do repo management, mozilla-release/mozilla-beta/etc. don't get the changesets that were already present on central until uplift day. The hook doesn't know those changesets were already pushed to central, so it complains. If we operated a single unified repo to push to, this wouldn't be a problem.
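The grandfathering logic described above can be sketched roughly as follows. This is a hypothetical simplification: the real hook lives in version-control-tools and works against Mercurial's hook API, and the exception list here is illustrative.

```python
# Paths that already contained symlinks when the check was introduced
# (illustrative subset; the real exception list lives server-side).
GRANDFATHERED = {
    "media/libav/README",
}

def find_new_symlinks(changed_files, grandfathered=GRANDFATHERED):
    """Return symlink paths not covered by an existing exception.

    `changed_files` maps a path to True if the incoming changesets add
    or modify it as a symlink.
    """
    return sorted(
        path
        for path, is_symlink in changed_files.items()
        if is_symlink and path not in grandfathered
    )

def check_push(changed_files):
    """Reject the whole transaction if any non-exempt symlink appears."""
    offenders = find_new_symlinks(changed_files)
    if offenders:
        raise ValueError(
            "Symlinks aren't allowed in this repo: " + ", ".join(offenders)
        )
    return True
```

This also shows why uplift day trips the check: mozilla-release first sees the offending changesets long after central accepted them, so a path central treats as already-known still looks like a brand-new symlink here.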

We'll experience the same failure for the next ESR. The mitigation will be the same: disable the check on the repo for the duration of the push.
Whiteboard: [releaseduty]
jlorenzo, can this bug be closed or should it remain open? Lizzard pinged me about the state of bug 1384626, which this blocks, so that's why I'm asking.
Flags: needinfo?(jlorenzo)
I'm actually unsure.

Back on merge day, we just worked around the problem. If I understand correctly, the next m-b to m-r merge won't be an issue because [1] disappeared from m-b and the other files are already in the m-r tree.

:gps pointed out ESR 59 might be problematic. We actually create the new branch, just like on a regular merge day[2].

Therefore, we may want to keep this bug open for the 59 migration. For sure, this doesn't block last month's migration (bug 1384626).

[1] mobile/android/app/src/main/res/drawable-ldltr/as_contextmenu_divider.xml
Flags: needinfo?(jlorenzo)
No longer blocks: 1384626
Today is merge-day! For the record, the issue described in comment 0 didn't occur. However, we hit the exact same race condition as the one in comment 1. Pushing once more solved the problem. It might be worth filing a separate issue. What do you all think?
Comment 5 was about the m-b to m-r merge. We hit that problem once more in the m-c to m-b merge. I think the pattern is: first fails, second passes. I don't know the details of vcsreplicator. Could it pass because the results were first cached and then used the second time? What do you think :gps?
Blocks: 1407602
Flags: needinfo?(gps)
Yes, the "very large push fails in vcsreplicator" is a separate bug. It /may/ be filed already. I certainly have known about issues with very large pushes for a while. It doesn't happen enough to justify me working on it :/

Why don't you just file a bug? If it is a dupe, we'll mark it someday :)
Flags: needinfo?(gps)
See Also: → 1415233
Okay, I filed bug 1415233. Let's keep the current bug open for esr59 (as said in comment 4).

I don't think this is an issue anymore

Closed: 5 years ago
Resolution: --- → FIXED
You need to log in before you can comment on or make changes to this bug.