Closed Bug 973298 · Opened 10 years ago · Closed 9 years ago

Update bmo deployment scripts for git

Categories

(bugzilla.mozilla.org :: Infrastructure, defect)

Tracking

RESOLVED FIXED

People

(Reporter: mcote, Assigned: fubar)

References

Details

(Whiteboard: [kanban:https://kanbanize.com/ctrl_board/4/198] )

This isn't a hard blocker for the migration, since changes will be mirrored to bzr.mozilla.org, but at some point the bmo deployment scripts should be updated to use git.mozilla.org/webtools/bmo/bugzilla.git.
Removing the blocker since we will just mirror bmo/4.2.  Probably should be done soonish after the migration, though.  glob, who maintains these scripts?
No longer blocks: 929685
Flags: needinfo?(glob)
(In reply to Mark Côté ( :mcote ) from comment #1)
> glob, who maintains these scripts?

fubar/ashish
Flags: needinfo?(glob)
Assignee: server-ops-webops → klibby
(In reply to Byron Jones ‹:glob› from comment #2)
> we should look at changing the workflow when we move to updating the webheads
> from git instead of bzr eg. create a current-production branch.

Generally speaking, it currently looks like this:

get current rev
bzr pull --overwrite $tag
get new rev
bzr log -n0 -r${currev+1}..${newrev}

which in git-ese is, I think:

git pull
git log ORIG_HEAD..HEAD

We could explicitly gather the revs before and after using git rev-parse HEAD and FETCH_HEAD, respectively.

This has the benefit of preserving ORIG_HEAD, so if something went pear-shaped one could 'git reset --hard ORIG_HEAD'. And, of course, one could break the pull into its constituent fetch && merge.
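
(For illustration only, a rough shell sketch of that explicit-rev approach; the remote name, branch, and variable names are assumptions rather than what the actual script uses:)

# record the currently deployed revision before touching anything
currev=$(git rev-parse HEAD)

# fetch and merge explicitly rather than using a bare 'git pull'
git fetch origin master
newrev=$(git rev-parse FETCH_HEAD)
git merge --ff-only FETCH_HEAD

# report what this update brought in (rough equivalent of the bzr log step)
git log "${currev}..${newrev}"

# if something goes pear-shaped, ORIG_HEAD (or $currev) still points at the old tip:
# git reset --hard ORIG_HEAD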

However, looking through puppet it seems that another popular option is doing a fetch and then 'git reset --hard origin/master'. I'm not sure what the benefit there is.

I'm guessing that it'd work similarly if you end up using branches, but I'm not certain. If you have an idea of what you're likely to do there, I can start poking in that direction.
Blocks: bzr-decom
we need to solidify our plan for branching/tagging in git before this can happen.
i'll update this bug once we've done that.
Flags: needinfo?(glob)
from irc scrollback it looks like our git branching situation was sorted out on friday.

our structure is now:

- day-to-day commits are made to the 'master' branch
- a new 'production' branch has been created
- on push day, a bmo dev will merge changes from 'master' to the 'production' branch

the deployment script should pull changes from git.mozilla.org/webtools/bmo/bugzilla.git's 'production' branch.
Flags: needinfo?(glob)
And bugzilla-dev should pull from the 'development' branch.
.. and the script which automatically updates bugzilla.allizom.org should pull from 'master'.
Ok, so...

bugzilla-dev -> development branch
bugzilla-stage -> master branch
bugzilla (prod) -> production branch

There's religion around how "best" to pull updates with git (pull vs fetch+merge), plus there's another method webops uses for deploys (git fetch && git reset --hard origin/master && git clean -f). I'm agnostic; do you have a preference?

(the latter is how most of our websites get pulled onto the web heads from the admin node, including bmo until it was recently changed to use rsync)
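
(A minimal sketch of how that branch mapping might look in the update script; the $SYSTEM variable and the overall structure are hypothetical:)

# choose the deploy branch based on which system is being updated
case "$SYSTEM" in
    bugzilla-dev)   branch=development ;;
    bugzilla-stage) branch=master ;;
    bugzilla-prod)  branch=production ;;
    *) echo "unknown system: $SYSTEM" >&2; exit 1 ;;
esac

git fetch origin "$branch"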
Flags: needinfo?(mcote)
Flags: needinfo?(glob)
I don't have a preference either... the latter sounds good (assuming "master" is corrected to the appropriate branch for the system :).  glob, I presume we're safe to use git clean (removes files unknown to git)?  Or are there generated or config files that should stick around?
Flags: needinfo?(mcote)
(In reply to Mark Côté ( :mcote ) from comment #9)
> I don't have a preference either... the latter sounds good (assuming
> "master" is corrected to the appropriate branch for the system :).

"master" is correct for bugzilla-stage

> glob, I presume we're safe to use git clean (removes files unknown to git)?
> Or are there generated or config files that should stick around?

there's at least one generated file (cvs-update.log) which isn't in .gitignore and only exists on moco-deployed infra, so i would prefer we didn't take the 'git clean' approach.  while we could update bmo code to accommodate our deployment situation, i think it would be better to use pull or fetch+merge instead :)

i too have no preference between pull or fetch+merge; as long as the files get on the systems.
Flags: needinfo?(glob)
I think I'm inclined to use 'git fetch && git reset' but without the 'git clean'. The clean makes sense where it's used (to make sure web heads are pristine, at the end of the deploy process), but isn't needed here. However, the reset means the working copy is correct for the deploy and we don't have to worry about a merge bombing out - any local changes to tracked files (there *shouldn't* be any, right?) are nuked.

I'll start testing on a local copy of dev and see if I run into any issues.
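
(Roughly what that looks like as a sketch, assuming the deploy branch is already known; the variable names and log handling are placeholders:)

# remember the deployed revision so the update can be reported (or reverted)
oldrev=$(git rev-parse HEAD)

# fetch the deploy branch and force the working copy to match it;
# deliberately no 'git clean', so untracked files like cvs-update.log survive
git fetch origin "$branch"
git reset --hard "origin/$branch"

# list the changes that just landed
git log "${oldrev}..HEAD"

# rollback, if needed: git reset --hard "$oldrev"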
Whiteboard: [kanban:https://kanbanize.com/ctrl_board/4/198]
Component: WebOps: Bugzilla → Infrastructure
Product: Infrastructure & Operations → bugzilla.mozilla.org
I think I've sorted out the various git incantations to match what we've been doing with bzr. The dev and staging scripts are also slightly different than prod, and there's a lot of less than ideal bash scripting (imnsho). I'll aim to get dev converted tomorrow and see how that goes. I'll let folks know when it's in place so we can be on the lookout for breakage.
update-bugzilla-stage has been updated to pull from git. It appears to do the right thing, but I need to verify that all of the correct files are there. There are local files that are not part of the repo, so I need to compare the trees and make sure I found everything.
nice work - as far as i can tell things appear to be working, and the change i made was deployed :)

it looks like cvs-update.log was truncated and only contains git messages.
i'm not sure whether that file was already empty prior to this change, but i'd like to ensure that the file on production doesn't get truncated.
yeah, that was laziness; i copied over the old data so it'll all be in the next push, and will do the same for dev/prod.
dev updated. will plan to do prod next week.
i just hit an issue where the changes that were part of an update weren't reflected in the running code.  manually restarting httpd fixed that.


it looks like httpd isn't being restarted correctly after the push:

before:
+ issue-multi-command bugzilla-dev /etc/init.d/httpd restart
[12:23:35] [2014-06-24 12:23:35] [web1.stage.bugs.scl3.mozilla.com] running: /etc/init.d/httpd restart
[12:23:35] [2014-06-24 12:23:35] [web5.stage.bugs.scl3.mozilla.com] running: /etc/init.d/httpd restart
[12:23:38] [2014-06-24 12:23:38] [web1.stage.bugs.scl3.mozilla.com] finished: /etc/init.d/httpd restart (3.288s)
[12:23:38] [web1.stage.bugs.scl3.mozilla.com] out: Stopping httpd: [  OK  ]
[12:23:38] [web1.stage.bugs.scl3.mozilla.com] out: Starting httpd: [  OK  ]
[12:23:38] [web1.stage.bugs.scl3.mozilla.com] err: [Tue Jun 24 12:23:37 2014] [warn] WARNING: HOME is not set, using root: /\n
[12:23:38] [web1.stage.bugs.scl3.mozilla.com] err: [Tue Jun 24 12:23:37 2014] [warn] _default_ VirtualHost overlap on port 443, the first has precedence
[12:23:38] [2014-06-24 12:23:38] [web5.stage.bugs.scl3.mozilla.com] finished: /etc/init.d/httpd restart (3.487s)
[12:23:38] [web5.stage.bugs.scl3.mozilla.com] out: Stopping httpd: [  OK  ]
[12:23:38] [web5.stage.bugs.scl3.mozilla.com] out: Starting httpd: [  OK  ]
...

now:
[06:53:27] + issue-multi-command bugzilla-dev '/etc/init.d/httpd restart'
[06:53:27] [2014-07-07 06:53:27] 
[06:53:27] [2014-07-07 06:53:27]
additionally, the script is bombing out and NOT emailing us if checksetup.pl fails to run:

[16:00:10] syntax error at ./extensions/BMO/Extension.pm line 1018, near "elsif"
[16:00:10] BEGIN not safe after errors--compilation aborted at ./extensions/BMO/Extension.pm line 1495.
[16:00:10] Compilation failed in require at Bugzilla/Extension.pm line 82.
[16:00:10] checksetup.pl exited with return code 255!
The httpd restarts weren't happening because I didn't realize that issue-multi-command checks for a TTY and, if one isn't found, will only read from STDIN.

failure to send mail on checksetup failure has been fixed with an exit trap to handle sending the mail. 

both tested via cron and command line. 

committed for dev in 90496. feel free to push a test, bonus for trying to push one that breaks checksetup.pl.
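
(A hedged sketch of the exit-trap pattern described above; the log path, mail command, and recipient are placeholders rather than what the real script uses:)

LOGFILE=/tmp/bugzilla-update.log

# mail the log on any non-zero exit, so a checksetup.pl failure still notifies us
notify_on_failure() {
    rc=$?
    if [ "$rc" -ne 0 ]; then
        mail -s "bugzilla update failed (rc=$rc)" team@example.com < "$LOGFILE"
    fi
}
trap notify_on_failure EXIT

./checksetup.pl >>"$LOGFILE" 2>&1
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "checksetup.pl exited with return code ${rc}!" >>"$LOGFILE"
    exit "$rc"
fi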
committed the fix for issue-multi-command for staging, finally. if this week continues to be quiet, will work on getting prod done.
Didn't intend to wait this long to convert prod, but c'est la vie. dev and staging have been working well; just converted production over to pulling from git and using the new update script.

checksetup.pl ran cleanly with no errors. I don't expect a deploy to go sideways, but we should be extra watchful on the next prod push (maybe do one EST morning?).
forgot to mention: backup tarballs of the bzr version are in /data/bugzilla/{src,www}.tgz for quick rollback.
It works; we're done here.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED