Both the old and the new cluster have the same data and the same proxy configuration, and the administrative cronjobs are already running on the new cluster, so this change should have no impact. We have laid the groundwork to migrate this hostname to the redundant releng web cluster and are ready to make the cutover. We want to touch base to make sure there are no local changes releng may have made to the system that would be missed, and to give you a heads-up in case this necessitates any change in workflow. Please let us know if you see any reason not to proceed.
Bulk-unassigning releng cluster migrations from me into the main webops component. Notes for whoever picks these up:
- Most (all?) of these come from relengweb1.dmz.scl3. Apache configs are in /etc/httpd/conf/domains. DocRoots are mostly under /var/www/html, but there's also some stuff in /data/www.
- There is a NetApp mount with 93GB of data on it; it possibly needs to be migrated, or possibly can just be used as-is on the new cluster.
- This stuff is all migrating to the new releng webapp cluster in scl3. The admin node is relengwebadm.private.scl3. Dustin and Amy are the primary contacts for questions.
- Some of this may need to go through CAB; they can advise which ones.
- How these are updated is unknown (at least by me). There are user accounts and file ownership permissions in place on relengweb1.dmz.scl3, and there are logins in "last", so perhaps people are updating the content directly, by hand.
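For anyone doing the inventory pass, a quick way to map the vhost configs in /etc/httpd/conf/domains to their docroots is just to pull the DocumentRoot directives out of the conf files. This is only a sketch; the `list_docroots` name and the output format are mine, not an existing tool:

```shell
#!/bin/sh
# Print the unique DocumentRoot paths declared anywhere under a config
# directory (e.g. /etc/httpd/conf/domains). Illustrative only.
list_docroots() {
    # -r: recurse into the conf dir; -h: omit filenames from matches.
    # The awk filter keeps only real directives (first field exactly
    # "DocumentRoot") and prints the path that follows it.
    grep -rh 'DocumentRoot' "$1" \
        | awk '$1 == "DocumentRoot" { print $2 }' \
        | sort -u
}
```

Run as `list_docroots /etc/httpd/conf/domains` on relengweb1 to get the list of roots that need to exist (or be mounted) on the new cluster.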
Sorry for the massive follow-up bug update, but we thought it best to avoid confusion. This site is currently served from relengweb1.dmz.scl3, but it is already set up and running on the releng cluster. There's no technical work left to do here, and no need to worry about the (ugly, weird) way things are hosted on relengweb1: all that remains is to change the CNAME. To do that, we need either CAB permission, releng sign-off, or a decision to move forward with neither. As a sanity check, I'd like to make one last comparison of the old and new clusters just before flipping the switch, in case anything has changed since the last three times I made that check. The new configuration is, as near as I could get it, a "normal" webapp configuration; Jake had a look and didn't see anything unusual or surprising, so I think I did OK. Updates are via the normal push procedures for sites with code, and via Puppet changes to the Apache config for code-free sites. Docs for all sites are in Mana.
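The pre-cutover comparison can be as simple as a checksum diff of the two docroots (pulled to, or mounted on, one host). A sketch, under the assumption that both trees are locally readable; `compare_roots` is my own helper name, not an existing tool:

```shell
#!/bin/sh
# Compare two directory trees by file checksum; prints a diff of the
# two manifests and returns non-zero if they differ. Illustrative only.
compare_roots() {
    old_sums=$(mktemp) || return 1
    new_sums=$(mktemp) || return 1
    # Checksum every file, paths relative to each root, sorted by path
    # so the two manifests line up for diff.
    ( cd "$1" && find . -type f -exec md5sum {} + | sort -k 2 ) > "$old_sums"
    ( cd "$2" && find . -type f -exec md5sum {} + | sort -k 2 ) > "$new_sums"
    diff -u "$old_sums" "$new_sums"
    status=$?
    rm -f "$old_sums" "$new_sums"
    return $status
}
```

In practice you'd rsync each cluster's docroot to a scratch area first (or run the find/md5sum half over ssh on each side) and diff the manifests; a clean exit means the content matches and the CNAME flip is safe from a content standpoint.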
DNS changed at 10am PDT today with no issues.