Since we are not pursuing OpenStack any longer (or at least anytime soon), we need to clean up all the cruft from the last POC.
This is a list of the VMs that I'll be deleting in this cleanup process. I believe some of them may still have config files that I'd like to back up and store for posterity, just in case we ever come back around to OpenStack (a rough backup sketch follows the host list).

controller1.admin.cloud.releng.scl3.mozilla.com
glance-controller1.admin.cloud.releng.scl3.mozilla.com
glance1.admin.cloud.releng.scl3.mozilla.com
horizon1.admin.cloud.releng.scl3.mozilla.com
ironic1.admin.cloud.releng.scl3.mozilla.com
ironic2.admin.cloud.releng.scl3.mozilla.com
ironic3.admin.cloud.releng.scl3.mozilla.com
keystone1.admin.cloud.releng.scl3.mozilla.com
network-node1.admin.cloud.releng.scl3.mozilla.com
neutron1.admin.cloud.releng.scl3.mozilla.com
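(Not part of the original comment.) For reference, a minimal sketch of how the config backup could be scripted before the VMs are deleted. The host list comes from above; the source path (/etc), the root SSH access, and the local destination directory are assumptions, not the actual paths that were backed up.

```python
#!/usr/bin/env python3
"""Sketch: archive /etc from each POC VM before deletion.

Assumptions: SSH access as root, rsync on both ends, and /etc as the
directory worth keeping -- adjust to whichever config paths matter.
"""
import subprocess
from pathlib import Path

HOSTS = [
    "controller1.admin.cloud.releng.scl3.mozilla.com",
    "glance-controller1.admin.cloud.releng.scl3.mozilla.com",
    "glance1.admin.cloud.releng.scl3.mozilla.com",
    "horizon1.admin.cloud.releng.scl3.mozilla.com",
    "ironic1.admin.cloud.releng.scl3.mozilla.com",
    "ironic2.admin.cloud.releng.scl3.mozilla.com",
    "ironic3.admin.cloud.releng.scl3.mozilla.com",
    "keystone1.admin.cloud.releng.scl3.mozilla.com",
    "network-node1.admin.cloud.releng.scl3.mozilla.com",
    "neutron1.admin.cloud.releng.scl3.mozilla.com",
]

# Hypothetical local destination for the per-host archives.
BACKUP_ROOT = Path("openstack-poc-configs")


def backup_host(host: str) -> None:
    """Pull /etc from one host into a per-host directory via rsync over ssh."""
    dest = BACKUP_ROOT / host
    dest.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["rsync", "-az", f"root@{host}:/etc/", f"{dest}/"],
        check=True,
    )


if __name__ == "__main__":
    for host in HOSTS:
        print(f"backing up {host} ...")
        backup_host(host)
```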
There are a handful of VLANs that were created for the OpenStack project, and netops needs use of one of them soonish... VLAN 2100. However, if none of the VLANs are in use, we shouldn't just leave the configs in place -- they should be cleaned up. Can you please let me know if these VLANs are still in use or not? There is some urgency to this, as we want to use VLAN 2100 as part of the PHX1 to SCL3 vMotion move. Thanks in advance.
(In reply to Dave Curado :dcurado from comment #2)
> There are a handful of VLANs that were created for the OpenStack project,
> and netops needs use of one of them soonish... VLAN 2100.
> However, if none of the VLANs are in use, we shouldn't just leave the
> configs in place -- they should be cleaned up.
> Can you please let me know if these VLANs are still in use or not?
> There is some urgency to this, as we want to use VLAN 2100 as part of the
> PHX1 to SCL3 vMotion move.
> Thanks in advance.

Dave,

We are not using the OpenStack VLANs any longer. They may be removed.
Is the rest of this bug still held up with comment 1 ("I believe some of them may still have config files that I'd like to backup")? Or is this free to decom completely?
(In reply to Greg Cox [:gcox] from comment #4)
> Is the rest of this bug still held up with comment 1 ("I believe some of
> them may still have config files that I'd like to backup")? Or is this free
> to decom completely?

Yes, I'll take care of that ASAP (right now).
I retrieved the files I wanted to hold on to and deleted the VMs listed in c1.
Extra cleanup:

10.26.102.16 = controller1.admin.cloud.releng.scl3
10.26.102.17 = glance-controller1.admin.cloud.releng.scl3
10.26.103.3 = glance1.admin.cloud.releng.scl3
10.26.103.4 = horizon1.admin.cloud.releng.scl3
10.26.103.6 = ironic1.admin.cloud.releng.scl3
10.26.103.1 = ironic2.admin.cloud.releng.scl3
10.26.103.2 = ironic3.admin.cloud.releng.scl3
10.26.103.7 = keystone1.admin.cloud.releng.scl3
10.26.102.18 = network-node1.admin.cloud.releng.scl3
10.26.103.5 = neutron1.admin.cloud.releng.scl3

Deleted from DNS and inventory. No RHN or puppetdashboard entries. There's a little bit left for netops; I'll file that. VLANs 288, 296, 2100, 2102, and 2105 pulled from ESX and UCS. Nothing interesting in infra puppet. I think that's everything here.
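(Not part of the original comment.) A minimal sketch of how one could double-check that the forward and reverse DNS records above are really gone. The IP/hostname pairs are taken from the cleanup list; the .mozilla.com suffix is assumed to match the FQDNs from comment 1.

```python
#!/usr/bin/env python3
"""Sketch: confirm the decommissioned A and PTR records no longer resolve."""
import socket

# IP -> FQDN pairs from the cleanup list above, with the assumed
# .mozilla.com suffix appended to match the hostnames in comment 1.
RECORDS = {
    "10.26.102.16": "controller1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.102.17": "glance-controller1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.3": "glance1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.4": "horizon1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.6": "ironic1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.1": "ironic2.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.2": "ironic3.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.7": "keystone1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.102.18": "network-node1.admin.cloud.releng.scl3.mozilla.com",
    "10.26.103.5": "neutron1.admin.cloud.releng.scl3.mozilla.com",
}


def check(ip: str, fqdn: str) -> None:
    # Forward lookup: the hostname should no longer resolve to anything.
    try:
        addr = socket.gethostbyname(fqdn)
        print(f"STILL RESOLVES: {fqdn} -> {addr}")
    except socket.gaierror:
        print(f"ok, A record gone: {fqdn}")
    # Reverse lookup: the IP should no longer have a PTR record.
    try:
        name, _, _ = socket.gethostbyaddr(ip)
        print(f"STILL HAS PTR: {ip} -> {name}")
    except (socket.herror, socket.gaierror):
        print(f"ok, PTR record gone: {ip}")


if __name__ == "__main__":
    for ip, fqdn in RECORDS.items():
        check(ip, fqdn)
```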
Thanks for the misc cleanup. r/f
Oh, also: there were OpenStack templates from the POC; I came across those yesterday. Removed those as well.