Closed Bug 1170212 Opened 9 years ago Closed 9 years ago

Decommission symbols{1,2}.dmz.phx

Categories

(Infrastructure & Operations :: Virtualization, task)

Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: ted, Assigned: gcox)

References

Details

(Keywords: spring-cleaning, Whiteboard: [vm-delete:2])

Once we close all the deps of bug 1071724 and have everything uploading symbols directly to the Socorro symbol API we can decommission symbols{1,2}.dmz.phx.

(If this isn't the right component feel free to move it.)
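For anyone landing here later: the direct upload that replaces SSH pushes to these hosts is just an authenticated HTTP POST of a symbols zip. A hedged sketch of what such an upload command looks like; the endpoint URL, token, and form-field name below are placeholders, not the values partners were actually given:

```shell
#!/bin/sh
# Sketch only: URL, token, and zip name are placeholders (hypothetical),
# not the real symbol API endpoint or a real credential.
SYMBOLS_ZIP="symbols.zip"                            # zip of .sym files
UPLOAD_URL="https://example.invalid/symbols/upload"  # placeholder endpoint
AUTH_TOKEN="REDACTED"                                # per-user API token

# Build and print the command rather than running it, since the
# token and URL above are fake.
CMD="curl -X POST -H \"Auth-Token: ${AUTH_TOKEN}\" -F \"${SYMBOLS_ZIP}=@${SYMBOLS_ZIP}\" ${UPLOAD_URL}"
echo "$CMD"
```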
Keywords: spring-cleaning
Blocks: 1181647
bug 1071724 comment 11 says "More work to do in dependent bugs, but in our current state does not block AWS move.  It does block turning off symbolpush".

Is this decom still blocked?
  (If so, can we get a new blocker bug attached for ease of go/no-go?)
Well, we turned off syncing from the NetApp to S3, so I guess that ship has sailed! We might still have some stragglers that need to migrate to the new setup (I was contacted by the Fedora distro maintainer, and I think I saw a bug about a B2G partner), but these machines are not actually doing anything useful now so we might as well get rid of them.
OK then!
10.8.74.48 = symbols1.dmz.phx1
10.8.74.88 = symbols2.dmz.phx1
Has NFS, no netvault.

Pulled from nagios in change 107230, powered off.  Per standard practice we'll leave the VMs powered off but untouched.  If it causes issues we can power back on pretty quickly.  In a week or so we'll delete them and they won't be recoverable, but you can ask for more time during that week.
See Also: → 1197011
Per an email thread, one of our B2G partners hadn't switched over to the new upload API. Can we power these back on until we get that resolved? (I know rhelmer turned off the auto-sync script, so we'll probably need to do a one-off sync if they upload via SSH.)
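For context, a one-off sync of that sort is just a recursive copy from the NFS-mounted upload area to the destination. A minimal local stand-in (the paths and file contents here are made up for illustration; the real job copied from the NetApp volume to S3, typically with rsync or an S3 sync tool rather than plain cp):

```shell
#!/bin/sh
# Stand-in for a one-off symbol sync. Both ends are throwaway local
# dirs here; the real source was the NFS volume and the real
# destination was S3.
SRC=$(mktemp -d)   # stands in for the NFS-mounted upload area
DST=$(mktemp -d)   # stands in for the S3 destination

mkdir -p "$SRC/firefox"
# A fake breakpad .sym header, purely for demonstration.
echo "MODULE windows x86 DEADBEEF firefox.pdb" > "$SRC/firefox/demo.sym"

# Recursive copy of everything new under SRC into DST.
cp -R "$SRC/firefox" "$DST/"

ls "$DST/firefox"
```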
Powered back on.  Did not revert nagios back to monitoring them, under the assumption this is temporary.

If there's a bug for the partner to get fixed, please mark it as blocking this bug.
(In reply to Ted Mielczarek [:ted.mielczarek] from comment #4)
> Per an email thread, one of our B2G partners hadn't switched over to the new
> upload API. Can we power these back on until we get that resolved? (I know
> rhelmer turned off the auto-sync script, so we'll probably need to do a
> one-off sync if they upload via SSH.)

I did indeed turn off syncing, just lmk if you need it back on or if you want me to do a one-off sync.
VMs still needed?  If so, is bug 1130138 an appropriate blocker?
(In reply to Greg Cox [:gcox] from comment #7)
> VMs still needed?  If so, is bug 1130138 an appropriate blocker?

Yeah, that's the right blocker bug. I think we only have the one B2G partner that was using this upload endpoint. I'll help get them moved over so we can close this out for good.
Depends on: 1130138
All blockers are out. This is ready to decomm.
ack, will pull them on Monday after I'm back from PTO.
Assignee: server-ops-virtualization → gcox
Powered off.  Entering hold for complaints until Friday.
Inventory, DNS, RHN, puppet dashboard done.
Zeus: will file for webops due to the evacuation of phx1.

Removed node declaration from puppet (change 109638); as before, left the module definitions in place.
Nothing in newrelic.
Will file netops changes.
VMs deleted from disk.
That removes the last users of the symbols NFS volume in phx1.  Unmounted and offlined that; will destroy soon.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Whiteboard: [vm-delete:2]