I don't know how much space the NETAPP_STORAGE mount has on the AMO boxes, but plans are coming together to pull the images out of the database(!) in bug 482837 and that's going to be about 2-3G extra right there, and then any new images will be put on disk as well. Can you verify that we'll have enough space for this now and in the future? Also, what is the nagios disk space monitoring threshold on this mount since it'll be growing faster after we close that bug?
Right now AMO is sharing a mount with the rest of the webapps. That mount has 14G free. I just created a new mount with 90G of usable space that I want to migrate AMO to soon, but that'll probably need some downtime.
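For reference, a minimal sketch of the kind of free-space check nagios would run against a mount. The 90% warning threshold and the shape of the check are assumptions; I don't know the actual monitoring config:

```shell
# Hypothetical free-space check along the lines of nagios' check_disk.
# The 90% threshold is an assumption, not the real monitoring config.
check_mount() {
    mount="$1"
    # df -P guarantees one line per filesystem; field 5 is "Use%".
    pct_used=$(df -P "$mount" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$pct_used" -ge 90 ]; then
        echo "WARNING: $mount is ${pct_used}% used"
        return 1
    fi
    echo "OK: $mount is ${pct_used}% used"
}

check_mount / || true
```

Whatever the real threshold is, it probably needs revisiting once the images start landing on disk, since the growth rate will jump after that bug closes.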
Sweet. Do you want to do that at the same time that we upgrade AMO (9/29)? I can add it to the production push steps.
What's the largest image size? Want to make sure ZXTM is set to cache something that big (Amsterdam is already set to 20MB).
god, I hope it's under that
1.6M is the largest image I found.
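For the record, a one-liner along these lines will surface the largest file under a tree. It's demonstrated here on a throwaway directory, since the real image path isn't in this bug:

```shell
# Sketch: report the single largest file under a directory tree.
# Uses a throwaway directory; point find at the real image root in practice.
dir=$(mktemp -d)
printf 'x%.0s' $(seq 1 100)  > "$dir/small.png"
printf 'x%.0s' $(seq 1 5000) > "$dir/big.png"

# GNU find prints "size path"; sort numerically, take the top entry.
largest=$(find "$dir" -type f -printf '%s %p\n' | sort -rn | head -1)
echo "$largest"

rm -rf "$dir"
```

Either way, 1.6M is comfortably under the 20MB ZXTM cache limit.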
(In reply to comment #2)
> Sweet. Do you want to do that at the same time that we upgrade AMO (9/29)? I
> can add it to the production push steps

Did we do that?
Nope, but the mount is ready whenever. Just need some time to rsync over. Is it possible to disable new addon uploads?
We had a "disable submissions" flag, but the latest dev tools apparently no longer respect it. So, no.
Wil, I believe this can wait till the next content push - do you know when that is?
The next AMO cycle is freeze on the 13th, push on the 20th. We hope to get this in this cycle but with the search bugs taking up time I'm not sure we'll be able to. We'll need to preview/verify this first so preview.amo will need to have ample space as well.
Preview is already using the new mount. We can switch over addons whenever. Only catch is we'll need a way to disable submissions while we sync the files, which could take an hour or so.
Can't we sync them ahead of time and then in our regularly scheduled downtime do another sync to pick up any last minute changes?
Yeah, I plan on doing that, but the netapp is crazy slow so I'd still expect it to take a little while.
It sounds like we're going to have some downtime on Thursday for database upgrades. Can we do this at the same time?
We should be able to keep the diffs fairly low if we start syncing sooner rather than later.
(In reply to comment #15)
> It sounds like we're going to have some downtime on Thursday for database
> upgrades. Can we do this at the same time?

Tomorrow is Thursday. Can we plan for this?
What is the new ETA for this?
IT failed to carry this out last Thurs. Does this coming Tues @ 9pm work for everyone?
Any idea how long this will take? It seems to me we can start keeping the two storage units in sync earlier with no downtime, and that should minimize downtime when we cut over.
I don't, but what you suggest is what we'd do anyway.
Any reason not to knock this out tonight? I've done the initial rsync and am timing a resync right now. I'll post the estimate in a bit.
We are delaying for the release.
The resync took 36 min.