Bug 837705 (Closed)
Opened 12 years ago
Closed 12 years ago
More disk needed for Socorro dev/stage database systems
Categories: Infrastructure & Operations :: DCOps, task
Tracking: Not tracked
Status: RESOLVED FIXED
People: Reporter: scabral, Unassigned
+++ This bug was initially created as a clone of Bug #790690 +++
dev and stage need more disks, too. Production (tp-master01-socorro01 and tp-master02-socorro02) has been updated.
Reporter
Updated•12 years ago
Whiteboard: [2013q1]
Comment 1•12 years ago
Moving this to IT so that we actually order these. Causing problems/pages again.
Assignee: nobody → server-ops-devservices
Blocks: 826271
Component: Database → Server Operations: Developer Services
Product: Socorro → mozilla.org
QA Contact: shyam
Version: unspecified → other
Reporter
Comment 2•12 years ago
Shyam, can we order disks for dev/stage Socorro so we can increase the disk space in those environments? We've already upgraded production.
Reporter
Comment 3•12 years ago
Dev and stage are 2 different machines, so let's get each of them to the right capacity. Given the stats below, Laura, what do you think the right capacity is for dev? Stage is the same as production (1 instance), so we can just order the same capacity as production, but dev has a few extra things on it, so I'm not sure about that one.
Right now production looks like this:
/dev/mapper/vg_dbsoc-lv_wal
50G 15G 36G 29% /wal
/dev/sdb1 2.5T 724G 1.8T 29% /pgdata
stage looks like this:
/dev/mapper/vg_dbsoc-lv_wal
50G 33M 50G 1% /wal
/dev/cciss/c0d0p1 1.1T 665G 453G 60% /pgdata
dev looks like this:
/dev/mapper/vg_dbsoc-lv_wal
50G 33M 50G 1% /wal
/dev/cciss/c0d0p1 1.1T 869G 249G 78% /pgdata
10.8.70.226:/vol/pio_symbols
2.9T 2.6T 239G 92% /mnt/socorro/symbols
[root@socorro1.dev.db.phx1 pgdata]# du -sh * | grep G
63G crash-stats-dev
58G devdb
688G pgslave
61G tmp
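For reference, the numbers above are just plain df -h / du output; to re-check them on any of the three hosts (assuming the same /wal and /pgdata mount points as shown above), something like this should do:
df -h /wal /pgdata
cd /pgdata && du -sh * | grep G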
Comment 4•12 years ago
Corey can grab this.
Assignee: server-ops-devservices → server-ops-virtualization
Component: Server Operations: Developer Services → Server Operations: Virtualization
QA Contact: shyam → dparsons
Updated•12 years ago
Assignee: server-ops-virtualization → server-ops-webops
Component: Server Operations: Virtualization → Server Operations: Web Operations
QA Contact: dparsons → nmaul
Comment 5•12 years ago
Clarifying the summary, because it confused me... we just did prod. :)
Summary: More disk needed for Socorro database systems → More disk needed for Socorro dev/stage database systems
Updated•12 years ago
Assignee: server-ops-webops → cshields
Comment 6•12 years ago
what are the dev and stage hostnames so I can find their SNs in inventory?
Comment 7•12 years ago
(In reply to Corey Shields [:cshields] from comment #6)
> what are the dev and stage hostnames so I can find their SNs in inventory?
Here you go:
socorro1.stage.db.phx1.mozilla.com
socorro1.dev.db.phx1.mozilla.com
Comment 8•12 years ago
Adding Rich to this.
Rich, we need to upgrade the disks in 2 SB40c's (SNs SGI1250012 and SGI125000Y). Can we hit 3TB in each of those?
Comment 9•12 years ago
Max capacity is 6 x 1TB drives (limited by the 2.5 inch form factor), either:
1TB 6G SAS 7.2K rpm SFF DP Midline HDD
or
HP 1TB 3G SATA 7.2K 2.5in MDL HDD
The newer D2200 storage blades can do 12 drives of 1TB SATA or 900GB SAS.
Comment 10•12 years ago
I hate SATA, so let's go with SAS. We can easily go with 6 drives, but do we have to get 1TB ones? Do you have a 500 or 600GB drive we can get 6 of?
Comment 11•12 years ago
Sorry, I didn't realize you meant 3TB total (I thought that was per drive). Yes, we can get 6 of these for the SB40c storage blades:
600 GB - hot-swap - 2.5" SFF - SAS-2 - 10000 rpm
Drives are $435 each
Lead time is about 10 days
Comment 12•12 years ago
(In reply to Rich Pomper from comment #11)
> Sorry, I didn't realize you meant 3TB total (I thought that was per drive).
> Yes, we can get 6 of these for the SB40c storage blades:
> 600 GB - hot-swap - 2.5" SFF - SAS-2 - 10000 rpm
> Drives are $435 each
> Lead time is about 10 days
Great! Email me a quote for 12 please. Thanks!
Reporter
Comment 13•12 years ago
Ship date is now 4/15. FYI.
Reporter
Comment 14•12 years ago
Rich, is there a tracking # for these?
Reporter
Comment 15•12 years ago
We should have received these in phoenix already.
Reporter
Comment 16•12 years ago
Can this be scheduled into a phoenix DC trip?
1st order
12 drives (6 per box) shipped via the following tracking information
Shipped Via - FEDEX GROUND delivered last Friday
Tracking#(s) - 075570099379942 , 075570099379959
2nd order
Shipped Via - FEDEX GROUND 166454776296887 delivered on Monday
We will be returning one order.
Comment 17•12 years ago
(In reply to Sheeri Cabral [:sheeri] from comment #16)
> Can this be scheduled into a phoenix DC trip?
>
:sheeri, we'll have people onsite on 4/25 and 4/29
Assignee: cshields → server-ops-dcops
Component: Server Operations: Web Operations → Server Operations: DCOps
QA Contact: nmaul → dmoore
Updated•12 years ago
colo-trip: --- → phx1
Comment 18•12 years ago
4/29 is the next trip.
Comment 19•12 years ago
I am on site with the drives. Please ping me in #dcops to coordinate the upgrade, as I am unable to find any DBAs.
Comment 20•12 years ago
Notified #breakpad that we are starting on stage first.
Comment 21•12 years ago
I swapped out all 6 drives at once on socorro1.stage.db.phx1.mozilla.com, which had unexpected results.
mpressman wants to swap 1 drive at a time on socorro1.dev.db.phx1.mozilla.com to avoid downtime. I informed him that this won't be completed this trip, and he was okay with continuing and finishing the upgrade next trip.
Comment 22•12 years ago
FYI, the next scheduled trip is 5/14.
Comment 23•12 years ago
These drives are taking a few hours to rebuild after every drive replacement. I was able to swap out 2 drives yesterday and will be swapping out the final 2 today.
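If anyone needs to check on progress between swaps, something along these lines should show the rebuild state (hpacucli syntax from memory, and assuming the SB40c controller is in slot 3 as it is on the dev box):
sudo hpacucli ctrl all show status
sudo hpacucli ctrl slot=3 logicaldrive 1 show
The logical drive status should go back to OK once each rebuild finishes, at which point it's safe to pull the next drive.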
Comment 24•12 years ago
Completed - 6x 600GB drives swapped into the storage array.
[vle@socorro1.dev.db.phx1 ~]$ sudo hpacucli ctrl all show config

Smart Array P400 in Slot 3 (sn: PAFGL0T9S0U00O)

   array A (SAS, Unused Space: 1717342 MB)

      logicaldrive 1 (1.1 TB, RAID 6 (ADG), OK)

      physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 600 GB, OK)
      physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 600 GB, OK)
      physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 600 GB, OK)
      physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 600 GB, OK)
      physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 600 GB, OK)
      physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 600 GB, OK)

Smart Array P410i in Slot 0 (Embedded) (sn: 5001438006B81890)

   array A (SAS, Unused Space: 0 MB)

      logicaldrive 1 (136.7 GB, RAID 1, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
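One follow-up note: array A on the P400 now shows ~1.7TB of unused space, but logicaldrive 1 is still 1.1TB, so /pgdata won't actually grow until the logical drive is extended into that space and the partition/filesystem is grown on top. Assuming this controller and hpacucli version support online extension, the logical drive side would be roughly:
sudo hpacucli ctrl slot=3 logicaldrive 1 modify size=max
followed by resizing the partition and filesystem; the exact steps depend on the layout, so leaving that to whoever does the expansion.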
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Updated•10 years ago
Product: mozilla.org → Infrastructure & Operations