Treeherder is filling disk space



4 years ago


(Reporter: bjohnson, Unassigned)


(Depends on: 1 bug)


(Whiteboard: [treeherder][data:breakfix-other])



4 years ago
Treeherder has 300GB data partitions and is rapidly closing in on that limit.

In particular, some of the tables are quite large and may need to have a data lifecycle defined.

[root@treeherder2.db.scl3 mysql]# du -sh ./* | grep G | grep -v bin
2.3G	./ash_jobs_1
17G	./b2g_inbound_jobs_1
1.7G	./b2g_inbound_objectstore_1
2.2G	./cedar_jobs_1
33G	./fx_team_jobs_1
2.1G	./fx_team_objectstore_1
3.6G	./gaia_try_jobs_1
3.6G	./jamun_jobs_1
1.5G	./maple_jobs_1
4.5G	./mozilla_aurora_jobs_1
1.2G	./mozilla_b2g30_v1_4_jobs_1
1.5G	./mozilla_beta_jobs_1
16G	./mozilla_central_jobs_1
1.3G	./mozilla_central_objectstore_1
84G	./mozilla_inbound_jobs_1
6.5G	./mozilla_inbound_objectstore_1
46G	./try_jobs_1
6.0G	./try_objectstore_1

Comment 1

4 years ago
Discussed with jeads and gcox today.

During the growth discussions we also covered the data lifecycle. The data is set to expire after 6 months, at which point the app should purge it automatically. They're using 250GB now at 3.5 months of data. An estimated 700GB has been requested from the virt team.
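The arithmetic behind the request can be sketched as follows (a back-of-the-envelope estimate only; the figures for current usage, data age, and retention are taken from the comment above, and the 700GB request presumably includes headroom on top of the steady state):

```python
# Back-of-the-envelope capacity estimate for the 700GB request.
RETENTION_MONTHS = 6       # data expires and is purged after 6 months
current_gb = 250.0         # usage today
current_age_months = 3.5   # age of the oldest data

# With a fixed retention window, usage grows linearly until the window
# fills, then plateaus at roughly (growth rate) * (retention).
monthly_growth = current_gb / current_age_months        # ~71.4 GB/month
steady_state_gb = monthly_growth * RETENTION_MONTHS     # ~428.6 GB
headroom_gb = 700.0 - steady_state_gb                   # ~271 GB spare
```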
Assignee: server-ops-database → server-ops-virtualization
Component: Server Operations: Database → Server Operations: Virtualization
QA Contact: scabral → cshields

Comment 2

4 years ago
We're going with the 700GB figure. It comes with a line-in-the-sand handshake agreement that this is as big as it gets, because the environment's resources need to be shared with others.

treeherder2 was bumped to 700GB. Waiting for a DB failover before working on treeherder1.

Comment 3

4 years ago
DB failover done and treeherder1 boosted to 700GB. Closing out with :cyborgshadow's concurrence.
Last Resolved: 4 years ago
Resolution: --- → FIXED
Depends on: 1078392
Whiteboard: [treeherder]
Whiteboard: [treeherder] → [treeherder][data:breakfix-other]
Product: → Infrastructure & Operations