Closed
Bug 1144798
Opened 10 years ago
Closed 9 years ago
set up expiration/lifecycle policies for S3 buckets
Categories
(Socorro :: Infra, task)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: rhelmer, Unassigned)
We don't have any expiration policies set up. These work based on when the data was uploaded (HTTP PUT) into S3, so adding one wouldn't impact our older data yet anyway.
We at least need one on the main crash bucket, and we may want to run a manual cleanup to remove the old data when we transition to a new region (bug 1144179).
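A minimal sketch of what such an expiration rule could look like, assuming boto3; the bucket name and rule ID below are hypothetical placeholders, not taken from this bug:

import boto3

s3 = boto3.client("s3")

# Expire objects 365 days after they were uploaded (PUT) into the bucket.
# A lifecycle rule is bucket-level configuration: it covers existing and
# future objects alike, judged by each object's upload date.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-crash-bucket",          # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-crashes-after-365-days",
                "Filter": {"Prefix": ""},   # empty prefix = every object
                "Status": "Enabled",
                "Expiration": {"Days": 365},
            }
        ]
    },
)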
Comment 1•10 years ago
What kind of params are we thinking here? How many months/years should a file have existed prior to being archived?
Comment 3•9 years ago
The TTL should be 12 months, as per the retention policy we have:
https://mana.mozilla.org/wiki/pages/viewpage.action?pageId=5734601#crash-stats.mozilla.com%28Socorro%29-DataExpirationPolicy
Can we retrofit this for all existing objects in the bucket easily, or do we need to loop over all of them and update one at a time?
Also, what needs to be done so that it applies to all new objects being stored?
(/me not very well versed in S3 advanced features)
Comment 4•9 years ago
JP, to accomplish this do we need to change our Python code (where we use boto to write), or is it something we can do globally in configuration on AWS?
Comment 5•9 years ago
This is all set, with a 365-day TTL.
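A minimal sketch, again assuming boto3 and the same hypothetical bucket name, of how the applied rule could be read back and verified:

import boto3

s3 = boto3.client("s3")

# Fetch the bucket's lifecycle configuration and print each rule's
# ID, status, and expiration so the 365-day setting can be confirmed.
config = s3.get_bucket_lifecycle_configuration(Bucket="example-crash-bucket")
for rule in config["Rules"]:
    print(rule["ID"], rule["Status"], rule.get("Expiration"))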
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED