Read from a LIST of S3 buckets

Status

RESOLVED WONTFIX

People

(Reporter: peterbe, Unassigned)

(Reporter)

Description

3 years ago
At some point we need to move all our raw crashes from one bucket to another (a new one that doesn't have dots in the name). During that transition (the slow sync) we're going to have some raw crashes in the new bucket and some in the old one, since we're going to change the bucket that the collector (or crashmover) writes to.

So, when downloading a raw crash we should have the capability to read from a LIST of buckets instead. That way it'll try the first bucket and not raise a CrashIDNotFound until it has tried all of them.
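
Roughly, the read path would look something like this minimal sketch, assuming boto3, an illustrative key layout, and a local stand-in for CrashIDNotFound; the names here are placeholders, not Socorro's actual code:

import json

import boto3
from botocore.exceptions import ClientError


class CrashIDNotFound(Exception):
    """Stand-in for Socorro's CrashIDNotFound exception (illustrative)."""


def get_raw_crash(crash_id, bucket_names, s3_client=None):
    """Fetch a raw crash, trying each bucket in the given order."""
    s3_client = s3_client or boto3.client("s3")
    key = "v2/raw_crash/%s" % crash_id  # illustrative key layout only
    for bucket_name in bucket_names:
        try:
            response = s3_client.get_object(Bucket=bucket_name, Key=key)
            return json.loads(response["Body"].read())
        except ClientError as exc:
            if exc.response["Error"]["Code"] in ("NoSuchKey", "404"):
                continue  # not in this bucket; try the next one
            raise
    # Only give up once every configured bucket has been checked.
    raise CrashIDNotFound(crash_id)

The bucket list would be configured with the new bucket first, e.g. get_raw_crash(crash_id, ["new-bucket-without-dots", "old.bucket.with.dots"]), so most reads hit the new bucket and only not-yet-synced crashes fall through to the old one.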
(Reporter)

Comment 1

3 years ago
Lars,
Does this make sense? Perhaps we need to do the same when reading processed crashes too. I'm thinking in particular of socorro.external.boto.crash_data.
(K Lars Lohn [:lars] [:klohn])

Comment 3

the FallbackCrashStorage class doesn't implement the 'get*' methods as it was intended for save operations only.  To use it as you suggest, the "get*" methods will have to be added.
(Reporter)

Comment 4

3 years ago
(In reply to K Lars Lohn [:lars] [:klohn] from comment #3)
> the FallbackCrashStorage class doesn't implement the 'get*' methods as it
> was intended for save operations only.  To use it as you suggest, the "get*"
> methods will have to be added.

What's this then?
https://github.com/mozilla/socorro/blob/cb73ba18eef033f3d18eac151eef65354b45ba07/socorro/external/crashstorage_base.py#L697-L753
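
Those get* methods boil down to a try-the-primary-then-the-fallback delegation. A hedged paraphrase of that shape (reusing the CrashIDNotFound stand-in from the sketch in the description; this is not a copy of the linked code):

class FallbackReadStorage:
    """Illustrative shape of a fallback read, not Socorro's FallbackCrashStorage."""

    def __init__(self, primary, fallback):
        self.primary = primary    # e.g. storage bound to the new bucket
        self.fallback = fallback  # e.g. storage bound to the old bucket

    def get_raw_crash(self, crash_id):
        try:
            return self.primary.get_raw_crash(crash_id)
        except CrashIDNotFound:
            # The slow sync may not have copied this crash yet, so check
            # the old bucket before giving up.
            return self.fallback.get_raw_crash(crash_id)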
Well, I'm obviously working from faulty memory.
At some point, we're going to switch S3 buckets, but all the good plans I've seen so far deal with the complexity via infrastructure and not via code changes like this one.

Given that, I'm going to WONTFIX this. If it turns out we really do need to make big changes like this, then we can re-open.
Status: NEW → RESOLVED
Last Resolved: a year ago
Resolution: --- → WONTFIX