Closed
Bug 1075700
Opened 10 years ago
Closed 3 years ago
encrypt/decrypt crash blobs in primary store
Categories
(Socorro :: Backend, task)
Tracking
(Not tracked)
RESOLVED
WONTFIX
People
(Reporter: lonnen, Unassigned)
Details
As an added layer of protection, we should encrypt crash blobs before putting them on disk and decrypt them on read. This could be accomplished with a metacrashstorage class. It is prudent to implement this for S3, but I'm not sure it's worth using with HBase.
Comment 1•10 years ago
What does this look like in the context of the larger system ? In particular, where is the key (or equivalent, depending on what encryption mechanism is used) stored ? I'm guessing it'd be somewhere on our internal infra, meaning that we'd encrypt the blobs here, then transfer them over to Amazon for storage. Elucidation ?
Comment 2•10 years ago
06:25:45 <@lars> it would be implemented as a meta-crashstore in the same manner as the benchmarking crashstore
06:27:57 <@lars> it accepts the same api as the other crashstores, but holds a reference to a subsidiary crashstore. On receiving a crash via the save_* method, it does whatever the encryption step ends up being, then passes the result on to the subsidiary crashstore
06:28:46 <@lars> implemented like that, it can be added via configuration to _any_ other crashstore that we've got.
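The wrapper lars describes can be sketched as follows. This is illustrative only: the class and method names are not Socorro's actual crashstorage API, and the XOR "cipher" is a toy stand-in for whatever the real encryption step would be (a vetted AES library, or AWS KMS as suggested later in this bug).

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric placeholder: applying it twice with the same key
    # returns the original bytes. A real deployment would use a proper
    # cipher here, not XOR.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

class InMemoryCrashStore:
    """Toy subsidiary store standing in for S3 or HBase."""
    def __init__(self):
        self.blobs = {}
    def save_raw_crash(self, crash_id, blob):
        self.blobs[crash_id] = blob
    def get_raw_crash(self, crash_id):
        return self.blobs[crash_id]

class EncryptingCrashStore:
    """Meta-crashstore: accepts the same API as other crashstores, but
    holds a reference to a subsidiary store and encrypts/decrypts blobs
    on the way through. The key would come from configuration."""
    def __init__(self, subsidiary, key: bytes):
        self.subsidiary = subsidiary
        self.key = key
    def save_raw_crash(self, crash_id, blob: bytes):
        # Encrypt, then pass the result on to the subsidiary store.
        self.subsidiary.save_raw_crash(crash_id, xor_cipher(blob, self.key))
    def get_raw_crash(self, crash_id) -> bytes:
        return xor_cipher(self.subsidiary.get_raw_crash(crash_id), self.key)

store = EncryptingCrashStore(InMemoryCrashStore(), key=b"secret-from-config")
store.save_raw_crash("abc123", b'{"signature": "OOM | small"}')
# Round-trips through the wrapper, but the subsidiary store only ever
# sees ciphertext.
assert store.get_raw_crash("abc123") == b'{"signature": "OOM | small"}'
assert store.subsidiary.blobs["abc123"] != b'{"signature": "OOM | small"}'
```

Because the wrapper exposes the same interface it consumes, it can be layered over any existing crashstore purely through configuration, as the IRC log notes.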
Reporter
Comment 3•10 years ago
bcrypt in a metacrashstorage class with a secret in the config will work fine. The collector, processor, and middleware will all need to be able to use it, since all three read or write crashes. We want to encrypt raw and processed crashes not just before they get written down, but also before they are transmitted between data centers.
Updated•10 years ago
Assignee: nobody → rhelmer
Status: NEW → ASSIGNED
Reporter
Comment 4•10 years ago
First -- I was wrong about bcrypt and kind of hand-wavy about the whole thing. We can use SSL for upload and download from S3. I think we can downgrade this from 'need' to 'nice to have' iff boto validates SSL. We should look into S3 access control; maybe we have to run a proxy in AWS until the rest of the services move, just so the primary store isn't open to the world.
Comment 5•10 years ago
(In reply to Chris Lonnen :lonnen from comment #4)
> First -- I was wrong about bcrypt and kind of hand wavy about the whole
> thing. We can use ssl for upload and download from s3. I think we can
> downgrade this from 'need' to 'nice to have' iff boto validates SSL.
>
> We should look into s3 access control and maybe we have to run a proxy in
> AWS until the rest of the services move, just so the primary store isn't
> open to the world.

We should be able to lock the bucket down using ACLs, to restrict read access to certain AWS IDs. There are also "policies" that would allow us to restrict to IP, which might be a reasonable additional measure (that way the AWS ID+key leaking wouldn't lead to disclosure unless the IP was also controlled): http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
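A bucket policy along the lines comment 5 describes, combining an AWS-principal restriction with an IP condition, might look like the sketch below. The account ID, bucket name, and CIDR range are all placeholders, not real values from this deployment.

```python
import json

# Illustrative S3 bucket policy: allow read/write only to one AWS account,
# and only from a known IP range, so a leaked key alone is not enough.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrashStoreAccess",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-crash-bucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

# With boto3 this would be applied with something like:
#   boto3.client("s3").put_bucket_policy(
#       Bucket="example-crash-bucket", Policy=json.dumps(policy))
print(json.dumps(policy, indent=2))
```

The `IpAddress` condition is what gives the second factor: even with a valid AWS ID and key, requests from outside the listed range are denied.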
Comment 6•10 years ago
This might still be nice to do. Per bug 1096510 we have verified that over-the-wire encryption is good, so leaving this open but not blocking.
No longer blocks: 1091124
Comment 7•9 years ago
Not currently working on this but I suggest looking into Amazon's KMS instead of doing a custom implementation.
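Comment 7's suggestion means no custom crypto code on the Socorro side at all: S3 can encrypt objects at rest with a KMS-managed key (SSE-KMS). A minimal sketch, with placeholder bucket, object key, and KMS alias (none of these are real Socorro values):

```python
# Request parameters for an S3 upload with server-side KMS encryption.
# With boto3 these kwargs would be passed straight to put_object(); the
# actual call is commented out because it needs AWS credentials.
put_kwargs = {
    "Bucket": "example-crash-bucket",
    "Key": "raw_crash/abc123",           # illustrative object key
    "Body": b'{"signature": "OOM | small"}',
    "ServerSideEncryption": "aws:kms",   # ask S3 to encrypt at rest
    "SSEKMSKeyId": "alias/crash-store",  # placeholder KMS key alias
}
# boto3.client("s3").put_object(**put_kwargs)
```

Decryption on `get_object` is transparent for any principal that has `kms:Decrypt` on the key, which keeps key management in IAM rather than in Socorro configuration.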
Assignee: rhelmer → nobody
Status: ASSIGNED → NEW
Comment 8•3 years ago
This hasn't been touched in 6 years. I understand the use case and it would be nice to have, but the buckets we store crash data in are pretty restricted so I'm not sure the value here is worth the effort.
I'm going to close it as WONTFIX. We can revisit if a compelling reason comes up.
Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → WONTFIX