Closed Bug 1617187 Opened 5 years ago Closed 5 years ago

configure collector and processor/webapp for AWS SQS on stage

Categories

(Cloud Services :: Operations: Socorro, task)


RESOLVED FIXED

People

(Reporter: willkg, Assigned: brian)


Socorro uses different configuration for the collector than for the processor/webapp. We need to configure all three services (collector, processor, webapp) to use AWS SQS queues instead of Google Pub/Sub.

The collector needs this:

CRASHPUBLISH_ACCESS_KEY
CRASHPUBLISH_SECRET_ACCESS_KEY
CRASHPUBLISH_REGION
CRASHPUBLISH_QUEUE_NAME

The CRASHPUBLISH_QUEUE_NAME value should be the name of the standard queue created in bug #1617008.
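As a sketch, the collector environment would look something like this (every value below is a placeholder; the real queue name is whatever the standard queue was called in bug #1617008):

CRASHPUBLISH_ACCESS_KEY=AKIAEXAMPLEKEYID
CRASHPUBLISH_SECRET_ACCESS_KEY=exampleSecretKey
CRASHPUBLISH_REGION=us-west-2
CRASHPUBLISH_QUEUE_NAME=socorro-stage-standard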

The processor and webapp need this:

queue.crashqueue_class=socorro.external.sqs.crashqueue.SQSCrashQueue
resource.boto.standard_queue
resource.boto.priority_queue
resource.boto.reprocessing_queue

The queue names should be those created in bug #1617008.

Note: We'll remove the queue.crashqueue_class setting after the migration, once we've removed the Pub/Sub code and changed the default.

If we're using different AWS credentials for S3 and SQS access, we'll additionally need these set:

queue.access_key
queue.secret_access_key

To clarify, when you say "queue name", do you really mean the queue's name? I'm wondering whether you actually want the queue's URL or the queue's ARN instead.

Depends on: 1617008

I literally mean "the queue's name". The code looks up the queue's URL based on the queue's name.
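To illustrate the name-to-URL relationship: SQS queue URLs follow a predictable shape, and the name is enough to resolve one via the GetQueueUrl API. A minimal sketch (the account ID, region, and queue name below are made up; real code asks SQS rather than building the URL by hand):

```python
def sqs_queue_url(region, account_id, queue_name):
    """Return the URL that SQS's GetQueueUrl API resolves for a queue name.

    This only illustrates the name -> URL relationship; the production code
    calls GetQueueUrl instead of constructing the URL itself.
    """
    return f"https://sqs.{region}.amazonaws.com/{account_id}/{queue_name}"

# Made-up account ID and queue name, purely for illustration.
print(sqs_queue_url("us-east-1", "123456789012", "willkg_socorro_standard"))
# -> https://sqs.us-east-1.amazonaws.com/123456789012/willkg_socorro_standard
```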

For example, when I set up my local dev environment to use real AWS SQS queues, I configured it this way:

resource.boto.access_key=xxx
secrets.boto.secret_access_key=xxx
resource.boto.region=us-east-1
resource.boto.standard_queue=willkg_socorro_standard
resource.boto.priority_queue=willkg_socorro_priority
resource.boto.reprocessing_queue=willkg_socorro_reprocessing

Thanks. A few more questions:

Does the collector actually need CRASHPUBLISH_ACCESS_KEY and CRASHPUBLISH_SECRET_ACCESS_KEY? My hope is that I can leave these out and it will use the IAM instance profile; it's using that to upload to S3 now.

Which of these settings can I put in place in advance of the migration, and which ones can only be added or changed as part of the migration?

I see we have queue.crashqueue_class for the processor, so I'm guessing I could add the other settings but leave that one until we want it to start reading from SQS. For the collector, I see we have a CRASHPUBLISH_CLASS setting that sounds similar, but that wasn't mentioned above.

Flags: needinfo?(willkg)

(In reply to Brian Pitts from comment #3)

> Does the collector actually need CRASHPUBLISH_ACCESS_KEY and CRASHPUBLISH_SECRET_ACCESS_KEY? My hope is that I can leave these out and it will use the IAM instance profile; it's using that to upload to S3 now.

If that IAM instance profile has access to both S3 and SQS, then that'll work fine and you can leave those keys out.

> Which of these settings can I put in place in advance of the migration, and which ones can only be added or changed as part of the migration?

You can add them all now except for CRASHPUBLISH_CLASS and queue.crashqueue_class, which switch the collector, processor, and webapp over to the AWS SQS queue code.

> I see we have queue.crashqueue_class for the processor, so I'm guessing I could add the other settings but leave that one until we want it to start reading from SQS. For the collector, I see we have a CRASHPUBLISH_CLASS setting that sounds similar, but that wasn't mentioned above.

My bad; we'll also need to set CRASHPUBLISH_CLASS.

In full, and taking the above into account, we need this for the collector:

CRASHPUBLISH_CLASS=antenna.ext.sqs.crashpublish.SQSCrashPublish
CRASHPUBLISH_REGION
CRASHPUBLISH_QUEUE_NAME

and this for the processor/webapp:

queue.crashqueue_class=socorro.external.sqs.crashqueue.SQSCrashQueue
resource.boto.standard_queue
resource.boto.priority_queue
resource.boto.reprocessing_queue
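Sketching that out with placeholder values (the region and queue names below are made up; the real names come from bug #1617008, and the access keys are omitted on the assumption the IAM instance profile covers SQS):

CRASHPUBLISH_CLASS=antenna.ext.sqs.crashpublish.SQSCrashPublish
CRASHPUBLISH_REGION=us-west-2
CRASHPUBLISH_QUEUE_NAME=socorro-stage-standard

queue.crashqueue_class=socorro.external.sqs.crashqueue.SQSCrashQueue
resource.boto.standard_queue=socorro-stage-standard
resource.boto.priority_queue=socorro-stage-priority
resource.boto.reprocessing_queue=socorro-stage-reprocessing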
Flags: needinfo?(willkg)
Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED