Closed Bug 1391771 Opened 7 years ago Closed 7 years ago

get processor working in docker container

Categories

(Socorro :: General, task)

Type: task
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: willkg, Assigned: willkg)

References

Details

Attachments

(1 file)

To run the processor in the local development environment, you do something like:

$ docker-compose up processor

That works fine--the processor comes up, is very chatty, and then settles into its "see if there's anything to do, then rest for 7 seconds" loop.

If you add a crash id to the socorro.normal queue, then the processor will try to process the crash. That part works fine, too.
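
For reference, a rough pika sketch of pushing a crash id onto the socorro.normal queue by hand. The RabbitMQ host, credentials, queue settings, and the crash id below are placeholders, not values from the config--check docker-compose.yml for the real ones:

import pika

# Assumed RabbitMQ location/credentials for the local dev environment.
params = pika.ConnectionParameters(
    host="rabbitmq",
    credentials=pika.PlainCredentials("guest", "guest"),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# The processor pulls crash ids off the "socorro.normal" queue. The declare
# must match the existing queue's settings (durable=True is an assumption).
channel.queue_declare(queue="socorro.normal", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="socorro.normal",
    body="de1bb258-cbbf-4589-a673-34f800160918",  # made-up example crash id
)
connection.close()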

However, when it tries to retrieve the crash data, it gets a CrashNotFound error.

I verified that the crash data is in the correct place in the S3 container, so I think this is either a bug or a configuration issue in the processor.
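
One quick way to eyeball what's actually in the fake S3 from outside the processor is boto3 pointed at the localstack endpoint. The endpoint, bucket name, and key prefix here are assumptions about the local dev setup rather than values pulled from the config:

import boto3

# Assumed localstack S3 endpoint and bucket for the local dev environment.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4572",
    region_name="us-east-1",
    aws_access_key_id="foo",
    aws_secret_access_key="foo",
)

# List raw crash keys; the "v2/raw_crash/" prefix is a guess at the layout.
resp = s3.list_objects_v2(Bucket="crashstats", Prefix="v2/raw_crash/")
for obj in resp.get("Contents", []):
    print(obj["Key"])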

This bug covers figuring that out and fixing it.
I'll work on this next week. Pretty sure it's related to the source.resource_class and related configuration. I'll know more after some debugging time.
Assignee: nobody → willkg
Status: NEW → ASSIGNED
Bah--I can't get it out of my head.

The first problem is a handful of configuration issues: we need to use a different resource_class, calling_format, and temporary file storage location.
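
For context on why calling_format matters: boto's default S3 calling format is subdomain-style (bucket.host), which doesn't work against a localstack container. A rough boto 2 sketch of the path-style connection the processor needs--the host, port, and bucket name are assumptions about the local dev setup:

from boto.s3.connection import OrdinaryCallingFormat, S3Connection

# Path-style (OrdinaryCallingFormat) addressing against the fake S3 container;
# subdomain-style addressing would try to resolve "<bucket>.localstack-s3".
conn = S3Connection(
    aws_access_key_id="foo",
    aws_secret_access_key="foo",
    host="localstack-s3",   # assumed container hostname
    port=4572,              # assumed localstack S3 port
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)
bucket = conn.get_bucket("crashstats", validate=False)  # assumed bucket name
print(list(bucket.list(prefix="v2/raw_crash/")))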

The second problem was that the fetch_crash_data script was putting the raw crash file in a slightly wrong place. I fixed that in the PR for that script.

The third problem is a bunch of db errors followed by an S3 error. The db errors come from a missing table; I need to figure out what creates that table and make sure it gets run somewhere during setup. I think the S3 error is a configuration problem.

Most of these things are specific to the local development environment, but one of them will affect server environments, too. There might be other issues, but it feels like we're getting pretty close.

I'm going to generalize this bug beyond just the S3 connection problem.
Summary: processor can't access raw crash data in docker local development environment → get processor working in docker container
Commit pushed to master at https://github.com/mozilla-services/socorro

https://github.com/mozilla-services/socorro/commit/a83c5a2849b2edce20fb63512810121d3b5ddd36
fixes bug 1391771 - fixes processor in local dev environment (#3929)

* fixes bug 1391771 - fixes processor in local dev environment

* fix configuration so processor uses S3 container
* add two more things to run_update_data.sh that maintain the appropriate
  weekly tables
* update localstack to 0.7.4

* Configure stackwalk to use /tmp instead of /mnt for symbols

* Redo weekly table creation

This drops the two crontabber jobs and replaces them with a script we can throw
away at some point when postgres is no longer used for crash storage.

The script creates the last 8 weeks of tables plus the next 2 weeks of tables.
That should be a sufficient amount of time that covers crashes we might be
debugging.

* Remove unused import
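
For anyone poking at that weekly-table script later, the date math it has to do is roughly the following. The reports_YYYYMMDD naming and Monday-based weeks are assumptions here--the script in the repo is the authoritative version:

from datetime import date, timedelta

def weekly_partition_dates(today=None, past_weeks=8, future_weeks=2):
    # Yield week-start dates covering the last 8 and next 2 weeks.
    today = today or date.today()
    monday = today - timedelta(days=today.weekday())
    for offset in range(-past_weeks, future_weeks + 1):
        yield monday + timedelta(weeks=offset)

for d in weekly_partition_dates():
    print("reports_%s" % d.strftime("%Y%m%d"))
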
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
