Sometimes reports fail to process; the URL field above shows one such case. Usually it's because the crash happens somewhere that Socorro can't process. We should show an error at the top of the report page when a report fails to process correctly, so anyone looking at it knows the report is "finished" processing.
lars: I know the jobs table has a "success" field. Could we add that to the reports table, and propagate it over? We could also test it for NULL to fix bug 422945.
hmm, this discussion makes me think there is, perhaps, a flaw in my refactoring because I was unaware of a requirement. In my current code, the insertion of a record into the 'reports' table, as well as its subservient tables, happens within a transaction. If processing fails, the transaction is rolled back, eliminating the record in the 'reports' table. In my code, failing processing means that there is no record for that report in the 'reports' table.

This behavior is beneficial for 422945, because an outside observer will never see a partially finished report. A successful process commits the whole thing at once in a completed state.

If this behavior is incorrect, then refactoring the refactoring is in order. If the reports table is to include the results of failed processing, then we need to reexamine the relationship between the 'jobs' and 'reports' tables. The 'reports' table may take over much of the purpose of the 'jobs' table. The 'jobs' columns 'success', 'queuedDateTime', 'startedDateTime', 'completedDateTime' and 'message' (the explanation of a failure) may be more appropriate in the 'reports' table.
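The transaction-per-report behavior described above can be sketched like this. This is a minimal illustration only: sqlite3 stands in for the real database, and the table layouts, `process_report` helper, and UUIDs are all invented for the example.

```python
import sqlite3

# Hypothetical schema: a 'reports' row plus rows in a subservient table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (uuid TEXT PRIMARY KEY, signature TEXT)")
conn.execute("CREATE TABLE frames (report_uuid TEXT, frame_num INTEGER)")

def process_report(conn, uuid, fail=False):
    try:
        with conn:  # one transaction covers the report and its child rows
            conn.execute("INSERT INTO reports VALUES (?, ?)", (uuid, "sig"))
            conn.execute("INSERT INTO frames VALUES (?, 0)", (uuid,))
            if fail:
                raise RuntimeError("simulated processing failure")
    except RuntimeError:
        pass  # the rollback already happened; no partial report is visible

process_report(conn, "ok-uuid")
process_report(conn, "bad-uuid", fail=True)

rows = [r[0] for r in conn.execute("SELECT uuid FROM reports")]
print(rows)  # only the successfully processed report remains
```

The upshot, as noted above: a failed job leaves no trace in 'reports' at all, which is exactly the behavior being questioned in this discussion.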
What you've implemented isn't any worse than what we have now, it just isn't better. The problem is that we have a class of users that expects to see *something* from their crash report. If their report fails to process, we ought to put something in the reports table so they can see that their report was in fact accepted and run through the processor, but there was an error processing it.
When I say "better" I mean from the end-user's point of view. Your code is in fact much better from a development standpoint. :)
I think I will add 'success', 'queuedDateTime', 'startedDateTime', 'completedDateTime' and 'message' to the 'reports' table. I will also take the insert into the reports table out of the transaction, though this will mean some extra work backing out if a report fails due to a quit request to the processor. It does have some added advantages for gathering stats about the process. Until this point, I've avoided making modifications to the original tables because I feared the ramifications.

If I change the 'reports' table schema, do I need to revisit .../model/__init__.py to change the SQLAlchemy model definitions? How are schema changes propagated to the database? (I can see what appears to be a previous generation of schema changes in the aforementioned model definitions.) I had thought to write a one-off script that would change the database schema, but that, of course, would not change the Web app's view of the schema. Is there a better or more appropriate way?

What is the future of the WebApp? Do we need to maintain it for a while, so that it must live in parallel with my refactoring?
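A one-off migration script of the kind contemplated here might look roughly like this. This is a sketch only: sqlite3 stands in for the real database, the column names follow the discussion above but are not final, and `NEW_COLUMNS` is an invented name. As noted, the SQLAlchemy model definitions would still need a matching update for the web app to see the new columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (uuid TEXT PRIMARY KEY)")

# Proposed additions to the 'reports' table, per the discussion above.
NEW_COLUMNS = [
    ("queueddatetime", "TIMESTAMP"),
    ("starteddatetime", "TIMESTAMP"),
    ("completeddatetime", "TIMESTAMP"),
    ("success", "BOOLEAN"),
    ("message", "TEXT"),
]
for name, col_type in NEW_COLUMNS:
    conn.execute(f"ALTER TABLE reports ADD COLUMN {name} {col_type}")

columns = [row[1] for row in conn.execute("PRAGMA table_info(reports)")]
print(columns)
```

The drawback raised above still applies: a script like this changes the database but not the application's view of the schema, so the model definitions must be kept in sync by hand.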
Assignee: nobody → lars
Priority: -- → P1
Target Milestone: --- → 0.5
The processor will now make an entry in the reports table even if processing that report failed. Four new columns have been added to the reports table:

'starteddatetime' – copied from the temporary entry in the 'jobs' table; the time the processor started the job.
'completeddatetime' – copied from the temporary entry in the 'jobs' table; the time the processor completed the job.
'success' – a boolean indicating whether the processor completed the job successfully.
'message' – a message indicating any error conditions that occurred during processing of the dump. This generally corresponds directly with an exception being thrown in the Python program during processing. The breakpad_stackwalk program does not pass error messages on to the processor, so if a dump fails due to a problem encountered by breakpad_stackwalk, that error is not reflected in this value.
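The recording behavior described above can be sketched as follows. This is not the actual Socorro processor code: the `record_outcome` helper, the reduced schema, and the sample data are all hypothetical, with sqlite3 standing in for the real database.

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE reports (uuid TEXT, starteddatetime TEXT, "
    "completeddatetime TEXT, success INTEGER, message TEXT)"
)

def record_outcome(conn, uuid, started, exc=None):
    # Even a failed job produces a row: timing columns, a success flag,
    # and the Python-side error message (if any).
    conn.execute(
        "INSERT INTO reports VALUES (?, ?, ?, ?, ?)",
        (uuid, started, datetime.now(timezone.utc).isoformat(),
         exc is None, str(exc) if exc else None),
    )
    conn.commit()

record_outcome(conn, "good", "2008-01-01T00:00:00")
record_outcome(conn, "bad", "2008-01-01T00:00:01",
               exc=RuntimeError("dump parse error"))

failed = list(conn.execute(
    "SELECT uuid, message FROM reports WHERE success = 0"))
print(failed)  # [('bad', 'dump parse error')]
```

As noted above, only errors raised on the Python side reach 'message'; a failure inside breakpad_stackwalk would leave it empty.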
This is done, pushed on Wednesday.
Status: NEW → RESOLVED
Last Resolved: 11 years ago
Resolution: --- → FIXED
Component: Socorro → General
Product: Webtools → Socorro