https://rpm.newrelic.com/accounts/677903/applications/4180461/traced_errors/3879605775/similar_errors https://rpm.newrelic.com/accounts/677903/applications/5585473/traced_errors/3879605775/similar_errors?original_error_id=3879605775 Will take a look over the next day or two, presuming it persists and no one else beats me to it.
I ran a manage.py cycle_data manually on stage+prod yesterday, which succeeded, and today's automatic job didn't result in any exceptions on New Relic. Will see how it goes for the next couple of days. I suspect this may just need bug 1165984 fixed, or else defragmenting the artifact tables (bug 1161618) may reduce their size on disk enough that operations against them become quicker.
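For context, a cycle_data-style command typically expires old rows in small batches so that no single DELETE holds table locks for long. The sketch below is a hypothetical illustration of that pattern (the table name, column, and chunk size are assumptions, not the real Treeherder schema), using an in-memory SQLite table as a stand-in:

```python
import sqlite3
import datetime

def cycle_data(conn, cutoff, chunk_size=100):
    """Delete rows older than `cutoff`, chunk_size rows at a time,
    committing between chunks to keep each transaction short."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM job WHERE id IN "
            "(SELECT id FROM job WHERE submit_time < ? LIMIT ?)",
            (cutoff, chunk_size),
        )
        conn.commit()
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total

# Stand-in data: 250 jobs, one per day going back from a fixed "now".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, submit_time TEXT)")
now = datetime.datetime(2015, 6, 1)
rows = [(i, (now - datetime.timedelta(days=i)).isoformat())
        for i in range(1, 251)]
conn.executemany("INSERT INTO job VALUES (?, ?)", rows)

# Expire everything older than a 120-day retention window.
cutoff = (now - datetime.timedelta(days=120)).isoformat()
deleted = cycle_data(conn, cutoff, chunk_size=50)
print(deleted)  # → 130
```

Chunked deletion trades a little extra query overhead for much shorter lock hold times, which matters when the tables are large or fragmented as suspected here.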
This is still occurring :-(
Looking at DB node disk usage, cycle_data is still completing often enough that we're not seeing runaway growth, so this can be made a P2.
Priority: P1 → P2
This seems to have gone away now :-)
Status: ASSIGNED → RESOLVED
Last Resolved: 3 years ago
Resolution: --- → FIXED