Closed Bug 736110 Opened 14 years ago Closed 13 years ago

Add logging in django-mysql-pool to exceptions raised by connection.ping()

Categories: addons.mozilla.org Graveyard :: Code Quality (defect)
Platform: x86 macOS
Priority: Not set
Severity: normal
Tracking: (Not tracked)
Status: RESOLVED FIXED
People: (Reporter: kumar, Assigned: kumar)
Attachments: (2 files)

The pool uses connection.ping() to decide whether it can re-use a connection. In bug 734922 there is a reproducible case where Django's connection.ping() raises an exception; Django swallows it and closes the connection, which causes all subsequent queries to fail. We should override the ping and add logging to find out what that exception is. Django code to override: https://github.com/django/django/blob/16e3c6e9a6c882a0a636506d8bd605e89e8851a6/django/db/backends/mysql/base.py#L351
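A minimal sketch of what such an override could look like. This is not the actual django-mysql-pool patch; the names (is_valid_connection, FakeConnection, the local DatabaseError class) are illustrative stand-ins so the example runs without MySQLdb or Django installed. The try/except shape mirrors the linked Django _valid_connection() code, with a log call added before the exception is discarded:

```python
import logging

log = logging.getLogger("mysql_pool")

class DatabaseError(Exception):
    """Stand-in for MySQLdb's DatabaseError."""

class FakeConnection:
    """Simulates a dead MySQL connection whose ping() fails."""
    def ping(self):
        raise DatabaseError(2006, "MySQL server has gone away")

def is_valid_connection(connection):
    """Mirror Django's _valid_connection(), but log before discarding.

    Django's version catches DatabaseError silently and closes the
    connection; here we record what was raised so the failure in
    bug 734922 shows up in the logs.
    """
    if connection is not None:
        try:
            connection.ping()
            return True
        except DatabaseError as exc:
            # This is the exception Django normally eats; surface it.
            log.warning("connection.ping() failed: %r", exc)
    return False
```

With this in place, a dead connection still gets dropped from the pool as before, but the pool log now shows which exception triggered it, e.g. is_valid_connection(FakeConnection()) returns False after emitting a warning.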
Assignee: nobody → kumar.mcmillan
Target Milestone: --- → 6.4.7
Target Milestone: 6.4.7 → 6.4.8
Added some comments to bug 734922 and removed the log code. I think the log code shouldn't be logging the database name unless it's connected, which might be a vicious loop. Perhaps just logging the db name isn't that smart :) Django's connection.ping catches DatabaseError because, if the database isn't there, you get a DatabaseError, which is rather sad and generic. Ideally, somewhere down in the database adaptor there should be a more specific DatabaseNotThereError that we could catch.
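MySQLdb does not expose a DatabaseNotThereError, but one way to approximate it is to inspect the MySQL client error code carried in the exception's args. A hedged sketch (the DatabaseError class here is a stand-in, and the code set is an assumption based on the standard MySQL client error numbers for connection-level failures: 2002/2003 can't connect, 2006 server has gone away, 2013 lost connection during query):

```python
# MySQL client error codes that indicate the server is unreachable
# or the connection was lost, as opposed to e.g. a SQL syntax error.
SERVER_GONE_CODES = {2002, 2003, 2006, 2013}

class DatabaseError(Exception):
    """Stand-in for MySQLdb's DatabaseError (args: errno, message)."""

def is_server_gone(exc):
    """True when a DatabaseError looks like a lost or absent server."""
    return bool(exc.args) and exc.args[0] in SERVER_GONE_CODES
```

A pool could use a check like this to treat only connection-level errors as "regenerate the connection" while letting other DatabaseErrors propagate.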
mcdavis, could you try uploading your addon jar again? We just deployed a change that hopefully lets the database pool regenerate the connection properly.
(In reply to Kumar McMillan [:kumar] from comment #3) > mcdavis, could you try uploading your addon jar again? We just deployed a > change that hopefully lets the database pool regenerate the connection > properly. Sure. I just tried and it looks like an improvement (no longer getting server status 500), but it hung somewhere during validation. See the screenshot; it got that far and then went no farther. I let it sit there for 20 to 30 minutes, but that's as far as it got. I tried a second time a few minutes later with the same result.
Target Milestone: 6.4.8 → 6.4.7
Target Milestone: 6.4.7 → 6.4.8
(In reply to mcdavis941 (sporadically reading bugmail) from comment #4) > I just tried and it looks like an improvement (no longer getting > server status 500) but it hung somewhere during validation. FYI, I just tried again with the same result (reaches validation step then hangs during validation) but from other bugs it looks like there may be other issues with the cluster at the moment which are unrelated to this bug.
I waited this week until it looked like most of the celery issues were taken care of, to save cycles for all of us, before coming back to this. So I tried again three times this morning (or maybe yesterday; it was a big day and I can't remember now) and all of those failed with server errors reported in the upload panel, like we've been seeing all along. This evening I tried again and had one success surrounded by failures both before and after. The one success did result in the new version showing in the list of versions for the add-on. The screenshot represents this evening's attempts. (I don't have a need to upload anything at this point... just seeing if the logging shows anything illuminating.)
Target Milestone: 6.4.8 → 6.4.9
I created this bug as a reminder to fix the logging issue. That landed in https://github.com/andymckay/django-mysql-pool/commit/ce4883f68c402b7c9375fc37c360e428c0ab76b7. mcdavis, feel free to file a new bug if you run into problems.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Product: addons.mozilla.org → addons.mozilla.org Graveyard