Closed
Bug 417062
Opened 18 years ago
Closed 18 years ago
graphs-stage refusing connections.
Categories
(mozilla.org Graveyard :: Server Operations, task)
mozilla.org Graveyard
Server Operations
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: anodelman, Assigned: xb95)
Details
All the machines reporting to graphs-stage are having their data dropped. The error message is:
DEBUG: process_Request line: (1040, 'Too many connections')
DEBUG: process_Request line:
DEBUG: process_Request line:
DEBUG: process_Request line: <!-- The above is a description of an error in a Python program, formatted
DEBUG: process_Request line: for a Web browser because the 'cgitb' module was enabled. In case you
DEBUG: process_Request line: are not reading this in a Web browser, here is the original traceback:
DEBUG: process_Request line:
DEBUG: process_Request line: Traceback (most recent call last):
DEBUG: process_Request line: File "/var/www/html/graphs/bulk.cgi", line 10, in ?
DEBUG: process_Request line: from graphsdb import db
DEBUG: process_Request line: File "/var/www/html/graphs/utils/../graphsdb.py", line 5, in ?
DEBUG: process_Request line: db = MySQLdb.connect("localhost","graph","gr4ph","graphs_stage")
DEBUG: process_Request line: File "/var/www/html/graphs/utils/../databases/mysql.py", line 20, in connect
DEBUG: process_Request line: return GraphConnection(*args,**kwargs)
DEBUG: process_Request line: File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 164, in __init__
DEBUG: process_Request line: super(Connection, self).__init__(*args, **kwargs2)
DEBUG: process_Request line: OperationalError: (1040, 'Too many connections')
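MySQL error 1040 means the server has hit its `max_connections` limit, so every new `MySQLdb.connect()` call is refused until existing connections close. A client-side mitigation is to retry with a short backoff instead of dropping the data outright. This is a minimal sketch, not code from bulk.cgi; the `connect` callable is injected so the retry logic stands alone:

```python
import time

def connect_with_retry(connect, retries=3, delay=2.0):
    """Call `connect` (e.g. a lambda wrapping MySQLdb.connect) and retry
    when the server reports error 1040 ('Too many connections').
    Any other exception is re-raised immediately."""
    last = None
    for attempt in range(retries):
        try:
            return connect()
        except Exception as exc:
            # MySQLdb's OperationalError carries args like
            # (1040, 'Too many connections'); only retry on 1040.
            if getattr(exc, "args", (None,))[0] != 1040:
                raise
            last = exc
            time.sleep(delay * (attempt + 1))  # linear backoff
    raise last
```

This only papers over the symptom; the root cause (the full `/data` partition stalling queries and piling up connections) still has to be fixed server-side.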
Updated•18 years ago
OS: Mac OS X → All
Hardware: PC → All
Comment 1•18 years ago
For whoever takes this: /data (where mysql lives) is out of disk space.
Comment 2•18 years ago
/data was full. Not sure if that's the cause of the "too many connections" errors, but that's where mysql lives too.
Created a 50GB disk and am moving /data/mysql to it.
Do you expect this to keep growing? Or is there some sort of data reclamation process?
Assignee: server-ops → mrz
Comment 3•18 years ago
[16:39] <alice> i can hack a bunch of stuff out that is no longer in use, but the thing is still going to be big
[16:40] <alice> being stage, it's not *supposed* to have any information on it that we really care to keep
[16:40] <alice> so we probably need to come up with reasonable plans to trim the db on some sort of schedule
[16:41] <mrz_mac> i'll set a disk monitor on it for now
[16:42] <alice> cool. as long as we get some warning we can file other bugs to trim down the db
[16:42] <alice> i'll get this on my todo list so that it gets investigated
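The IRC discussion above suggests trimming the staging database on a schedule. A sketch of what such a pruning job might build is below; the table name, timestamp column, and retention window are hypothetical, since the real graphs_stage schema isn't shown in this bug:

```python
def prune_statement(table, ts_column, keep_days):
    """Build a DELETE trimming rows older than keep_days.
    Table/column names are placeholders, not the real schema."""
    return ("DELETE FROM %s WHERE %s < NOW() - INTERVAL %d DAY"
            % (table, ts_column, keep_days))

# Example: keep only the last 30 days of staging data.
sql = prune_statement("dataset_values", "date", 30)
```

A job like this, run from cron, would keep the staging database bounded so the disk monitor rarely fires.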
Comment 4•18 years ago
Nagios is set up and mysql is back up.
Status: NEW → RESOLVED
Closed: 18 years ago
Resolution: --- → FIXED
Reporter
Comment 5•18 years ago
Still not getting any data through, though the error message has now changed:
DEBUG: process_Request line: </table> </table> </table> </table> </table> </font> </font> </font><pre>Traceback (most recent call last):
DEBUG: process_Request line: File "/var/www/html/graphs/bulk.cgi", line 120, in ?
DEBUG: process_Request line: (type, tbox, testname, "perf", "branch="+branch, branch, date))
DEBUG: process_Request line: File "/var/www/html/graphs/utils/../databases/mysql.py", line 16, in execute
DEBUG: process_Request line: return MySQLdb.cursors.Cursor.execute(self, query, args)
DEBUG: process_Request line: File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 163, in execute
DEBUG: process_Request line: self.errorhandler(self, exc, value)
DEBUG: process_Request line: File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler
DEBUG: process_Request line: raise errorclass, errorvalue
DEBUG: process_Request line: InternalError: (145, "Table './graphs_stage/dataset_info' is marked as crashed and should be repaired")
DEBUG: process_Request line: </pre>
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Assignee
Comment 7•18 years ago
MyISAM is not very resilient against the disk filling up while it's using it. I'm repairing the tables now. Note that this can lead to corrupt data in the columns, or incomplete data (whatever was most recently inserted is the most likely trouble spot).
In a normal production environment I'd recommend restoring from backup and replaying the binlogs, but in this case it's a staging server so I don't think it's too important (and doubt we have backups anyway).
I am checking through and running repairs on the whole database. With the amount of data in question this is going to take some time... I'll update again when the process has finished.
I definitely recommend you hold off on trying to use the database until this is done. Thanks!
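The repair pass described above amounts to running CHECK TABLE and, where needed, REPAIR TABLE across every table in the database. A minimal sketch of generating that statement sequence is below; it only builds the SQL, and assumes the caller executes each statement through an existing MySQLdb cursor:

```python
def repair_plan(tables):
    """Yield a CHECK then a REPAIR statement for each MyISAM table,
    e.g. for tables flagged 'marked as crashed' after a full disk."""
    for name in tables:
        yield "CHECK TABLE `%s`" % name
        yield "REPAIR TABLE `%s`" % name
```

In practice the same thing can be done with the `mysqlcheck` command-line tool; either way, clients should stay off the database until the repair finishes, as comment 7 advises.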
Status: NEW → ASSIGNED
Assignee
Comment 8•18 years ago
All good now, tables repaired and it's accepting data again. Please let us know of any further issues!
Status: ASSIGNED → RESOLVED
Closed: 18 years ago → 18 years ago
Resolution: --- → FIXED
Updated•11 years ago
Product: mozilla.org → mozilla.org Graveyard