MySQL in recovery mode; not able to log in

Running Omeka 1.5.3 with MySQL InnoDB. No one is able to log in to the Omeka administration panel.

It appears that something bad happened to the database. In the MySQL log file, the following error message is repeated endlessly:

InnoDB: A new raw disk partition was initialized or
InnoDB: innodb_force_recovery is on: we do not allow
InnoDB: database modifications by the user. Shut down
InnoDB: mysqld and edit my.cnf so that newraw is replaced
InnoDB: with raw, and innodb_force_... is removed.
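
For reference, forced recovery is normally switched on by a single line in my.cnf; a minimal fragment is sketched below (the path and the level shown are illustrative, not taken from this server's config):

```ini
# /etc/my.cnf — illustrative fragment
[mysqld]
# Levels run from 1 to 6; remove this line entirely for normal operation.
# Levels 4 and above can permanently damage data files, so start low.
innodb_force_recovery = 1
```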

It looks like the MySQL database is in recovery mode, and when we attempt to start it without innodb_force_recovery, it fails to start.

We were able to back up and restore the databases without using forced recovery mode to start them. One thing I did note is that the mysqld process is using a lot of CPU, which is making the site really sluggish. It seems that Omeka is really hammering the DB.

Something looks wrong. I tried starting and stopping mysqld and httpd. The website takes 2-4 minutes to respond.
A MySQL client request returns:
ERROR 1040 (08004): Too many connections
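
As a diagnostic fragment (standard MySQL statements, run from a mysql client with sufficient privileges; the raised value below is illustrative, not a recommendation for this server), you can compare the configured connection limit against current usage:

```sql
-- Configured limit and the number of threads connected right now
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Threads_connected';

-- Raising the limit only masks the symptom if queries never finish,
-- but it can buy time for diagnosis (300 is just an example value):
SET GLOBAL max_connections = 300;
```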

After some investigating, I believe the issue is something Omeka is doing. After starting the MySQL service, the MySQL process list shows many SQL operations by user omeka_cshl that don't seem to end. I tested disconnecting from the network: MySQL ran without hitting high CPU usage, and there were no jobs from the omeka_cshl user. When I re-enabled the network connection, the omeka_cshl jobs started appearing again and CPU usage spiked back to 100%. Is there some indexing operation going on that will return to normal after it finishes, or are there any tweaks we can implement to mitigate the high CPU usage?
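
To see exactly which statements the omeka_cshl user is running and how long each has been executing, the process list can be inspected from a mysql client. These are standard MySQL commands; the id passed to KILL is a placeholder and would come from the Id column of the actual output:

```sql
-- Full query text for every connection, with execution time in seconds
SHOW FULL PROCESSLIST;

-- If a runaway query must be stopped, kill it by its Id
-- from the output above (123 is a placeholder):
KILL 123;
```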

Note that even after our backup and restore of the MySQL databases, we still need to run in recovery mode for the site to display existing content.

The way this stands right now our Omeka server is forced to run mysql in recovery mode. When mysql is not run in recovery mode the user omeka_cshl starts numerous mysql queries, some of which never finish and eventually crash mysql, bringing down the site and eventually the server.

If I remove network access from the server and allow mysql to run normally the omeka_cshl user does not create queries, pointing to an issue with omeka. Do you know anything about how it works or why it is spawning all the mysql queries?

Is there some type of Omeka DB verifier / repair utility that might fix some inconsistent structure within the database?

Any ideas on how we can proceed?

It would be difficult to say whether Omeka has anything specific to do with this problem or whether you just have other problems on your server; needing innodb_force_recovery is generally a bad sign, and you might have a failing disk or some other server problem. Even very slow queries shouldn't be crashing your server.

As for what Omeka's doing, I can't really say sight unseen. We don't really do "indexing"-type processes unless they're requested by the user; nothing runs automatically. It's possible that, if your site is getting tons of traffic, you could have too many or too-slow connections, but you'd have to have a pretty significant amount of traffic for that to be the case. It's also possible that the structure of the database or its indexes has some problem now, though I'm not sure what that would be. Something like missing indexes would certainly hurt performance, but I'm not sure it would hurt this badly.

You could try enabling the slow query log so MySQL records queries that are taking a long time; that could help. You could also look at the regular MySQL logs to see if they indicate some problem. You definitely don't want to be running the server with innodb_force_recovery on in general; it's only intended for temporary use to take dumps of the database.
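
A minimal sketch of that advice, assuming a typical Linux layout (the log path and the 2-second threshold are illustrative): enable the slow query log in my.cnf and restart mysqld, then review what lands in the log.

```ini
# /etc/my.cnf — illustrative fragment for the slow query log
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
# Log any statement that takes longer than this many seconds
long_query_time = 2
```

And before attempting any repair, take a full dump while the server will still start (for example with `mysqldump --all-databases > backup.sql`), since that temporary dump-and-reload is the intended use of innodb_force_recovery.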