Session destroy errors and garbage collection

Hello,

I recently noticed that the sessions table in our Omeka S installation had grown enormous, which led me to this thread. I truncated the table, but it started growing again quickly, so I think I’ll need to set up a cron job. However, I checked my system’s garbage collection settings and they suggest it should be enabled. Here are the PHP settings:

session.gc_divisor => 1000 => 1000
session.gc_maxlifetime => 1440 => 1440
session.gc_probability => 1 => 1

Then I looked at my php error logs and noticed tons of this:

PHP Warning:  session_destroy(): Session object destruction failed in /var/www/html/omeka-s/vendor/laminas/laminas-session/src/SessionManager.php on line 202

Could this be related to why the garbage collection is not doing its job? Or am I just being overrun by bots, and the garbage collection can’t keep up?

Thanks,

Joseph Anderson

I’m seeing this commit, which might be the cause of the errors I’m seeing? But I’m still wondering why the garbage collection doesn’t seem to be working.

Hmm. The commit you referenced fixes a bug responsible for some “phantom” PHP errors, but not the specific one you’re reporting here with session_destroy. This session_destroy warning happens when we return false from the destroy method in that handler: i.e., we couldn’t find, or failed to delete, the given session.

That in turn isn’t the same as the session garbage collection, which deletes many sessions at once and doesn’t call destroy.
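To make the distinction concrete, here’s a rough sketch (in Python, for illustration only) of how a DB-backed session save handler separates destroy() from gc(). The table and column names are hypothetical, not Omeka S’s actual schema:

```python
# Illustrative sketch: destroy() deletes ONE session by id and reports
# success, while gc() bulk-deletes idle sessions and never calls destroy().
# Table/column names here are hypothetical.
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, modified INTEGER)")

def destroy(session_id: str) -> bool:
    """Delete one session; a False return here is what surfaces upstream
    as the 'Session object destruction failed' warning."""
    cur = db.execute("DELETE FROM sessions WHERE id = ?", (session_id,))
    db.commit()
    return cur.rowcount > 0  # False if the id wasn't found (or wasn't deleted)

def gc(maxlifetime: int) -> int:
    """Bulk-delete sessions idle longer than maxlifetime seconds."""
    cutoff = int(time.time()) - maxlifetime
    cur = db.execute("DELETE FROM sessions WHERE modified < ?", (cutoff,))
    db.commit()
    return cur.rowcount

db.execute("INSERT INTO sessions VALUES ('abc', ?)", (int(time.time()),))
print(destroy("abc"))      # True: the row existed and was deleted
print(destroy("missing"))  # False: this is the case that logs the warning
```

So the warning tells you individual deletes are failing (or targeting ids that don’t exist), which is a separate question from whether the bulk GC sweep ever runs.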

I’m currently not able to reproduce a problem with the GC (or destroy, for that matter). The problem we typically see, as a prior thread mentioned, is that some hosts disable automatic GC in favor of a cron job, and that cron doesn’t know to look at Omeka’s sessions table. That’s different from what you’re describing here.

What are your other session INI settings? Maybe there’s something else there that’s causing this issue.

Hi John, here are those settings:

session.auto_start => Off => Off
session.cache_expire => 180 => 180
session.cache_limiter => nocache => nocache
session.cookie_domain => no value => no value
session.cookie_httponly => no value => no value
session.cookie_lifetime => 0 => 0
session.cookie_path => / => /
session.cookie_secure => 0 => 0
session.gc_divisor => 1000 => 1000
session.gc_maxlifetime => 1440 => 1440
session.gc_probability => 1 => 1
session.lazy_write => On => On
session.name => PHPSESSID => PHPSESSID
session.referer_check => no value => no value
session.save_handler => files => files
session.save_path => no value => no value
session.serialize_handler => php => php
session.sid_bits_per_character => 5 => 5
session.sid_length => 26 => 26
session.upload_progress.cleanup => On => On
session.upload_progress.enabled => On => On
session.upload_progress.freq => 1% => 1%
session.upload_progress.min_freq => 1 => 1
session.upload_progress.name => PHP_SESSION_UPLOAD_PROGRESS => PHP_SESSION_UPLOAD_PROGRESS
session.upload_progress.prefix => upload_progress_ => upload_progress_
session.use_cookies => 1 => 1
session.use_only_cookies => 1 => 1
session.use_strict_mode => 0 => 0
session.use_trans_sid => 0 => 0

The only other thing I can think of is that we’re also using the SiteSlugAsSubdomain module. Because of that, we’ve explicitly set session.cookie_domain in a .user.ini file to our main domain name, since user logins weren’t carrying over from admin to our sites. I’m not sure if that would cause any issue here.

Thanks,

Joseph

Thanks for sharing that. I don’t think anything you’ve mentioned would obviously cause a problem; the cookie_domain shouldn’t be responsible as far as I can tell. I thought you might have had a nonstandard sid_length producing odd session IDs, but those settings look pretty typical to me.

Just to rule out the randomness as the issue, if you set the gc_divisor to 1 (which makes session GC run on every request), do the old sessions get cleared out? Or not?
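For context on why that test rules out the randomness: PHP decides per request whether to run session GC, with probability gc_probability/gc_divisor, and only sessions idle longer than gc_maxlifetime are swept. A simplified sketch of that trigger logic (illustrative, not PHP’s actual source):

```python
# Simplified model of PHP's per-request session GC trigger:
# GC runs when a random draw in [1, gc_divisor] lands at or below
# gc_probability. With the defaults 1/1000, that's ~0.1% of requests.
import random

def gc_triggered(gc_probability: int, gc_divisor: int) -> bool:
    """Return True if this request should run the GC sweep."""
    return random.randint(1, gc_divisor) <= gc_probability

prob = 1 / 1000
print(f"chance per request with defaults: {prob:.1%}")  # 0.1%

# gc_divisor = 1 makes the check succeed on every request:
assert all(gc_triggered(1, 1) for _ in range(100))
```

So with gc_divisor set to 1, every request should sweep; if old rows still survive that, the sweep itself is failing rather than simply not firing often enough.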

Ok, I gave that a try, but it didn’t seem to delete any of the sessions that were in the table, so I’m not sure what’s going on. I’m in an SELinux environment, so maybe that’s causing some issue?

I was able to set up a separate nightly cron job that deletes everything but the last day’s sessions, and that seems to be doing the trick. I also blocked the AhrefsBot that was steamrolling through our site and creating all the sessions. I guess that’s good enough, but I do still wonder whether the errors I’m seeing point to some other issue.
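For anyone landing on this thread later, the cleanup query behind such a cron job can be sketched like this. It’s demonstrated here against an in-memory SQLite table; the table name (`session`) and integer `modified` timestamp column are assumptions, so verify them against your installation’s actual schema before running anything:

```python
# Sketch of a "keep only the last day's sessions" cleanup, demonstrated
# on a throwaway SQLite table. Table/column names are assumptions.
import sqlite3
import time

ONE_DAY = 24 * 60 * 60

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE session (id TEXT PRIMARY KEY, modified INTEGER)")

now = int(time.time())
db.executemany(
    "INSERT INTO session VALUES (?, ?)",
    [("fresh", now), ("stale", now - 3 * ONE_DAY)],
)

# The cron job's query: delete everything older than one day.
db.execute("DELETE FROM session WHERE modified < ?", (now - ONE_DAY,))
db.commit()

remaining = [row[0] for row in db.execute("SELECT id FROM session")]
print(remaining)  # ['fresh']
```

The same DELETE, pointed at the real database, is what the nightly cron would run.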

Thanks for your help with this!