Out-of-memory error using CSV Import module

Omeka S (v3.0.1) is running on RHEL inside an OpenShift container. I’m running into an out-of-memory error when trying to upload a CSV with image media using the CSV Import module (v2.2.0). The image media are specified as URLs pointing to Dropbox.

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 1052672 bytes)
in /opt/app-root/src/omeka-s/application/src/File/TempFile.php on line 300

First priority: getting the job to run to completion.

Is there a way to change the PHP memory limit through the Omeka S configuration? For testing, I had to modify omeka-s/modules/CSVImport/src/Job/Import.php to raise the limit programmatically, since I don’t currently have the ability to change the configuration of the underlying PHP container image. Hopefully this question becomes moot shortly, since I’m working on getting that access.
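
For reference, the temporary change to Import.php was essentially a one-liner at the top of the job’s perform() method (a rough sketch; it assumes ini_set() isn’t disabled by the container’s PHP configuration):

    // Temporary hack: raise the memory limit for this job process only.
    ini_set('memory_limit', '512M');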

Can someone confirm that background job processes are spun up using the PHP command-line executable? Programmatically boosting the Apache PHP memory limit did not propagate to the background job process. htop reported the process as having been started by /usr/bin/php, but code I added to omeka-s/application/data/scripts/perform-job.php for debugging purposes never appeared to execute.
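
For what it’s worth, config/local.config.php appears to have a 'cli' section that controls which PHP binary is used for background jobs. A sketch of what I mean (the path is illustrative):

    'cli' => [
        // Pin background jobs to a specific PHP CLI binary (and therefore
        // to that binary's php.ini / memory_limit). null = autodetect.
        'phpcli_path' => '/usr/bin/php',
    ],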

Second priority: what is the root cause?

I added logging to omeka-s/modules/CSVImport/src/Job/Import.php at line 196 to capture the current and peak memory usage after each batch of rows over the course of the job. The OpenShift instance is clearly keeping much more of the allocated data around.
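
The logging I added looks roughly like this (a sketch; it assumes the job’s logger is available as $this->logger, otherwise it can be fetched from the service locator as 'Omeka\Logger', and $rowsProcessed is a placeholder for the running row count):

    // Logged once at the start of the job.
    $this->logger->notice(sprintf('memory limit: %s', ini_get('memory_limit')));

    // Logged after each batch of rows; values are bytes.
    $this->logger->notice(sprintf(
        'rows %d alloc %d peak %d',
        $rowsProcessed,
        memory_get_usage(),
        memory_get_peak_usage()
    ));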

OpenShift job log
2021-04-28T19:20:32+00:00 NOTICE (5): memory limit: 512M
2021-04-28T19:22:42+00:00 NOTICE (5): rows 10 alloc 18501224 peak 28531760
2021-04-28T19:25:19+00:00 NOTICE (5): rows 20 alloc 28879664 peak 38112496
2021-04-28T19:28:26+00:00 NOTICE (5): rows 30 alloc 48621720 peak 57904368
2021-04-28T19:30:39+00:00 NOTICE (5): rows 40 alloc 61429352 peak 70151072
2021-04-28T19:32:29+00:00 NOTICE (5): rows 50 alloc 72336048 peak 81820904
2021-04-28T19:35:13+00:00 NOTICE (5): rows 60 alloc 88629568 peak 97957240
2021-04-28T19:38:32+00:00 NOTICE (5): rows 70 alloc 110428928 peak 119831656
2021-04-28T19:39:46+00:00 ERR (3): lsolesen\pel\PelDataWindowOffsetException:...
2021-04-28T19:43:46+00:00 NOTICE (5): rows 80 alloc 138591312 peak 148181144
2021-04-28T19:47:44+00:00 NOTICE (5): rows 90 alloc 160044384 peak 169438752
2021-04-28T19:50:18+00:00 NOTICE (5): rows 100 alloc 172813984 peak 181165696
2021-04-28T19:53:51+00:00 NOTICE (5): rows 110 alloc 191592152 peak 200983696
2021-04-28T19:54:20+00:00 NOTICE (5): rows 120 alloc 194770048 peak 204165176
Local VM job log
2021-04-28T18:37:10+00:00 NOTICE (5): memory limit: 128M
2021-04-28T18:37:33+00:00 NOTICE (5): rows 10 alloc 19601200 peak 29505280
2021-04-28T18:38:02+00:00 NOTICE (5): rows 20 alloc 19765864 peak 30351352
2021-04-28T18:38:48+00:00 NOTICE (5): rows 30 alloc 19996952 peak 30588016
2021-04-28T18:39:26+00:00 NOTICE (5): rows 40 alloc 20164000 peak 30731072
2021-04-28T18:39:56+00:00 NOTICE (5): rows 50 alloc 20317640 peak 30884352
2021-04-28T18:40:36+00:00 NOTICE (5): rows 60 alloc 20501536 peak 31088472
2021-04-28T18:41:03+00:00 ERR (3): Error downloading... (403 forbidden)
2021-04-28T18:41:57+00:00 NOTICE (5): rows 70 alloc 20718624 peak 31307312
2021-04-28T18:43:23+00:00 NOTICE (5): rows 80 alloc 20997704 peak 31551480
2021-04-28T18:44:13+00:00 NOTICE (5): rows 90 alloc 21208176 peak 31776952
2021-04-28T18:44:47+00:00 NOTICE (5): rows 100 alloc 21366320 peak 31931896
2021-04-28T18:45:56+00:00 NOTICE (5): rows 110 alloc 21576656 peak 32165000
2021-04-28T18:46:05+00:00 NOTICE (5): rows 120 alloc 21548776 peak 32165000

The combined footprint of all of the images is ~200 MB; individually they’re 1-2 MB each. The only guess I have is that the original images (or a mix of originals and thumbnails) are being kept in memory after they’ve been processed.

I haven’t been able to replicate the issue on my local development instance. There are some differences between the environments (Ubuntu vs. RHEL, PHP 7.4.x vs. 7.3.x, ImageMagick vs. GD), but nothing I would know to point to as the obvious source of the problem.

I’ve also run into an occasional error on the OpenShift instance that I haven’t encountered locally; it looks like GD isn’t being handed the full image data payload(?). It doesn’t seem likely to be part of this problem, but I’m throwing it out there just in case. Omeka S is configured to use the Laminas cURL adapter for HTTP requests.
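
The cURL adapter is enabled in config/local.config.php with something along these lines (a sketch; I believe the 'adapter' option is passed through to Laminas\Http\Client):

    'http_client' => [
        // Use the cURL adapter instead of the default socket adapter.
        'adapter' => \Laminas\Http\Client\Adapter\Curl::class,
    ],

The full error and stack trace: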

2021-04-28T19:39:46+00:00 ERR (3): lsolesen\pel\PelDataWindowOffsetException: Offset -1 not within [0, 520947] in /opt/app-root/src/omeka-s/vendor/lsolesen/pel/src/PelDataWindow.php:243
Stack trace:
#0 /opt/app-root/src/omeka-s/vendor/lsolesen/pel/src/PelDataWindow.php(312): lsolesen\pel\PelDataWindow->validateOffset(-1)
#1 /opt/app-root/src/omeka-s/vendor/lsolesen/pel/src/PelJpeg.php(249): lsolesen\pel\PelDataWindow->getByte(-1)
#2 /opt/app-root/src/omeka-s/vendor/lsolesen/pel/src/PelJpeg.php(286): lsolesen\pel\PelJpeg->load(Object(lsolesen\pel\PelDataWindow))
#3 /opt/app-root/src/omeka-s/vendor/lsolesen/pel/src/PelJpeg.php(123): lsolesen\pel\PelJpeg->loadFile('/tmp/omekaLltXK...')
#4 /opt/app-root/src/omeka-s/application/src/File/Thumbnailer/Gd.php(64): lsolesen\pel\PelJpeg->__construct('/tmp/omekaLltXK...')
#5 /opt/app-root/src/omeka-s/application/src/File/TempFile.php(263): Omeka\File\Thumbnailer\Gd->setSource(Object(Omeka\File\TempFile))
#6 /opt/app-root/src/omeka-s/application/src/File/TempFile.php(439): Omeka\File\TempFile->storeThumbnails()
#7 /opt/app-root/src/omeka-s/application/src/Media/Ingester/Url.php(72): Omeka\File\TempFile->mediaIngestFile(Object(Omeka\Entity\Media), Object(Omeka\Api\Request), Object(Omeka\Stdlib\ErrorStore), true, true, true, true)
#8 /opt/app-root/src/omeka-s/application/src/Api/Adapter/MediaAdapter.php(159): Omeka\Media\Ingester\Url->ingest(Object(Omeka\Entity\Media), Object(Omeka\Api\Request), Object(Omeka\Stdlib\ErrorStore))
#9 /opt/app-root/src/omeka-s/application/src/Api/Adapter/AbstractEntityAdapter.php(590): Omeka\Api\Adapter\MediaAdapter->hydrate(Object(Omeka\Api\Request), Object(Omeka\Entity\Media), Object(Omeka\Stdlib\ErrorStore))
#10 /opt/app-root/src/omeka-s/application/src/Api/Adapter/ItemAdapter.php(240): Omeka\Api\Adapter\AbstractEntityAdapter->hydrateEntity(Object(Omeka\Api\Request), Object(Omeka\Entity\Media), Object(Omeka\Stdlib\ErrorStore))
#11 /opt/app-root/src/omeka-s/application/src/Api/Adapter/AbstractEntityAdapter.php(590): Omeka\Api\Adapter\ItemAdapter->hydrate(Object(Omeka\Api\Request), Object(Omeka\Entity\Item), Object(Omeka\Stdlib\ErrorStore))
#12 /opt/app-root/src/omeka-s/application/src/Api/Adapter/AbstractEntityAdapter.php(318): Omeka\Api\Adapter\AbstractEntityAdapter->hydrateEntity(Object(Omeka\Api\Request), Object(Omeka\Entity\Item), Object(Omeka\Stdlib\ErrorStore))
#13 /opt/app-root/src/omeka-s/application/src/Api/Manager.php(224): Omeka\Api\Adapter\AbstractEntityAdapter->create(Object(Omeka\Api\Request))
#14 /opt/app-root/src/omeka-s/application/src/Api/Manager.php(78): Omeka\Api\Manager->execute(Object(Omeka\Api\Request))
#15 /opt/app-root/src/omeka-s/application/src/Api/Adapter/AbstractEntityAdapter.php(363): Omeka\Api\Manager->create('items', Array, Array, Array)
#16 /opt/app-root/src/omeka-s/application/src/Api/Manager.php(227): Omeka\Api\Adapter\AbstractEntityAdapter->batchCreate(Object(Omeka\Api\Request))
#17 /opt/app-root/src/omeka-s/application/src/Api/Manager.php(97): Omeka\Api\Manager->execute(Object(Omeka\Api\Request))
#18 /opt/app-root/src/omeka-s/modules/CSVImport/src/Job/Import.php(360): Omeka\Api\Manager->batchCreate('items', Array, Array, Array)
#19 /opt/app-root/src/omeka-s/modules/CSVImport/src/Job/Import.php(260): CSVImport\Job\Import->create(Array)
#20 /opt/app-root/src/omeka-s/modules/CSVImport/src/Job/Import.php(197): CSVImport\Job\Import->processBatchData(Array)
#21 /opt/app-root/src/omeka-s/application/src/Job/DispatchStrategy/Synchronous.php(34): CSVImport\Job\Import->perform()
#22 /opt/app-root/src/omeka-s/application/src/Job/Dispatcher.php(105): Omeka\Job\DispatchStrategy\Synchronous->send(Object(Omeka\Entity\Job))
#23 /opt/app-root/src/omeka-s/application/data/scripts/perform-job.php(44): Omeka\Job\Dispatcher->send(Object(Omeka\Entity\Job), Object(Omeka\Job\DispatchStrategy\Synchronous))
#24 {main}

So:

  1. The jobs do get run using the PHP CLI by default, so it’s the CLI’s configured memory limit that’s relevant.

  2. I suspect that GD vs. ImageMagick is actually the relevant difference here. In some configurations, the GD module counts the memory used by loaded files against the PHP memory limit. This can be particularly bad because it’s the uncompressed size that gets counted, which can be quite a bit larger than the on-disk size in some cases (see the back-of-the-envelope sketch after this list). With the default ImageMagick configuration, none of that memory usage counts against the limit, since the processing happens in a separate process.

    You don’t specify which environment uses which, but if the problematic one is using GD, that would be my guess.

  3. The GD thumbnailer is written to clear both the loaded originals and the generated thumbnails out of memory when it’s done with them, or when errors occur along the way. It’s possible there’s some issue where that isn’t happening as it should, but I’m not aware of one. Your test output definitely seems to indicate some kind of memory leak, though.

    If the leak is somewhere in our GD code, my immediate suspicion would be the code that deals with the ICC profile, which was added later. There’s also the possibility that the leak is within GD itself, though that’s probably less likely.
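
To put rough numbers on the uncompressed-size point in item 2: GD decodes the whole image to a raw bitmap, so the in-memory cost scales with pixel dimensions rather than file size. A back-of-the-envelope sketch (the file name and the ~5 bytes/pixel factor are illustrative):

    // Estimate GD's in-memory footprint for one decoded JPEG:
    // roughly width x height x ~5 bytes per pixel (truecolor plus overhead).
    list($width, $height) = getimagesize('example.jpg'); // e.g. 4000 x 3000
    $bytes = $width * $height * 5;
    printf("~%.0f MB once decoded\n", $bytes / (1024 * 1024));
    // A 1-2 MB JPEG at 4000 x 3000 pixels works out to roughly 57 MB here.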

You can read the comparisons in my earlier post as local VM vs. OpenShift (the first item in each pair is the local VM).

  1. Thanks for confirming. I had perform-job.php writing info to a text file, but nothing was ever created. In any case, the container image appears to have Apache and the CLI pull from the same php.ini, so I’ll boost the memory limit there in the interim; the total amount of data we’re working with right now is not egregious.

  2. The OpenShift instance is the one configured to use GD, and it is the one exhibiting the issue. I was told in passing that there was a reason the container image was configured to use GD instead of ImageMagick, but I wasn’t told the details. I’ll investigate on that front.

  3. I’ll attempt switching the local development instance to GD to see if I can replicate the error (sketch of the config change below). If I can, and have time to investigate, it will at least give me an instance I can debug more easily.
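
If it helps anyone following along, I believe the switch is just a service alias in config/local.config.php, along these lines:

    'service_manager' => [
        'aliases' => [
            // Use the GD thumbnailer instead of ImageMagick.
            'Omeka\File\Thumbnailer' => 'Omeka\File\Thumbnailer\Gd',
        ],
    ],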

Thanks for the leads.
