Omeka-S use case

A question about the technical use case for Omeka-S: I'm trying to refresh my understanding of when to use and recommend Omeka-S going forward.

Stated use cases on the homepage:
Omeka: For individual projects and educators.
Omeka-S: For institutions managing a sharable resource pool across multiple sites.

From that framework, any individual project should choose Omeka, which is fairly obvious. But for our individual project (the Edison Papers), some of the newer Omeka-S features were appealing:

… Omeka S is a next-generation web publishing platform for institutions interested in connecting digital cultural heritage collections with other resources online.

… Connect to the Semantic Web: Publish items with linked open data.

Specifically, IIIF and linked data. While IIIF has the same level of support in both Classic and Semantic, the metadata profiling and collection management are clearly better in Omeka-S, and the outputs would clearly integrate better into an aggregator site/service.

However, we’ve continued to encounter the problem I’ve posted about (repeatedly, sorry) in the forums: 150k items and slow query execution. While Googling recently, I noticed that someone (back in 2017?) had described Omeka-S as “a platform for a network of exhibits”, a framing I hadn’t seen or heard put quite that way before.

Based on my experience with the application code and how “universal” everything is, would it be safe at this point to say that the primary use case for Omeka-S is not browsing/searching the full collection of items (i.e., the shareable resource pool), and that the web interface, queries, etc. are optimized for subsets (i.e., exhibits) of that full pool?

The alternative is that I am continuing to experience local config/system issues that make it seem that way!

Aside from the developers, I would love to hear from anyone else using Omeka-S for an individual project with a large (100k+) item pool.

Hello @benbakelaar,

I support Omeka Classic and am working on new projects with Omeka-S for the University Libraries Special Collections at Bowling Green State University. One of the Omeka-S projects that is nearing completion involves 460,000 records.

In my opinion, Omeka Classic is still the best fit for objects from the Special Collections that have been digitized. As the metadata used in an Omeka Classic Item is primarily Dublin Core, it is a good fit for describing individual physical objects and displaying the digitized representations. Other types of data are a little more awkward to fit into Omeka Classic.

Omeka-S has been beneficial when there are relationships between individual items. One example has been concert programs. With Omeka Classic, you are essentially limited to describing the program as a physical object and displaying a scan of it. With Omeka-S, you can establish links from or to the programs, such as an item for each performer; then you would be able to view a performer and see which concerts they were featured in. Similarly, you could create items for works, composers, venues, instruments, etc. that can all be linked together.
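As a rough sketch of what that linking looks like at the API level (the URL, API keys, property IDs, and item ID below are all placeholders; real property IDs can be looked up via /api/properties on your installation):

```python
import requests

BASE = "https://example.org/omeka-s"  # hypothetical installation URL
AUTH = {"key_identity": "IDENTITY", "key_credential": "CREDENTIAL"}  # API key pair

payload = {
    "dcterms:title": [
        {"type": "literal", "property_id": 1, "@value": "Concert program, 1923"}
    ],
    # A "resource" value is the link itself: it points at another
    # item (e.g. a performer) by that item's internal ID.
    "dcterms:relation": [
        {"type": "resource", "property_id": 13, "value_resource_id": 456}
    ],
}

resp = requests.post(f"{BASE}/api/items", params=AUTH, json=payload)
resp.raise_for_status()
print(resp.json()["@id"])  # URL of the newly created item
```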

In terms of connecting to other resources online, the primary benefit of Omeka-S is that URLs are first-class values, so it becomes easier to specify relationships with other databases. Omeka-S is based around RDF and provides JSON-LD representations, so there is the possibility of integration with other tools, though so far I haven’t come across a particularly compelling advantage to those aspects.
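For example, reading an item back from the REST API returns JSON-LD in which both the item and anything it links to are identified by URL (a minimal sketch; the installation URL and item ID are made up):

```python
import requests

# Fetch one item's JSON-LD representation from the Omeka S REST API.
item = requests.get("https://example.org/omeka-s/api/items/123").json()

print(item["@id"])  # the item's own URL, usable as a linked data identifier
for value in item.get("dcterms:relation", []):
    # Linked resources appear as values carrying the target's URL and ID.
    print(value.get("@id"), value.get("value_resource_id"))
```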

In terms of aggregation, we haven’t had much of an issue using the OAI-PMH Repository plugin for Omeka Classic with our DPLA hub and a few other harvesters. Again, this is primarily because the sites all basically work on Dublin Core metadata.
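For anyone unfamiliar with the mechanics, the harvester’s side of that exchange is just standard OAI-PMH verbs over HTTP (a sketch; the endpoint URL is a placeholder):

```python
import requests

# What a DPLA-style harvester sends to an OAI-PMH endpoint. The verb and
# metadataPrefix parameters are standard OAI-PMH; the URL is illustrative.
resp = requests.get(
    "https://example.org/oai-pmh",
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
)
resp.raise_for_status()
print(resp.text[:500])  # Dublin Core records wrapped in OAI-PMH XML
```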

Our Omeka-S site with 460,000 records has metadata that largely does not map to Dublin Core, and we ended up essentially creating our own ontology/vocabulary for it. We did experience performance issues, particularly with parts of the admin interface that attempt to count values for each result row in some of the lists. Browsing and searching were also a bit slow in the public interface, and the current built-in search is not very flexible for full text.

We eventually resolved the performance issues for public users by leveraging the forked Search and Solr Modules from Daniel-KM. The 2.0 release of Omeka-S is also supposed to have improved searching, so you may want to wait for that release before deciding how to proceed.

While the original intention of Omeka-S was to have several sites all operated from the same installation, we have found that too unwieldy and run a separate installation for each site. I think ultimately we will use Omeka Classic for projects where Dublin Core suffices, and Omeka-S for larger projects that have different metadata demands.

The main search improvement you’ll see in the upcoming release of S is “fulltext” indexing, which replaces the current default “contains” search; that search is quite inefficient because it can’t leverage indexes at all.
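A rough illustration of the difference, assuming the MySQL backend Omeka S runs on (connection details and the table/column names here are simplified placeholders, not the exact schema):

```python
import pymysql

conn = pymysql.connect(host="localhost", user="omeka",
                       password="secret", database="omeka")
with conn.cursor() as cur:
    # "Contains" search: the leading % wildcard defeats any B-tree index,
    # so MySQL has to scan every row of the values table.
    cur.execute("SELECT resource_id FROM value WHERE value LIKE %s",
                ("%edison%",))

    # Fulltext search: MATCH ... AGAINST is answered from a FULLTEXT
    # index instead of a full table scan.
    cur.execute(
        "SELECT id FROM fulltext_search WHERE MATCH(title, text) AGAINST(%s)",
        ("edison",),
    )
```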

There are some other changes as well, but nothing that would have a particularly significant effect on performance.

Awesome! That’s great to hear; it will be a welcome addition.

@kloor thanks so much for this insight! All of the DH projects I’ve been working on have metadata demands that go beyond Dublin Core (even the 55-term extended DCMI set), so it’s good to hear that is one of your suggested use cases for Omeka-S.

Also good to hear you experienced the same performance issues. Of course, they can be overcome with Solr. Unfortunately, one project I am working on can’t afford to run Solr internally or pay for it externally, and the other would prefer AWS ElasticSearch to Solr, which doesn’t have an Omeka-S search plugin/integration. And we don’t have the technical expertise to write our own plugins, so we are a bit stuck in the middle 🙂 As with all open-source projects, not every need can be addressed in core or even with community-contributed plugins.