Accessing big images through IIIF manifests is extremely slow

With IIIF Server and Image Server, opening images through manifests in an external Mirador Viewer page is extremely slow, especially with bigger images.
The same images opened in the default OpenSeadragon viewer load very quickly, and zooming is extremely fluid.
These images are already tiled, but it seems that every time I access them through a IIIF manifest, image slices are dynamically created via the vips converter.
I’m wondering if I misconfigured the Image Server module, and whether it’s true that it should be able to use the image tiles already there instead of dynamically creating image slices for every request.
Please see this item as an example:
and click on the “Confronta immagini in Mirador” button to open the image in Mirador. You should see the huge difference in how the tiled image is accessed.
I hope someone can help me or suggest what to configure or what to check.

There is an important update of the two modules (IIIF Server 3.6.12 and Image Server 3.6.13). Check that autotiling is enabled and tile all missing media via the bulk job in the Image Server config. Try the “deepzoom” mode (the default one).

Thanks @Daniel_KM, I did that, but loading via IIIF still seems slow, even if better than before. I have a question about the autotiling option: to activate it, as you suggest, do I have to check that option or uncheck it? The description in the Image Server config form is not so clear to me:

Tiling service
Tile images manually and not automatically on save
If unset, to run the task below will be required to create tiles. It is recommended to set automatic tiling once all existing items are tiled to avoid to overload the server. So bulk tile all items first below.


I can confirm that, after a massive tiling of images, there is no on-the-fly conversion anymore, but access to images through the IIIF manifest is still very slow, and with big images the external Mirador viewer often times out on a bunch of tiles.
Looking at the browser console, I can see very long load times for image tiles and many timeouts.
Any experience with such a problem? Any suggestions? Maybe Image Server + IIIF Server are not able to handle very high resolution images?

I tried to investigate further, and it looks to me like the IIIF Server is responsible for the slowdown, though I cannot be sure, as I’m not completely aware of how IIIF Server and Image Server work together.
Let me explain.
I made a test accessing this item: where the image media is opened directly with OpenSeadragon. Then I opened the same media in another browser tab, using Mirador through the “Confronta immagini in Mirador” button.
With the two tabs active, I opened the browser’s developer tools (F12) and recorded the same zoom in both interfaces.
What I saw is that in Mirador the number of calls is exactly twice that in OpenSeadragon.
In fact, while the requests in the default OpenSeadragon viewer go directly to the image tiles, using the IIIF manifest in Mirador produces two calls per tile: the first for the IIIF default.jpg resource of the specific tile, and the second following the redirect to the real image.
This is an example of a tile call:

OpenSeadragon (not using IIIF)
13:10:52.862 GET
[HTTP/1.1 200 OK 102ms]

Mirador (using IIIF)
[HTTP/1.1 302 Found 961ms]

[HTTP/1.1 200 OK 65ms]

As one can see, when the IIIF manifest is involved, the call for default.jpg takes about ten times as long as the call for the real image, and this seems to be the reason for the slowdowns and timeouts I see.

Is this behavior expected? Is there any trick to make that process faster?
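For context on what that redirect is doing: the server has to map a standard IIIF region request onto a tile of the pre-generated DeepZoom pyramid. A minimal sketch of that mapping, assuming 256-pixel DeepZoom tiles; the function name and the 24000×24000 example dimensions are illustrative, not the module’s actual code:

```python
import math

def deepzoom_tile_for_region(full_w, full_h, x, y, w, scaled_w, tile_size=256):
    """Map a IIIF region request (x, y, region width w, requested output
    width scaled_w) to a DeepZoom tile address (level, column, row)."""
    # The top DeepZoom level shows the image at full resolution.
    max_level = math.ceil(math.log2(max(full_w, full_h)))
    # The requested scale factor tells us how many levels down we are.
    scale = scaled_w / w                       # e.g. 256 / 2048 = 1/8
    level = max_level + round(math.log2(scale))
    # Tile indices within the grid at that level.
    col = int(x * scale) // tile_size
    row = int(y * scale) // tile_size
    return level, col, row

# The example request /iiif/2/14377/4096,16384,2048,2048/256,/0/default.jpg,
# against a hypothetical 24000x24000 source image:
print(deepzoom_tile_for_region(24000, 24000, 4096, 16384, 2048, 256))
```

On this hypothetical image, the request resolves to level 12, column 2, row 8, i.e. a file such as `…_files/12/2_8.jpg` in a standard DeepZoom layout, which is the kind of path the 302 redirect points at.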

Another strange thing I noticed is that Mirador, using the IIIF manifest provided by IIIF Server in Omeka S, seems to request nonexistent tiles at the border of the image, tiles that are not requested when the same image is opened in the default OpenSeadragon viewer.
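On the border-tile point: the viewer computes the tile grid from the image dimensions it is given, so extra requests at the edges usually suggest the manifest and the stored pyramid disagree about the size. A quick sketch of the expected grid for one resolution level (illustrative, not the viewer’s actual code):

```python
import math

def tile_grid(width, height, tile_size=256):
    """Return (columns, rows) and the clamped pixel size of the
    bottom-right (border) tile for an image at a given level."""
    cols = math.ceil(width / tile_size)
    rows = math.ceil(height / tile_size)
    last_w = width - (cols - 1) * tile_size   # border tiles are narrower
    last_h = height - (rows - 1) * tile_size  # and shorter than tile_size
    return cols, rows, (last_w, last_h)

# A 5000x3000 level with 256px tiles:
print(tile_grid(5000, 3000))  # 20 columns, 12 rows, border tile 136x184
```

Any request for a column ≥ cols or a row ≥ rows is a nonexistent tile, which is exactly what a dimensions mismatch between the manifest’s image info and the stored pyramid would produce.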

I hope someone can take a look at this, as this behavior seems to limit the real-world usage of Omeka S with IIIF, at least in my case.

I’ve just checked other IIIF servers, and I don’t see the same behavior when requesting resources like /iiif/2/14377/4096,16384,2048,2048/256,/0/default.jpg, i.e. I don’t see the systematic redirection I see in Omeka.
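For reference, an Image API URL such as the one above decomposes into the standard `{identifier}/{region}/{size}/{rotation}/{quality}.{format}` segments defined by the IIIF Image API. A small parser makes the pieces explicit (a sketch following the standard path layout, not any server’s actual code):

```python
def parse_iiif_image_url(path):
    """Split a IIIF Image API path into its standard components."""
    # .../{prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    parts = path.rstrip("/").split("/")
    quality, fmt = parts[-1].rsplit(".", 1)
    return {
        "identifier": parts[-5],
        "region": parts[-4],   # "x,y,w,h", "full" or "square"
        "size": parts[-3],     # e.g. "256," = width 256, height scaled
        "rotation": parts[-2],
        "quality": quality,
        "format": fmt,
    }

print(parse_iiif_image_url(
    "/iiif/2/14377/4096,16384,2048,2048/256,/0/default.jpg"))
```

For the URL above this yields identifier `14377`, region `4096,16384,2048,2048`, size `256,`, rotation `0`, quality `default`, format `jpg`: a 2048×2048 region at (4096, 16384), scaled to 256 pixels wide.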

Can anyone help me understand whether this is the expected behavior of IIIF Server / Image Server when providing IIIF manifests and image tiles?

Can anyone confirm whether this behavior is expected or not? Help would be much appreciated, thanks!

Not sure if this is related or not, but I’m having trouble with slow image processing (Omeka S with an AWS S3 bucket on Reclaim) and believe it has to do with vips. The IIIF image server is trying to use vips, but it can’t be installed because it isn’t in the CentOS 7 repos. I’m thinking about using DerivativeImages for caching images instead. Any help is much appreciated.

IIIF Server and Image Server are working fine: I use them in many libraries, with or without Cantaloupe, with shared hosts or big servers, with small or big images, with small or big audiences. But I never checked Amazon.

Some recommendations to avoid issues:

For image server:

  • Tiles are like a cache for images, so tiling is recommended (and now enabled by default), in particular for small servers, except if your images are small (less than 1 to 10 MB), or if you don’t care about people with expensive or slow network connections or low-powered computers. Of course, run the job to create all tiles for all existing images.
  • If the files are not on the same server as Omeka, in particular when a network disk is mounted, try to install an image server on it (generally Cantaloupe or IIPImage), because otherwise the module has to fetch the whole file before extracting the requested tile (even if vips is smarter in that case). Of course, if the image is tiled, this is generally not an issue, but note that users can ask for special sizes that are not tiled.
  • Check if your server has a GPU, in which case dynamic tiling is a lot quicker. And if the CPU converts the IIIF URL into the file path faster than the GPU creates the tile, you may prefer pre-generated tiles.
  • Try to use DeepZoom instead of Zoomify; the latter has an issue retrieving some tiles.
  • Try to use vips, a much quicker replacement for ImageMagick and GD. It is less known than them, but used in more and more places, in particular Wikipedia. There was a pull request for it (Integrate vips, a quick thumbnailer that manages smart gravity for square thumbnails by Daniel-KM · Pull Request #1683 · omeka/omeka-s · GitHub).
  • Of course, if vips is not installed, set the right processor in the Image Server config.
  • Check that all images have metadata (the x and y dimensions for the main files and the derivatives): there is a job for it; otherwise the module has to determine them each time.
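To gauge the cost of the bulk tiling job, the size of a DeepZoom pyramid can be estimated from the image dimensions. A rough sketch, assuming square 256-pixel tiles and no overlap (actual counts depend on the tiler’s settings):

```python
import math

def deepzoom_tile_count(width, height, tile_size=256):
    """Estimate the total number of tiles in a DeepZoom pyramid,
    which halves the dimensions at each level down to 1x1."""
    total = 0
    w, h = width, height
    while True:
        total += math.ceil(w / tile_size) * math.ceil(h / tile_size)
        if w <= 1 and h <= 1:
            break
        w = max(1, math.ceil(w / 2))
        h = max(1, math.ceil(h / 2))
    return total

# A 10000x10000 image produces on the order of two thousand tiles:
print(deepzoom_tile_count(10000, 10000))
```

Multiplying the count by an average tile file size gives a rough disk budget per image before running the bulk job on a whole collection.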

For iiif server:

  • Try to use IIIF version 2 (for presentation) and IIIF v2 or v3 (for image), which are better handled by viewers. Recent versions of IIIF Server can also cache manifests via the Derivative Media module. This point is less of an issue with the latest and future improvements.

OpenSeadragon is the engine used by Mirador, Universal Viewer, Diva and a lot of other viewers. In practice, OpenSeadragon checks if there is a tile source file and uses it directly, without going through the IIIF presentation layer, so it can load images directly (Apache/Nginx serve the tiles without PHP processing).

The IIIF viewers don’t hack the IIIF manifests, so they use standard URLs, which may redirect to the real tiles. I always try to implement the standards strictly, and this is the standard: Image API 3.0 — IIIF | International Image Interoperability Framework. The fact that the URLs are standard is one of the main points of IIIF: they are shareable, so libraries are not locked into a proprietary format the way users are locked into a proprietary office suite or operating system. Nevertheless, there may be an issue: normally, the same image should not be called twice. The time to resolve the real path to the tiles may be improved.


Thanks so much for the long list of suggestions @Daniel_KM. I’ll try to double-check everything again. It’s good news, anyway, that my problems are only “mine” and not related to Omeka itself.
I will try to post feedback as soon as possible.