Building a Machine Generated Annotations Pipeline

Joshua Gomez - UCLA (USA), Kristian Allen - UCLA (USA), Mark Matney - UCLA (USA), Sharon Shafer - UCLA (USA), Tinuola Awopetu - UCLA (USA)

Presentation type: Presentation


With the abundance of image material available in archives and digital libraries, accurate and useful metadata has proven critical for search and browse operations. At the scale at which UCLA is ingesting images into our IIIF environment, traditional image-level metadata creation by catalogers is time-consuming. UCLA is building a proof-of-concept automated metadata tagging pipeline using off-the-shelf machine learning models, with the future possibility of plugging in domain-specific models.

This talk will outline UCLA’s experience with integrating IIIF assets into an automated pipeline workflow, using a message queue, cloud ML/AI image services, an annotation storage server, and a search engine. We will also present an analysis of the validity and usefulness of several cloud computer vision models for tagging and for improving the relevance of search results.

For this experiment, we have opted for a toolset that is simple to get up and running quickly, with as little customization as possible. We use open source systems for the primary components, including Cantaloupe as the IIIF image server, Elucidate for annotation storage, and Blacklight & Solr for the search engine and user interface. For image tagging, we use the free cloud services available from Amazon, Google, Microsoft, and Clarifai. To orchestrate the pipeline, we use the Python package Celery with RabbitMQ.
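To make the orchestration concrete, the stages might be sketched as follows. In the production pipeline each function would be a Celery task dispatched through RabbitMQ; here they are plain functions so the flow is easy to follow without a running broker. All names, URIs, and the stub label are illustrative assumptions, not the project's actual code.

```python
def iiif_image_url(base_uri: str) -> str:
    """Build a IIIF Image API (v2) request for the full image at full size.

    URI template: {base}/{region}/{size}/{rotation}/{quality}.{format}
    """
    return f"{base_uri}/full/full/0/default.jpg"


def tag_image(image_url: str) -> list[str]:
    """Stand-in for a call to a cloud vision service.

    A real implementation would submit the image to Amazon Rekognition,
    Google Vision, Azure Computer Vision, or Clarifai and parse the
    returned labels; this stub returns a placeholder tag.
    """
    return ["example-label"]


def run_pipeline(base_uri: str) -> dict:
    """Harvest -> tag -> collect results, ready for annotation storage."""
    url = iiif_image_url(base_uri)
    return {"image": url, "tags": tag_image(url)}


result = run_pipeline("https://iiif.example.org/iiif/2/sample-id")
print(result["image"])
```

With Celery, each stage would instead be decorated with `@app.task` and linked into a chain, letting RabbitMQ distribute tagging calls across workers.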

The pipeline workflow consists of harvesting images via IIIF protocols, running the images through multiple image tagging services, storing the resulting tags using the Web Annotations data model and protocols, and then using those annotations to augment the item metadata in the search engine. We will then analyze the change in search results to determine which, if any, of the tagging services improve relevance ranking and, if so, by how much.
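The tag-storage step maps naturally onto the W3C Web Annotation data model: each machine-generated tag becomes an annotation with a `tagging` motivation whose target is the IIIF resource. A minimal sketch, assuming hypothetical canvas and service names:

```python
import json


def make_tag_annotation(canvas_uri: str, tag: str, generator: str) -> dict:
    """Build a W3C Web Annotation attaching a machine-generated tag to a
    IIIF canvas. The generator field records which vision service
    produced the tag; its value here is illustrative."""
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "tagging",
        "body": {
            "type": "TextualBody",
            "purpose": "tagging",
            "value": tag,
        },
        "generator": {"type": "Software", "name": generator},
        "target": canvas_uri,
    }


anno = make_tag_annotation(
    "https://iiif.example.org/manifest/canvas/1",  # hypothetical canvas URI
    "dog",
    "example-vision-service",
)
print(json.dumps(anno, indent=2))
```

In the pipeline, a document like this would be POSTed to an annotation container on the Elucidate server via the Web Annotation Protocol, and its body values later copied into the corresponding Solr document.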

The goals of the experiment are to expose our team to the Web Annotations protocol and to learn what would be required to integrate IIIF and Web Annotations in an automated production pipeline. We have mixed expectations regarding the usefulness of the commercial image tagging services. The metadata produced by such services may be too generic for scholars using a digital library, but may be helpful for users from the general public. The experiment will help us identify weaknesses in the commercial services and surface opportunities for training our own models.


Topics:

  • Annotation, including full-text or academic use cases
  • Using IIIF material for Machine Learning and AI
  • IIIF Implementation Spectrum: large-scale or small-scale projects


Keywords:

  • Machine Learning
  • Artificial Intelligence
  • Web Annotations
  • automation
  • image processing
  • image pipeline