2022 IIIF Annual Conference Schedule
The outline schedules of the showcase and conference are available below, with specific conference presentations and abstracts listed further down. The exact timing of each day’s schedule is still subject to change.
We are offering sponsorship opportunities for the 2022 IIIF conference; the benefits and costs can be seen on the sponsorship page. If you are interested in becoming a sponsor, please contact email@example.com.
The IIIF Annual Conference is generously supported by the following Conference Sponsors:
The IIIF showcase is free and open to the public. Attend this event to learn more about IIIF, including an overview of what it does, use cases, how you can implement IIIF at your institution, and how you can contribute to the community.
See the showcase page for the detailed schedule of speakers.
- 12pm-1pm: Check-in opens for showcase and conference registrants
- 1pm-5pm: Showcase (including coffee break)
IIIF Consortium reception
- 6-8pm: Consortium reception (open to all attendees affiliated with a IIIF Consortium member institution, to be held in an outdoor, covered location)
Conference day 1
The IIIF conference is open to advance registrants.
|Time|Session|Presenters|
|---|---|---|
|8:00 - 9:00|Check-in||
|9:00 - 9:30|Welcome and opening remarks|Josh Hadro|
|9:30 - 10:00|State of the IIIF Universe|Josh Hadro|
|10:00 - 10:30|IIIF 3D - Developing New Dimensions & Technical Specification|Ronald S. Haynes|
|10:30 - 11:00|Break with Tea and Coffee||
|11:00 - 11:30|An update on building an open-source IIIF stack and integrating Mirador 3 at Harvard|Chip Goines, Katie Amaral, Doug Simon, Phil Plencner|
|11:30 - 12:00|A New Manifest Editor (and more)|Tom Crane, Stephen Fraser|
|12:00 - 1:30|Lunch break (on your own)||
|1:30 - 2:00|Using IIIF to teach Digital Humanities: advancing digital literacy in higher education|Davy Verbeke, Lise Foket, Eef Rombaut, Frederic Lamsens, Christophe Verbruggen|
|2:00 - 2:30|From text to image: linking TEI-XML and IIIF via expert sourced annotations and IIIF Change Discovery|Matthew McGrattan|
|2:30 - 3:00|USE ME: progressive integration of IIIF with new software services at the Getty|Stefano Cossu, David Newbury|
|3:00 - 3:30|Exhibit: easy-to-use tools for promoting engagement and learning with your online digital collections|Edward Silverton|
|3:30 - 4:00|Break with Tea and Coffee||
|4:00 - 4:30|Using IIIF Images in Visual Essays|Ron Snyder|
|4:30 - 5:00|From Manuscript to Transcription and Back Again: Closing the Virtuous Circle with Houghton MS Lat. 5|Laura Morreale, Sean Gilsdorf|
|5:00 - 5:30|Creating a Scholar’s Manifest: incorporating user-contributed annotations and metadata|Debra Cashion, Ben Bakelaar|
|6:00 - 8:00|General conference reception, open to all registered participants (to be held outdoors on the Harvard quad, under a tent in case of rain)||
Conference day 2
The IIIF conference is open to advance registrants.
Welcome and opening remarks (recording on YouTube)
Josh Hadro, Managing Director of the IIIF Consortium, welcomes attendees to the event in partnership with Harvard University and MIT.
State of the IIIF Universe (recording on YouTube)
This session will provide an update on the status of the many IIIF Community Groups currently meeting to discuss issues relating to their field of practice.
IIIF 3D - Developing New Dimensions & Technical Specification (recording on YouTube)
With growing global efforts to further establish IIIF use for 2D images and audio/video (A/V) data, there is increasing commitment to develop similar standards for 3D content. The IIIF 3D Community Group is collaboratively considering common challenges and potential solutions with major 3D developers and researchers, and has formed the IIIF 3D Technical Specification Group, engaging even more widely with specialists and representatives across user communities and standards bodies. The complementary work of the CG and TSG considers ways not only to suitably extend IIIF into the third (physical) dimension, but also to explore digital opportunities to address the concerns of decolonisation and digital/object repatriation. In addition, the group will consider creative options to support the incorporation of 2D and A/V content with 3D data, enabling combinations that form digital dioramas, scene and soundscape reconstructions, and the potential to help build an inclusive metaverse. Please do join us!
An update on building an open-source IIIF stack and integrating Mirador 3 at Harvard (recording on YouTube)
In 2020, Harvard University embarked on building an open source, institutional-level IIIF stack for its library and other departments. This was a substantial undertaking, as Harvard's current main IIIF endpoint, iiif.lib.harvard.edu, hosts more than 60 million digitized images and approximately 250,000 page-turned objects. In this talk, we will give an update on its progress and its impending release, as well as discuss replacing Mirador 2 with Mirador 3 as the default viewer for all IIIF content hosted by the university.
A New Manifest Editor (and more) (recording on YouTube)
Why a new Manifest Editor?
As the use of IIIF grows, so does the need for IIIF content creation tools.
Most IIIF manifests are expected to produce consistent results in standard IIIF viewers.
But increasingly, manifests are also being created for very specific user interfaces. An early example is Wellcome Collection’s Sleep Stories, from 2016. In 2017, Digirati worked on IIIF Manifest-driven narratives for the V&A, and in 2018 for Technical University, Delft.
This work produced a IIIF Manifest Editor that, in its normal, default mode, produces IIIF Presentation 3 Manifests, but can be extended with plugins to produce IIIF Manifests with particular structures and custom behavior properties, to drive bespoke viewing experiences - slideshows, guided viewing and the complex digital exhibition layouts seen in the Delft examples.
Now, in partnership with the UK Towards a National Collection project and Delft University of Technology Library, we are building a new Manifest Editor framework that can accommodate many of the use cases we have seen emerging over the last six years.
This aims to provide a general purpose tool that is:
- Suitable for creating general purpose manifests
- Great for learning IIIF concepts
- Configurable to create manifests for specific target environments
- Easily integrated into diverse production workflows
In this presentation, we will explore the features of the Manifest Editor and look at how it can be adapted for specialist use cases.
Read more at https://github.com/digirati-co-uk/iiif-manifest-editor
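To give a concrete sense of what an editor like this produces, here is a minimal IIIF Presentation 3 Manifest sketched as a Python dict: one Canvas carrying one painting Annotation. All identifiers are hypothetical placeholders; this is not output from the Digirati editor itself.

```python
def minimal_manifest(manifest_id: str, image_id: str, width: int, height: int) -> dict:
    """Build the smallest useful Presentation 3 Manifest: one Canvas,
    one painting Annotation targeting the full Canvas."""
    canvas_id = f"{manifest_id}/canvas/1"
    return {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": manifest_id,
        "type": "Manifest",
        "label": {"en": ["Example manifest"]},
        "items": [
            {
                "id": canvas_id,
                "type": "Canvas",
                "width": width,
                "height": height,
                "items": [
                    {
                        "id": f"{canvas_id}/page/1",
                        "type": "AnnotationPage",
                        "items": [
                            {
                                "id": f"{canvas_id}/page/1/anno/1",
                                "type": "Annotation",
                                "motivation": "painting",
                                "target": canvas_id,
                                "body": {
                                    "id": image_id,
                                    "type": "Image",
                                    "format": "image/jpeg",
                                    "width": width,
                                    "height": height,
                                },
                            }
                        ],
                    }
                ],
            }
        ],
    }
```

The `behavior` and `structures` properties mentioned above would be added at the Manifest level to drive slideshows, guided viewing, and similar bespoke experiences.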
Using IIIF to teach Digital Humanities: advancing digital literacy in higher education (recording on YouTube)
The benefits of IIIF for Digital Humanities (DH) scholarship are well known: access to images, metadata and annotation possibilities, and ample visual display options. However, whilst the cultural heritage sector and DH research centers have broadly adopted IIIF, its integration into teaching practices has remained rather limited. At the same time, DH scholars are calling for more extensive adoption of their field within Humanities scholarship. Teaching DH requires a hybrid approach covering various digital tools to assist research, educational, and science communication purposes, which is often accompanied by underscoring the usefulness, openness, diversity, and collaboration possibilities of the tools in question (Batterhill & Ross, 2017). Rather than learn a fixed set of computational hard skills, students need to contextualize digital technologies, adopt a tool-critical attitude, and apply them to a domain-specific context (Armaselu, 2021; Kuhn, 2019).
IIIF is therefore a common denominator in Ghent Centre For Digital Humanities’ educational innovation project to structurally embed DH and sustainably enhance digital competences within the Faculty of Arts and Philosophy, including teacher education. Not only do IIIF’s flexibility and compliance with various DH applications greatly assist the envisioned hybrid approach to teaching, but the open image standard also ‘stitches together technology’ for DH overall (Emanuel, 2018). This presentation, which is based on various classroom case studies, focus group discussions, and questionnaires, explores how IIIF helped foster a shared faculty-wide framework for basic and advanced digital competences of students, (prospective) teachers, and faculty staff. Ranging from teacher education to art history and Lingala language, diverse courses found common ground in IIIF and compatible tools, such as Madoc, Omeka S, Storiiies and Exhibit.so. Blended learning modules that combined workshops, video tutorials, manuals, best practices, and literature provided assistance during the courses. This presentation discusses how IIIF allowed teachers and students to relate DH to their domain-specific knowledge, and how this resulted in a more widespread integration of both IIIF and DH across Ghent University.
From text to image: linking TEI-XML and IIIF via expert sourced annotations and IIIF Change Discovery (recording on YouTube)
Many institutions have both:
1. transcripts for manuscripts or printed books, encoded as TEI-XML
2. digital surrogates for those same manuscripts or printed books, provided as IIIF
In an ideal world, it would be possible to display IIIF and TEI-XML side by side in the same discovery environment, using a shared model for document structure to drive navigation for both IIIF and TEI-XML, so that users can, for example, compare different witnesses to the same passage of text or read text and view images together. However, IIIF surrogates and TEI-XML encoded texts have often been created at different times, by different teams, as part of different workflows, and there may not be identifiers shared across both data models that facilitate machine linking between text and image. In the absence of machine methods for linking, manually linking text with image is often a labour-intensive process which requires expert knowledge of IIIF, or TEI-XML, or both. In this presentation, I will present one approach to resolving this problem, in which TEI-XML is used as the source of truth for text and document structure, and this text and document structure is fed—via a machine-readable API—into a crowdsourcing environment in which expert users can link text, document structure and IIIF resources together without knowing anything about the underlying IIIF or TEI representations of the manuscript(s) in question. The eventual output of this process is discoverable via the IIIF Change Discovery API, and transformed into manifests with valid IIIF Ranges that reflect the underlying structure of the TEI-XML and which can be used to:
* find all of the witnesses to a particular text
* navigate directly to a particular section of one or more manuscripts
* display text and image side-by-side
I will demonstrate the workflow for content creation, and the search and discovery environment that can be used to search and browse texts and images together.
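On the Change Discovery side of a workflow like this, a consumer pages through an ActivityStreams OrderedCollection and picks out the manifests affected by Create and Update activities. The helper and sample page below are an illustrative sketch, not the presenter's implementation; all URLs are placeholders.

```python
def changed_manifests(page: dict, since: str) -> list:
    """Return the object ids of Create/Update activities at or after `since`.
    ISO 8601 UTC timestamps compare correctly as plain strings."""
    return [
        item["object"]["id"]
        for item in page.get("orderedItems", [])
        if item["type"] in ("Create", "Update") and item["endTime"] >= since
    ]

# A fabricated OrderedCollectionPage, shaped like a Change Discovery page.
page = {
    "type": "OrderedCollectionPage",
    "orderedItems": [
        {"type": "Update", "endTime": "2022-05-01T00:00:00Z",
         "object": {"id": "https://example.org/iiif/m1/manifest", "type": "Manifest"}},
        {"type": "Delete", "endTime": "2022-05-02T00:00:00Z",
         "object": {"id": "https://example.org/iiif/m2/manifest", "type": "Manifest"}},
    ],
}
```

A harvester would fetch each page of the stream in turn, apply a filter like this, and re-fetch only the manifests that changed.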
USE ME: progressive integration of IIIF with new software services at the Getty (recording on YouTube)
Two years into the launch of an institution-wide IIIF delivery system and the related ETL pipelines to generate IIIF master images and manifests, two ongoing challenges have been unfolding in the broader Getty Digital development plans: making use of these IIIF services by default, and consequently, improving those services in a continuous fashion.
This 20-minute presentation will offer an overview of the integration of various Getty software development projects with the existing IIIF and Linked Open Data infrastructure, including our public LOD access endpoints, data analysis and annotation projects, and public-facing websites and research tools. The goal is to highlight how Getty Digital has moved past the initial investment of building infrastructure and is beginning to reap the long-term benefits of that investment.
Exhibit: easy-to-use tools for promoting engagement and learning with your online digital collections (recording on YouTube)
Compounded by the Covid-19 pandemic, GLAM organisations have in recent years been facing outreach challenges in an increasingly competitive digital attention economy.
Exhibit (exhibit.so) is a user-friendly tool for promoting engagement and learning with your online digital collections. Built using the Universal Viewer to display IIIF content, users can create presentations, tell stories, craft quizzes, and more, using high-resolution images and 3D models.
Developed by Mnemoscene, the first iteration in 2021 responded directly to the University of St Andrews’ need to engage students with their digital collections during the Covid-19 lockdown. Since then, with support from the Universal Viewer Steering Group and feedback from a growing number of adopters including the British Library and Royal Pavilion and Museums Trust, a range of new features and a sustainability strategy have been developed.
From individuals to large institutions, we’ll discuss how you can get started with Exhibit and share plans for future development.
Using IIIF Images in Visual Essays (recording on YouTube)
This presentation will provide an overview of an online toolset enabling the creation of interactive Visual Essays using images available from image sharing sites such as Wikimedia Commons, Flickr, and JSTOR Community Collections.
The toolset includes:
- Semantic search for finding open access images using the Wikidata 'depicts' property and automatically generating IIIF manifests
- An annotation tool for associating W3C Web Annotations with image regions
- An editor for generating visual essays using an enhanced version of the Markdown language, enabling IIIF images to be embedded in the essay and interactions between the text and images
- A rendering engine for displaying the visual essays using any Markdown document as input
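For readers unfamiliar with how tools like these request image regions, a IIIF Image API request is simply a URL with fixed path segments: `{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}`. A minimal sketch, with a made-up base URL and identifier:

```python
def image_api_url(base: str, identifier: str, region: str = "full",
                  size: str = "max", rotation: str = "0",
                  quality: str = "default", fmt: str = "jpg") -> str:
    """Compose a IIIF Image API 3.0 request URL from its path segments.
    Defaults request the whole image at maximum size."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"
```

For example, `region="0,0,512,512"` with `size="256,"` asks the server to crop a 512-pixel square from the top-left corner and scale it to 256 pixels wide.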
From Manuscript to Transcription and Back Again: Closing the Virtuous Circle with Houghton MS Lat. 5 (recording on YouTube)
In Spring 2022, a group of Harvard-based medievalists collaboratively transcribed and prepared a digital edition of Houghton Library MS Lat. 5, a thirteenth-century manuscript containing a copy of Conrad of Saxony’s Commentary on the Holy Places. Undertaken as part of a Harvard paleography course, the collaborative transcription and editing of Conrad’s Commentary relied principally on the IIIF-compliant images that were first created, then cataloged, and now housed at the institution’s library. To allow for group transcription, course instructor Sean Gilsdorf worked with Harvard librarian Sara Powell to upload the IIIF-compliant images to Houghton Library’s account in FromThePage, a crowdsourcing platform that easily integrates IIIF images into a collaborative transcription interface. Once the project participants had completed the transcription following protocols developed by project consultant Laura Morreale, the data was exported and then prepared for online editing and publication at a subsequent workshop with Sarah Lang (University of Graz).
Since this intense study of Houghton MS Lat. 5 was undertaken entirely within the Harvard environment, and interoperability is a key feature of the IIIF system, project participants were eager to share their work and make their new edition of the Commentary available to the manuscript’s future users. Despite the local nature of all the resources used to complete the project, however, common workflows do not yet exist within the Harvard ecosystem to complete the “virtuous circle”—that is, to return the annotations on the text back to the cache of IIIF-compliant images that accompany the manuscript description in the library catalog. This is not surprising: the full range of IIIF capabilities has yet to be understood, much less exploited, by researchers who depend upon the technology for their work.
The 2022 IIIF Conference offers a valuable opportunity not simply to present this IIIF use case, but to discuss it and brainstorm solutions among members of the IIIF community gathered on the Harvard campus. We propose a talk describing this Harvard IIIF-based project, including the perspectives of the project leaders and the Harvard librarians who facilitated the work, followed by a “birds of a feather” session for conference participants working at the intersection of IIIF technology and archival studies. This latter session will solicit discussion of the problems and opportunities raised by this particular case study, and ways to address them drawn from the session participants’ own work. Our main goal is to determine how projects like ours might be brought into a productive relationship with the resources that generated them, what the IIIF “virtuous cycle” will actually look like, and how it could be accomplished in cooperation with other stakeholders on campus. More broadly, we hope that the work done with Houghton MS Lat. 5 can serve as a model for other institutions and researchers engaged in similar projects using IIIF images and tools.
Creating a Scholar’s Manifest: incorporating user-contributed annotations and metadata (recording on YouTube)
This presentation describes a workflow for a scholar to take a IIIF object such as a manuscript, add annotations via our implementation of Simple Annotation Server, and additionally incorporate user-contributed structured metadata using our Community Catalog tool. We combine these multiple facets of source and user-generated information into a prototyped manifest which can represent a significant amount of scholarly work. We will outline the API properties used and editorial decisions made in how we format this combined Scholar’s Manifest.
IIIF as an enabler for presentation and participation in the public history project Gent Gemapt (recording on YouTube)
Gent Gemapt (2020-2023) is a public history pilot project that relies on IIIF for the integration, presentation, and participative enrichment of heritage. The premise of Gent Gemapt is that all types of heritage can be mapped and that place is an ideal concept to structure and discover a city’s history. Gent Gemapt aims to merge heritage collections with historical maps in order to connect collections, the urban landscape, and the people of Ghent.
Since archival and heritage collections are often fragmented over different GLAMs, we need a consistent way to organise the data. To do this spatially, we create a geographical index - a gazetteer - which uniquely describes each place (e.g. streets, squares, bodies of water, public buildings) with an identifier and additional information. Additionally, IIIF provides a framework to unify collections across GLAMs.
In Gent Gemapt, the heritage partners have diverging digital infrastructures, metadata schemes and levels of IIIF implementation, which complicates the extraction of common properties such as dates, references to places, identifiers, coordinates etc. For each contributing partner, we add a metadata mapping service to solve this issue.
We use Omeka S as the framework to integrate the gazetteer, the collection data and the historical maps. Places and manifests of collection objects are loaded into Omeka S as items and referenced to each other. Historical maps are georeferenced, converted to tile services and added as Omeka items. An API is built on top of Omeka S to feed a rich user interface where users can navigate through the historical map layers, query the places and explore the collections in IIIF.
The next step for Gent Gemapt is to use the Madoc platform to evolve from a presentation platform to a participative one where Ghentians can enrich the location-based collections by enriching metadata, annotating images, transcribing documents, and georeferencing historical maps. Next to presenting the enriched collections in Gent Gemapt, a future challenge will be to let the enriched collections be received and validated by the collection managers as well.
Restoring a Space for Paul Dunbar (recording on YouTube)
Paul Laurence Dunbar was the first African-American poet to garner national critical acclaim. Born in Dayton, Ohio, in 1872, Dunbar penned a large body of dialect poems, standard English poems, essays, novels, and short stories before he died at the age of 33. His work often addressed the difficulties encountered by members of his race and the efforts of African-Americans to achieve equality in America. He was praised both by the prominent literary critics of his time and his literary contemporaries but the resources related to his works and life are often included in minority author exhibitions, aggregated genre-based collections, or disregarded altogether. The Dunbar Library and Archive (DLA) joins existing remote IIIF collections of images, manuscripts, scores, and performances with textual resources, geographic locations, and bibliographic records. In addition to enriching descriptions and relationships with new annotations, new authority entities are being created for the supporting people, places, and events surrounding his life. Students and community members are coordinating with the U.S. National Parks Service and community groups to digitize many artifacts and resources that are challenging to access. The University of Dayton offers a model for a focused and distributed open archive that offers better representation of the subject and his diverse communities, functions as a research portal for scholars and journalists, and encourages sharing and creation among teachers and members of the public. The team at Saint Louis University is building this data entry platform and public website as a reusable architecture on open and available tools including their TPEN transcription tool, public RERUM repository, as well as IIIF and W3C Web Annotation standards. The first phase of this project completes this August and the public site development continues through June 2023.
Mapping Color in History: Using IIIF to Annotate Historical Objects with Pigment Data (recording on YouTube)
Mapping Color in History (MCH) is a digital humanities project which examines scientific data drawn from material analyses of pigments in Asian paintings from a historical perspective. Existing pigment databases and publications are centered around the pigment and are difficult to use for non-specialists. MCH builds on this pigment and element data which has been collected by art conservation scientists during XRF, infrared, and Raman forensic analyses, but makes it easier for art history researchers to use the data. MCH also goes further than existing pigment databases by foregrounding the historical objects and presenting scientific pigment data in that context. Each work in the platform includes extensive bibliographic and art history metadata. This facilitates geospatial and temporal analyses, such as tracking where and when pigments first became available to artists by plotting their appearances on maps and timelines.
The visual display of information is central to Mapping Color in History. IIIF is a natural technology to use for a project which relies on visual works from numerous museum and library digital collections. Some institutions, such as the Harvard Art Museums, already provide resources via IIIF, and these were easy to integrate into MCH. However, many other institutions do not yet provide IIIF resources. MCH utilizes a new IIIF ingest pipeline provided by the Harvard Library Technical Services team (LTS), which provides image servers and infrastructure as a service to numerous university clients. This centralized IIIF infrastructure allows new projects across the university to integrate IIIF rapidly while staying focused on application features rather than IIIF-specific implementation details and maintenance. Our process starts by securing image rights from a holding institution and obtaining high-resolution files. Mapping Color researchers provide the image files and associated metadata. The MCH application deposits the images in an S3 bucket, generates a manifest, validates the manifest, and submits an ingest request to the LTS ingest service via a JSON endpoint with JWT authentication. The ingest service then creates and activates URNs for the manifest and all referenced assets and serves images and manifests.
IIIF also allows us to deeply integrate pigment analysis data with the object digital representations. Mapping Color in History uses a CatchPy annotation server, which allows researchers to annotate analyses as layers on a canvas representing the work. We have extended the Mirador 3 mirador-annotations plugin to include an adapter for CatchPy and to allow researchers to also capture the visible color and analysis methodology associated with each annotation point. This links the annotations back to our application data, so end users can click on a visible color and have it highlighted within Mirador. The UX is much more usable for researchers and casual users than typical pigment databases, as it is easy to see exactly where pigments appear on the work and compare analyses of different paintings.
Miiify: distributed crowdsourced annotations (recording on YouTube)
Miiify is an experimental annotation server that is based on the W3C Web Annotation Protocol. Rather than rely on running a centralised infrastructure, Miiify adopts a distributed approach to collaboration using a peer review process facilitated on GitHub. Each user interacts with their own instance of Miiify using a web interface that supports annotating content such as images. Contributions are then submitted back to the main GitHub repository through a pull request.
Miiify is built using Irmin, a distributed database technology following the same principles as Git. Some key features of Miiify include:
- Native Git support (no external database required)
- No requirement to support user authentication or accounts
- Provenance through Git commit history
- Light-weight (Docker image less than 60MB)
The ability to read and write data in the Git format means that there is no dependency on an external database technology. The contents of the Git repository (the annotations) can be navigated as a tree structure corresponding directly to the structure of the JSON. Running a dedicated centralised crowdsourcing service requires user account management, including governing what privileges users have to alter data on that platform. Using Git means we can take advantage of the identity management available in platforms such as GitHub. GitHub conforms to GDPR regulations by limiting the amount of personal data required to use the service. For example, to contribute on GitHub only a valid email address is required, which can provide a level of anonymity for a user. However, any changes to data made by users are visible through the review process these platforms facilitate. The complete history of changes is maintained, which allows data to be reverted to any known previous version. Miiify has been built with the OCaml toolchain, which compiles to efficient native code capable of running within a light-weight Docker container or even within a unikernel directly on a hypervisor or bare metal.
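Since Miiify stores W3C Web Annotations, the records it versions in Git look roughly like the following sketch: an annotation commenting on a rectangular image region via a fragment selector. The helper function and all identifiers are illustrative placeholders, not Miiify's API.

```python
def region_annotation(anno_id: str, source_id: str,
                      x: int, y: int, w: int, h: int, text: str) -> dict:
    """Build a W3C Web Annotation with a textual body commenting on a
    rectangular region of an image, selected with a media fragment."""
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "id": anno_id,
        "type": "Annotation",
        "motivation": "commenting",
        "body": {
            "type": "TextualBody",
            "value": text,
            "format": "text/plain",
        },
        "target": {
            "source": source_id,
            "selector": {
                "type": "FragmentSelector",
                "value": f"xywh={x},{y},{w},{h}",
            },
        },
    }
```

Serialized as JSON, an object like this maps naturally onto a file in a Git tree, which is what makes the pull-request review workflow described above possible.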
An open source, IIIF-enabled, natural history collection platform (recording on YouTube)
With the launch of the Natural History Museum of Denmark's collection platform (https://collections.snm.ku.dk/en) both researchers and members of the general public can explore the collections in intimate detail from anywhere in the world. There are hundreds of thousands of objects to explore with more being added frequently by the museum.
In this lightning talk I will explain how the collection weaves together images and metadata from the museum's in-house collection management system and the Global Biodiversity Information Facility (GBIF) to serve both the scientific community and the general public.
We hope this open source project, launched in partnership with Cogapp last year, can act as a catalyst for other natural history museums and GBIF contributors to present their collections online.
Etu - An open source image centric IIIF universal adaptor (recording on YouTube)
Considering the spectrum of IIIF user privacy preferences, two ends are not adequately served by the current IIIF open-source software community. At one end are users who are extremely sensitive about their image content and prefer to keep everything on their own hard disk. At the other are users who have no concerns about their content and want their images spread as widely as possible and preserved as long as possible. Etu serves both extremes with a unique image-centric way to enable IIIF from hard-disk images with only a couple of command lines. Our goal is to preserve the existing image organization as much as possible and shield the end user from complex details such as setting up an image server, a presentation server, and so on. The software is the result of collaborative work by IIIF China community members, and we are glad to share it with the global community and help the adoption of IIIF across the wider spectrum of user privacy preferences.
Designing a proof-of-concept for the implementation of the Presentation API 3.0 and Change Discovery API 1.0 within the DaSCH Service Platform (DSP) (recording on YouTube)
The Swiss National Data & Service Center for the Humanities (DaSCH), a IIIF-C full member since 2017, has implemented the Simple Image Presentation Interface (SIPI), a IIIF Image API 3.0 server that DaSCH maintains with the help of the Digital Humanities Lab of the University of Basel, within the DaSCH Service Platform (DSP), a software framework used for storing, sharing, and working with primary sources and data in the humanities.
DaSCH is in the process of designing a proof-of-concept for the deployment of the IIIF Presentation API 3.0 and the IIIF Change Discovery API 1.0, on the basis of the existing resources and projects hosted on DSP. For the Presentation API, templates will first be created manually with the help of the cookbook recipes. Then, scripts enabling the semi-automated generation of IIIF resources will be designed. As for Change Discovery, DaSCH already provides an event log of created resources, so an assessment of how an ActivityStreams endpoint could be built on top of it will be carried out.
The lightning talk will explore the process and challenges involved in deploying these IIIF APIs and how they could be implemented within DSP to allow users to further contextualise cultural heritage resources on and off the platform as well as enabling improved aggregation thereof.
IIIF - Could it be time to swap the Image for Information? (recording on YouTube)
IIIF has been developing now for over 10 years, evolving from a collaborative method for presenting images on the web into a robust, well-documented, global framework for sharing and linking images, annotations, video, audio, and soon, hopefully, even 3D models. So it seems about time to consider at what stage IIIF might transition into the International Information Interoperability Framework. This lightning talk will introduce a new, simple IIIF Collection and Manifest presentation tool, based around Mirador 3, which uses the existing Presentation API to link sets of images to PDF versions of related publications. It also explores how the Presentation API could be used, or potentially extended, to share further data sets and to support the presentation of simple interactive tables or even graphs, pushing the idea that the documented structure of IIIF could be reused or extended to share additional types of data, and demonstrating a working example of why it could be time to swap "Image" for "Information".
Bringing IIIF to the DSpace community (recording on YouTube)
Starting with version 7.2, DSpace provides basic support for IIIF out of the box. The work was the result of a joint effort between Willamette University and 4Science. Willamette University had begun work on DSpace version 7 and IIIF as a way to enhance access to digital content that was being hosted on two local systems. A key objective was to replace this existing infrastructure with a single, community-supported solution. Since 2017, 4Science had been developing an addon for DSpace (starting from version 5) to support IIIF, easily integrated with a set of external image servers such as Cantaloupe or Digilib. On the basis of these experiences, an effective collaboration started, aimed at integrating IIIF support into the official DSpace codebase.
DSpace 7 now allows institutions to upload images and automatically get a IIIF manifest for the item, generated from item- and bitstream-level (image) metadata; in this way the table of contents can be easily managed. In principle, any IIIF-compliant image server can be used, although instructions and full configuration examples are provided for Cantaloupe. Experimental support for the IIIF Search API is also available and is expected to be refined in future releases.
The presentation will introduce the available features, the architecture, the tools and strategies that can help institutions to deal with large collections using bulk imports.
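To make the manifest-per-item idea concrete, here is a minimal, illustrative sketch of a IIIF Presentation 3 manifest with one canvas per image, similar in shape to what a repository such as DSpace generates for an item. All identifiers, dimensions, and the `make_manifest` helper are hypothetical placeholders, not DSpace's actual output.

```python
def make_manifest(item_id, label, image_ids, base="https://example.org"):
    """Build a Presentation 3 manifest dict: one canvas per uploaded image."""
    canvases = []
    for i, img in enumerate(image_ids, start=1):
        canvas_id = f"{base}/iiif/{item_id}/canvas/{i}"
        canvases.append({
            "id": canvas_id,
            "type": "Canvas",
            "width": 2000,
            "height": 3000,
            "items": [{
                "id": f"{canvas_id}/page",
                "type": "AnnotationPage",
                "items": [{
                    "id": f"{canvas_id}/page/anno",
                    "type": "Annotation",
                    "motivation": "painting",
                    "body": {
                        # Served by an image server such as Cantaloupe.
                        "id": f"{base}/image/iiif/{img}/full/max/0/default.jpg",
                        "type": "Image",
                        "format": "image/jpeg",
                    },
                    "target": canvas_id,
                }],
            }],
        })
    return {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": f"{base}/iiif/{item_id}/manifest",
        "type": "Manifest",
        "label": {"en": [label]},
        "items": canvases,
    }

manifest = make_manifest("item-42", "Sample item", ["img-a", "img-b"])
```

Because each canvas is derived from one bitstream, reordering the image bitstreams reorders the table of contents, which is the metadata-driven behavior described above.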
Transcribing Primary Sources using FairCopy and IIIF (Recording on YouTube)
FairCopy is a simple and powerful tool for reading, transcribing, and encoding primary sources using the TEI Guidelines. FairCopy can import IIIF manifests as a starting point for transcription. Users can then highlight zones on each surface and link them to the transcription. FairCopy exports valid TEI XML which is linked back to the original IIIF endpoints. In this lightning talk, we will demonstrate IIIF functionality in FairCopy.
An improvement for Video annotation on Mirador (Recording on YouTube)
With version 3 of the IIIF Presentation API (henceforth, IIIFv3), we now have a set of rules for annotating video; however, implementations that handle it easily are still scarce. In this situation, Mirador version 3 finally implements the ability to show video, so we started developing a function to embed annotations in Mirador 3 video. As a result, not only text but also images can be overlaid on the video simply by writing a JSON file compliant with IIIFv3. This also means that the annotations on a video that has already been published can be re-edited. In addition, we have improved the subtitle display for ELAN's WebVTT. Together, these changes improve the interoperability of video through IIIF.
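The "just write a JSON file" workflow can be sketched as follows: an IIIFv3 annotation that overlays text on a video canvas for a time window, targeting a temporal media fragment (`#t=start,end`). The IDs and the `video_text_annotation` helper are hypothetical; this shows the assumed annotation shape, not Mirador's internal format.

```python
def video_text_annotation(canvas_id, text, start, end):
    """Build an annotation pinning `text` to seconds [start, end] of a video canvas."""
    return {
        "id": f"{canvas_id}/anno/1",
        "type": "Annotation",
        "motivation": "commenting",
        "body": {
            "type": "TextualBody",
            "value": text,
            "format": "text/plain",
        },
        # A media-fragment suffix targets a temporal slice of the canvas.
        "target": f"{canvas_id}#t={start},{end}",
    }

anno = video_text_annotation("https://example.org/canvas/v1", "A caption", 10, 15)
```

Swapping the `TextualBody` for an `Image` body is what allows images, not just text, to be pasted onto the video.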
Enriched Art with IIIF: Improving searchability of art collections through machine learning (Recording on YouTube)
Art exhibition catalogs provide documentation for items displayed during an exhibition at museums or art galleries. The contents of these catalogs are important for GLAM institutions, as many of the works described in them are now part of their collections. Moreover, art historians use these catalogs to research where works were exhibited, investigate particular curators, galleries, and artists, and discover general trends over time. Even though a large portion of the catalogs have been digitized, their contents are not transcribed, making automated processes such as full-text search difficult. Although several end-to-end Optical Character Recognition (OCR) systems can automate the transcription process, they often require technical knowledge for the selection of pre- and post-processing methods or retraining of the underlying OCR model. Thus the use of these sources remains a largely manual and time-intensive task.
During this talk, I will present a semi-automated pipeline which is able to extract metadata from exhibition catalogs. I investigate this using 19th-century art exhibition catalogs of - predominantly Flemish - fine arts museums. This is done by implementing a general OCR model (Tesseract OCR) which is adapted to the characteristics of the art catalogs. The OCR text output is then transformed into IIIF annotations, which can be loaded as part of a IIIF manifest into Madoc - an open source, IIIF-based platform for (participatory) annotation, transcription and showcasing of digital collections. In Madoc, the OCR results can be further corrected by volunteers or domain experts. Afterward, the corrected text is mined using machine learning techniques such as Named Entity Recognition (NER) and Document Layout Analysis. The extracted keywords are then linked to external authorities such as RKD Artists and the Art and Architecture Thesaurus (AAT). Finally, this linked information is embedded in the IIIF manifest. This method assists in reusing, sharing and querying the extracted information without requiring deep technical expertise. Thanks to this enriched Linked Open Data, art historians will be able to compile exhibition timelines for artworks and discover previously unknown patterns between artworks based on their exhibition history.
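The "OCR output to IIIF annotations" step of such a pipeline can be sketched roughly like this: each recognized word box becomes a `supplementing` annotation targeting an `xywh` region of the page canvas. The input tuple format, IDs, and the `ocr_to_annotations` helper are hypothetical, not the talk's actual implementation.

```python
def ocr_to_annotations(canvas_id, words):
    """words: list of (text, x, y, w, h) tuples from an OCR engine.
    Returns an AnnotationPage anchoring each word to its region on the canvas."""
    return {
        "id": f"{canvas_id}/ocr",
        "type": "AnnotationPage",
        "items": [{
            "id": f"{canvas_id}/ocr/{i}",
            "type": "Annotation",
            "motivation": "supplementing",
            "body": {
                "type": "TextualBody",
                "value": text,
                "format": "text/plain",
            },
            # xywh fragment pins the transcription to its pixel region.
            "target": f"{canvas_id}#xywh={x},{y},{w},{h}",
        } for i, (text, x, y, w, h) in enumerate(words)],
    }

page = ocr_to_annotations(
    "https://example.org/canvas/p1",
    [("Salon", 120, 80, 300, 60), ("1850", 440, 80, 160, 60)],
)
```

An annotation page in this shape can be referenced from a manifest and then corrected word by word in an annotation platform, which is the crowdsourced-correction loop described above.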
Chronophage: Leveraging IIIF standards to build the CoKL Database (Recording on YouTube)
The Corpus Kalendarium Database, or CoKL DB*, is a relational database of the devotional calendars from Books of Hours. As of February 2022, it contains 376 transcribed calendars and metadata about a further 267 manuscripts, yielding 137,605 calendar entries in total. The project is built from, and only possible with, the large-scale manuscript digitization work at repositories worldwide, and the IIIF standard of presenting those digital facsimiles. Although not all of the digital manuscripts added come from IIIF-enabled collections, the standard enables easier discovery and transcription of those manuscripts. This will be a case study of a big data digital humanities project only possible due to the wide-scale use of the IIIF manifest standard by libraries, museums, and other manuscript-holding institutions.
DetektIIIF, a IIIF browser extension (Recording on YouTube)
More and more museums, libraries, galleries, and archives support IIIF, enabling scholars and culture enthusiasts to work with data from heterogeneous sources. But how IIIF content is displayed on web pages, and how links to manifests and collections are presented, still varies from institution to institution. Furthermore, copying URLs to JSON files is not a familiar way of working for many users. The detektIIIF browser extension attempts to give users consistent, convenient access to IIIF resources on websites: it automatically detects IIIF resources and displays them in an orderly manner. The extension also examines some quality features of IIIF implementations and displays the details. For a few years, only a prototype version of detektIIIF for Chrome existed. On the initiative of the Zentralbibliothek Zürich, seige.digital GbR developed a more advanced version of detektIIIF with improved features, compatibility with both Firefox and Chrome, and an interface that can be customized through themes for institutional use. DetektIIIF is open source software. This lightning talk demonstrates the functionality and architecture of the extension, and invites attendees to use detektIIIF and participate in its further development.
Fun With IIIF (Recording on YouTube)
Give me your fun, your frivolous,
Your huddled experiments yearning to breathe free,
The wretched refuse of your latest hackathons.
Send these, the lightheartedness, tempest-tost to me,
I lift my image server beside the golden door!
Reaching out to the crowds: IIIF as a scholarly practice (Recording on YouTube)
We believe that IIIF has huge, not yet fully exploited potential for scholars in the (digital) humanities. User-created IIIF collections could serve as a means to make research data accessible and shareable, which could ultimately lead to a more open scholarly practice across the whole data life cycle. From a UX perspective, we identified two main hurdles: first, IIIF tools need to be better integrated in order to provide a seamless user experience and to support scholars in "getting the job done". Second, humanities scholars may simply be unaware of the potential of IIIF tools, which present relatively high technical barriers and little apparent benefit to users with little or no technical knowledge.
We developed our use case "Storing and sharing user-created IIIF collections" from these considerations, with the goal of reusing tools and components developed by the IIIF community and of creating a Mirador Viewer workspace usable by an audience with little or no tech background. Our solution design is as follows:
1. A Mirador Viewer serves as central workspace which can be directly accessed from e-manuscripta and e-rara
2. Users coming from those platforms can gather IIIF manifests in the same Mirador instance
3. The browser plugin ZB-detektIIIF, by Leander Seige and the ZB-Lab, allows users to create IIIF collections in JSON format
4. For autumn 2022, the development of a storage plugin is planned with API to GitHub to allow users to store and share their IIIF collections
With this presentation we would like to showcase the potential
a) of IIIF collections as a means for an Open Scholarly practice (including the ensuing technical aspects)
b) of how to better represent scholarly real world scenarios in a IIIF environment (including the whole data life cycle)
c) of the Mirador Viewer as an integration platform for distributed IIIF components within an organizational digital heritage setting
d) of widening the audience through a holistic user experience including user flow and awareness.
About the authors:
The ZB-Lab of the Zentralbibliothek Zürich (Central Library Zurich) was only recently founded with the goal of making our digital assets more accessible and usable for a wider audience. With its dual role as public and research library, the ZB Zurich wants to address both user groups equally, hoping to advance the digital turn in a user-friendly way whilst benefiting from higher user interaction. The main function of the ZB-Lab is therefore to search for, find, and create digital service solutions that take into account the potential of the ZB's digital assets as well as its users' needs.
IIIF A/V in Practice: An Overview of IIIF Usage in Avalon Media System (Recording on YouTube)
Avalon Media System is an open source system, based on the Samvera technology stack, for managing and providing access to collections of digital audio and video. With Avalon in use at over a dozen institutions, IIIF support has become an important part of its strategy for presenting digital media assets and for interoperability with other digital repository systems. To this end, IIIF APIs have become key players across components within the application.
In this session we will share an overview of IIIF integration and usage across software developed by and with the Avalon team. This includes IIIF Presentation 3 manifest generation for digital objects in Avalon Media System; development on the IIIF Timeliner tool and its integration into Avalon; a new video player designed for rich presentation of IIIF Presentation 3 manifests, built using React and Video.js; and contributions to community cookbook recipes and use cases for IIIF APIs.
AMPlifying Interoperability: IIIF Integrations for the Audiovisual Metadata Platform (Recording on YouTube)
Since 2018, the Indiana University (IU) Libraries have been working with partners at AVP, New York Public Library (NYPL), and the University of Texas at Austin, with support from the Andrew W. Mellon Foundation, to build the Audiovisual Metadata Platform, or AMP. AMP is an open-source software platform that allows archivists and librarians to define and execute custom workflows that combine artificial intelligence and machine learning services with human expertise in order to more efficiently and effectively create metadata in support of discovery, identification, navigation, and rights determination. One of the features of AMP is the ability for implementers to add their own "adapters," which could be additional machine learning tools or data output generators that map from AMP's internal JSON schema to another data format, such as IIIF. In this session, we will share an overview of the AMP platform, illustrate potential use cases for implementation, demonstrate the use of the Avalon project's IIIF-based Timeliner tool for output correction, and discuss the potential of AMP for exporting machine learning annotations as IIIF manifests for use in emerging audiovisual platforms and players that support the IIIF Annotation format, such as AudiAnnotate and Aviary.
The Front End Theory (Recording on YouTube)
Digital collection front ends tend to lean on the "viewer" as the primary, and sometimes only, way for a user to interact with a IIIF manifest. This overview will dive into how we are modularizing our implementation of the IIIF Presentation 3.0 API across works, collections, and the web sites that showcase them. This approach has led us to create more modular IIIF viewing tools, such as Clover (a Manifest items tool), Bloom (a Collection items tool), and Nectar (a library of IIIF descriptive-property primitives), that extend the idea of a client and viewer.
Tools to be discussed:
Tour of the Leventhal Map and Education Center, Boston Public Library
This field trip will take conference participants to the historic Boston Public Library, the nation’s oldest major municipal public library, located in Copley Square in Boston’s Back Bay neighborhood. At the BPL, participants will have an opportunity to tour the Leventhal Map & Education Center, the library’s Digital Lab, and the art and architecture of the Central Library, including the historic 1895 McKim Building and the recently-renovated Boylston Street Building. At the Leventhal Center, participants will learn about initiatives leveraging IIIF-powered map collections, including georeferencing, digital exhibitions, and narrative tools with maps. In the Digital Lab, participants will get to see the BPL’s imaging studio, a major component of the Digital Commonwealth service hub that digitizes materials for cultural organizations across Massachusetts. After the tour, participants will find many restaurants in the Copley Square area and easy public transit to the airport and train stations.
Harvard Art Museum tour
Join Jeff Steward, Director of Digital Infrastructure and Emerging Technology, for a nerd's eye view of the Harvard Art Museums. We'll tour the museum top to bottom, making stops in the Lightbox Gallery for a sneak peek of the upcoming data visualization project Processing the Page: Computer Vision and Otto Piene's Sketchbooks; we'll swing by one of our fine art photo studios to catch a glimpse of how we transform physical art into IIIFed resources; then we'll visit a few galleries to marvel at art and play a game in which we'll pit our sense of aesthetics, art appreciation and interpretation against an AI's.
Harvard Widener Library
The Harry Elkins Widener Memorial Library is Harvard University's flagship library.
Built with a gift from Eleanor Elkins Widener, the library is a memorial to her son, Harry, Class of 1907. Harry was an enthusiastic young bibliophile who perished aboard the Titanic.
It had been Harry's plan to donate his personal collection to the University once it provided a suitable alternative to the outdated and inadequate library then located in Gore Hall. Mrs. Widener fulfilled her son's dream by building a facility of monumental proportions, with over 50 miles of shelves and the capacity to hold over three million volumes.
The library opened in 1915, but Harvard's collections continued to grow at an astounding rate. By the late 1930s, Widener's shelves were at capacity. Space was at a premium for staff and patrons as well as books, which led the administration to begin a lengthy decentralization process. Over time Harvard built several new libraries to house its increasingly specialized collections.
Widener Library ushered in the new millennium in the midst of its greatest change since opening in 1915. From 1999 to 2004, the building underwent extensive renovations to ensure the long-term preservation and security of collections, and to increase user space.
Harvard Houghton Library
Houghton Library opened in 1942 to provide a dedicated home for Harvard Library’s rapidly growing collections of rare books and manuscripts. Since then, it has become known as a research center and a setting for hands-on learning, exhibitions, and lectures and other public programs.
Houghton is not just a place that keeps books; it is a notable site of human activity that both reflects and contributes to the interconnectivity of Harvard Library as a whole.