Enriching Historic Photography with Structured Data using Image Region Segmentation

Taylor Arnold, Lauren Tilton


Abstract
Cultural institutions such as galleries, libraries, archives, and museums continue to make commitments to large-scale digitization of collections. An ongoing challenge is how to increase discovery and access through structured data and the semantic web. In this paper we describe a method for using computer vision algorithms that automatically detect regions of “stuff” — such as the sky, water, and roads — to produce rich and accurate structured data triples for describing the content of historic photography. We apply our method to a collection of 1610 documentary photographs produced in the 1930s and 1940s by the FSA-OWI division of the U.S. federal government. Manual verification of the extracted annotations yields an accuracy rate of 97.5%, compared to 70.7% for relations extracted from object detection and 31.5% for automatically generated captions. Our method also produces a rich set of features, providing more unique labels (1170) than either the captions (1040) or object detection (178) methods. We conclude by describing directions for a linguistically focused ontology of region categories that can better enrich historical image data. Open source code and the extracted metadata from our corpus are made available as external resources.
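The mapping the abstract describes, from detected stuff regions to subject-predicate-object triples, could be sketched as follows. The region labels, area threshold, identifier scheme, and function name are illustrative assumptions for this sketch, not the schema or code used in the paper:

```python
# Sketch: convert region-segmentation output into RDF-style triples.
# The "photo:"/"region:" URI prefixes, the `depicts` predicate, and the
# minimum-area filter are illustrative assumptions, not the paper's schema.

def regions_to_triples(photo_id, regions, min_area=0.05):
    """Emit (subject, predicate, object) triples for each detected
    stuff region covering at least `min_area` of the image."""
    triples = []
    for label, area_fraction in regions:
        if area_fraction >= min_area:
            triples.append((f"photo:{photo_id}", "depicts", f"region:{label}"))
    return triples

# Hypothetical segmentation output: (category, fraction of image covered).
segmentation = [("sky", 0.41), ("road", 0.22), ("water", 0.02)]
print(regions_to_triples("fsa-001", segmentation))
# The water region falls below the area threshold and is dropped.
```

Filtering on the fraction of the image a region covers is one plausible way to keep only salient regions; the paper's actual pipeline may use different criteria.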
Anthology ID:
2020.ai4hi-1.1
Volume:
Proceedings of the 1st International Workshop on Artificial Intelligence for Historical Image Enrichment and Access
Month:
May
Year:
2020
Address:
Marseille, France
Venues:
AI4HI | LREC | WS
Publisher:
European Language Resources Association (ELRA)
Pages:
1–10
Language:
English
URL:
https://www.aclweb.org/anthology/2020.ai4hi-1.1
PDF:
http://aclanthology.lst.uni-saarland.de/2020.ai4hi-1.1.pdf