A Benchmark for Structured Procedural Knowledge Extraction from Cooking Videos

Frank F. Xu, Lei Ji, Botian Shi, Junyi Du, Graham Neubig, Yonatan Bisk, Nan Duan


Abstract
Watching instructional videos is a common way to learn about procedures. Video captioning is one way of automatically collecting such knowledge. However, it provides only an indirect, overall evaluation of multimodal models, with no finer-grained quantitative measure of what they have learned. We instead propose a benchmark of structured procedural knowledge extracted from cooking videos. This work is complementary to existing tasks but requires models to produce interpretable structured knowledge in the form of verb-argument tuples. Our manually annotated open-vocabulary resource includes 356 instructional cooking videos and 15,523 video clip/sentence-level annotations. Our analysis shows that the proposed task is challenging, and that standard modeling approaches such as unsupervised segmentation, semantic role labeling, and visual action detection perform poorly when forced to predict every action of a procedure in a structured form.
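
To make the target output concrete, below is a minimal Python sketch of what a verb-argument tuple annotation might look like. The class name, field names, and the example recipe step are illustrative assumptions for this sketch; the paper itself only specifies that annotations are open-vocabulary verb-argument tuples aligned to video clips/sentences.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionTuple:
    # One extracted action: a predicate verb plus its arguments
    # (objects, locations, etc.). Schema is hypothetical, not the
    # dataset's actual annotation format.
    verb: str
    arguments: List[str] = field(default_factory=list)

# Hypothetical annotation for a clip whose transcript sentence is
# "Add the chopped onions to the pan."
example = ActionTuple(verb="add", arguments=["chopped onions", "pan"])
print(example.verb, example.arguments)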
Anthology ID:
2020.nlpbt-1.4
Volume:
Proceedings of the First International Workshop on Natural Language Processing Beyond Text
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | nlpbt
Publisher:
Association for Computational Linguistics
Pages:
30–40
URL:
https://www.aclweb.org/anthology/2020.nlpbt-1.4
DOI:
10.18653/v1/2020.nlpbt-1.4
PDF:
http://aclanthology.lst.uni-saarland.de/2020.nlpbt-1.4.pdf
Optional supplementary material:
 2020.nlpbt-1.4.OptionalSupplementaryMaterial.pdf