Visual Question Generation from Radiology Images

Mourad Sarrouti, Asma Ben Abacha, Dina Demner-Fushman


Abstract
Visual Question Generation (VQG), the task of generating a question based on image contents, is an increasingly important area that combines natural language processing and computer vision. Although recent works have attempted to generate questions from images in the open domain, VQG in the medical domain has not yet been explored. In this paper, we introduce VQGR, an approach to generating visual questions about radiology images, i.e. an algorithm that is able to ask a question when shown an image. VQGR first generates new training data from the existing examples, based on contextual word embeddings and image augmentation techniques. It then uses a variational auto-encoder to encode images into a latent space and decode natural language questions. Automatic evaluation on the VQA-RAD dataset of clinical visual questions shows that VQGR achieves good performance compared with the baseline system. The source code is available at https://github.com/sarrouti/vqgr.
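The abstract's encode-then-decode idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: all weights, dimensions, and the toy vocabulary are hypothetical stand-ins, and the image features would in practice come from a trained CNN rather than random noise. The sketch shows the variational auto-encoder pattern the paper names: map image features to the parameters of a latent distribution, sample with the reparameterization trick, and greedily decode question tokens from the latent code.

```python
# Hypothetical sketch of VQG via a variational auto-encoder:
# encode image features -> latent z -> decode question tokens.
# Weights are random stand-ins for trained parameters.
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, LATENT_DIM = 64, 16
VOCAB_WORDS = ["<eos>", "what", "is", "the", "abnormality", "in", "this", "image"]
VOCAB = len(VOCAB_WORDS)

W_mu = rng.normal(0, 0.1, (LATENT_DIM, IMG_DIM))       # encoder mean head
W_logvar = rng.normal(0, 0.1, (LATENT_DIM, IMG_DIM))   # encoder log-variance head
W_dec = rng.normal(0, 0.1, (VOCAB, LATENT_DIM + VOCAB))  # decoder projection

def encode(image_vec):
    """Map image features to the parameters of q(z | image)."""
    return W_mu @ image_vec, W_logvar @ image_vec

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, max_len=8):
    """Greedily decode tokens conditioned on z and the previous token."""
    tokens, prev = [], np.zeros(VOCAB)
    for _ in range(max_len):
        logits = W_dec @ np.concatenate([z, prev])
        idx = int(np.argmax(logits))
        if VOCAB_WORDS[idx] == "<eos>":
            break
        tokens.append(VOCAB_WORDS[idx])
        prev = np.eye(VOCAB)[idx]
    return tokens

image = rng.normal(size=IMG_DIM)  # stand-in for CNN image features
mu, logvar = encode(image)
question = decode(reparameterize(mu, logvar))
print(question)
```

With trained weights, the same structure would produce clinically meaningful questions; here the output is only shape-correct, which is all the sketch is meant to demonstrate.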
Anthology ID:
2020.alvr-1.3
Volume:
Proceedings of the First Workshop on Advances in Language and Vision Research
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | ALVR | WS
Publisher:
Association for Computational Linguistics
Pages:
12–18
URL:
https://www.aclweb.org/anthology/2020.alvr-1.3
DOI:
10.18653/v1/2020.alvr-1.3
PDF:
http://aclanthology.lst.uni-saarland.de/2020.alvr-1.3.pdf
Video:
http://slideslive.com/38929760