Understanding Task Design Trade-offs in Crowdsourced Paraphrase Collection

Youxuan Jiang, Jonathan K. Kummerfeld, Walter S. Lasecki

Abstract
Linguistically diverse datasets are critical for training and evaluating robust machine learning systems, but data collection is a costly process that often requires experts. Crowdsourcing the process of paraphrase generation is an effective means of expanding natural language datasets, but there has been limited analysis of the trade-offs that arise when designing tasks. In this paper, we present the first systematic study of the key factors in crowdsourcing paraphrase collection. We consider variations in instructions, incentives, data domains, and workflows, and manually analyze the resulting paraphrases for correctness, grammaticality, and linguistic diversity. Our observations provide new insight into the trade-offs between accuracy and diversity in crowd responses that arise from task design, offering guidance for future paraphrase generation procedures.
Anthology ID:
P17-2017
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
103–109
URL:
https://www.aclweb.org/anthology/P17-2017
DOI:
10.18653/v1/P17-2017
PDF:
http://aclanthology.lst.uni-saarland.de/P17-2017.pdf
Presentation:
 P17-2017.Presentation.pdf
Dataset:
 P17-2017.Datasets.zip
Video:
 https://vimeo.com/234958413