Jeffrey P. Bigham

Also published as: Jeffrey Bigham


2019

Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References
Prakhar Gupta | Shikib Mehri | Tiancheng Zhao | Amy Pavel | Maxine Eskenazi | Jeffrey Bigham
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

The aim of this paper is to mitigate the shortcomings of automatic evaluation of open-domain dialog systems through multi-reference evaluation. Existing metrics have been shown to correlate poorly with human judgement, particularly in open-domain dialog. One alternative is to collect human annotations for evaluation, which can be expensive and time-consuming. To demonstrate the effectiveness of multi-reference evaluation, we augment the test set of DailyDialog with multiple references. A series of experiments shows that the use of multiple references results in improved correlation between several automatic metrics and human judgement, for both the quality and the diversity of system output.
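The intuition behind multi-reference evaluation is that an open-domain prompt admits many valid responses, so a word-overlap metric should credit a hypothesis that matches any of them rather than one arbitrary gold response. Below is a minimal sketch of that idea using NLTK's sentence_bleu, which natively accepts multiple references; the example sentences are hypothetical and not drawn from the augmented DailyDialog test set, and this is an illustration of multi-reference scoring rather than the paper's exact evaluation pipeline.

```python
# Multi-reference BLEU sketch (assumes NLTK is installed).
# Example sentences below are hypothetical, not from DailyDialog.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences

hypothesis = "i am doing well thanks".split()
single_ref = ["i am fine thank you".split()]
multi_refs = [
    "i am fine thank you".split(),
    "i am doing well thanks for asking".split(),
    "pretty good how about you".split(),
]

# Against a single reference, a perfectly reasonable response scores poorly
# because it happens not to overlap with the one gold answer.
print(sentence_bleu(single_ref, hypothesis, smoothing_function=smooth))

# Against several valid references the same response is rewarded: BLEU clips
# each n-gram count against its maximum across all references, so matching
# any one of them is enough.
print(sentence_bleu(multi_refs, hypothesis, smoothing_function=smooth))
```

The same construction applies to other reference-based metrics (e.g., METEOR or embedding-based similarity) by taking the best score over the reference set, which is what makes the augmented test set useful beyond BLEU.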

2013

Text Alignment for Real-Time Crowd Captioning
Iftekhar Naim | Daniel Gildea | Walter Lasecki | Jeffrey P. Bigham
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2006

Names and Similarities on the Web: Fact Extraction in the Fast Lane
Marius Paşca | Dekang Lin | Jeffrey Bigham | Andrei Lifchits | Alpa Jain
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics