Matt Huenerfauth


2020

An Isolated-Signing RGBD Dataset of 100 American Sign Language Signs Produced by Fluent ASL Signers
Saad Hassan | Larwan Berke | Elahe Vahdani | Longlong Jing | Yingli Tian | Matt Huenerfauth
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives

We have collected a new dataset consisting of color and depth videos of fluent American Sign Language (ASL) signers performing sequences of 100 ASL signs, recorded with a Kinect v2 sensor. This directed dataset was originally collected as part of an ongoing collaborative project to aid in the development of a sign-recognition system for identifying occurrences of these 100 signs in video. The set of signs consists of vocabulary items that would commonly be learned in a first-year ASL course offered at a university, although the specific set of signs selected for inclusion in the dataset was motivated by project-related factors. Given increasing interest among sign-recognition and other computer-vision researchers in red-green-blue-depth (RGBD) video, we release this dataset for use by the research community. In addition to the RGB video files, we share depth and HD face data as well as additional features of face, hands, and body produced through post-processing of this data.

2019

Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken Dialogues
Sushant Kafle | Cissi Ovesdotter Alm | Matt Huenerfauth
Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies

Prosodic cues in conversational speech aid listeners in discerning a message. We investigate whether acoustic cues in spoken dialogue can be used to identify the importance of individual words to the meaning of a conversation turn. Individuals who are Deaf and Hard of Hearing often rely on real-time captions in live meetings. Word error rate, a traditional metric for evaluating automatic speech recognition (ASR), fails to capture that some words are more important for a system to transcribe correctly than others. We present and evaluate neural architectures that use acoustic features for 3-class word importance prediction. Our model performs competitively against state-of-the-art text-based word-importance prediction models, and it demonstrates particular benefits when operating on imperfect ASR output.
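The 3-class setup described in this abstract can be illustrated with a minimal sketch. This is not the paper's neural architecture; it is a hypothetical linear softmax scorer over invented per-word acoustic cues (pitch range, energy, duration), shown only to make the low/medium/high classification task concrete.

```python
import math

# Illustrative sketch only: the paper uses a neural architecture over
# acoustic-prosodic features; here a hand-set linear scorer shows the
# 3-class word-importance setup (low / medium / high).

CLASSES = ["low", "medium", "high"]

def softmax(scores):
    # Numerically stable softmax over raw class scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_word(features, weights, bias):
    """features: per-word acoustic cues, e.g. (pitch_range, energy, duration),
    all hypothetical; weights: one weight vector per class; bias: one per class."""
    scores = [
        sum(w * f for w, f in zip(weights[c], features)) + bias[c]
        for c in range(len(CLASSES))
    ]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs

# Hypothetical hand-set parameters, for demonstration only.
W = [(-1.0, -1.0, -0.5), (0.2, 0.1, 0.1), (1.0, 1.2, 0.8)]
b = [0.5, 0.0, -0.5]

# A word with high pitch range, energy, and duration scores as "high".
label, probs = classify_word((0.9, 0.8, 0.7), W, b)
```

In the actual work, such a classifier would be trained on annotated dialogue transcripts and evaluated on ASR output, where word-level importance matters more than the uniform weighting implied by word error rate.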

2018

A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts
Sushant Kafle | Matt Huenerfauth
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

Continuous Profile Models in ASL Syntactic Facial Expression Synthesis
Hernisa Kacorri | Matt Huenerfauth
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

Bridging the gap between sign language machine translation and sign language animation using sequence classification
Sarah Ebling | Matt Huenerfauth
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies

Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data
Matt Huenerfauth | Pengfei Lu | Hernisa Kacorri
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies

Evaluating a Dynamic Time Warping Based Scoring Algorithm for Facial Expressions in ASL Animations
Hernisa Kacorri | Matt Huenerfauth
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies

2012

Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data
Pengfei Lu | Matt Huenerfauth
Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies

2010

Collecting a Motion-Capture Corpus of American Sign Language for Data-Driven Generation Research
Pengfei Lu | Matt Huenerfauth
Proceedings of the NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive Technologies

A Comparison of Features for Automatic Readability Assessment
Lijun Feng | Martin Jansche | Matt Huenerfauth | Noémie Elhadad
Coling 2010: Posters

2009

Cognitively Motivated Features for Readability Assessment
Lijun Feng | Noémie Elhadad | Matt Huenerfauth
Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)

2007

Design and Evaluation of an American Sign Language Generator
Matt Huenerfauth | Liming Zhou | Erdan Gu | Jan Allbeck
Proceedings of the Workshop on Embodied Language Processing

2006

Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium
Matt Huenerfauth | Bo Pang | Mitch Marcus
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Doctoral Consortium

2005

American Sign Language Generation: Multimodal NLG with Multiple Linguistic Channels
Matt Huenerfauth
Proceedings of the ACL Student Research Workshop

2004

A Multi-Path Architecture for Machine Translation of English Text into American Sign Language Animation
Matt Huenerfauth
Proceedings of the Student Research Workshop at HLT-NAACL 2004