Using Alternate Representations of Text for Natural Language Understanding

Venkat Varada, Charith Peris, Yangsook Park, Christopher Dipersio


Abstract
One of the core components of voice assistants is the Natural Language Understanding (NLU) model. Its ability to accurately classify the user’s request (or “intent”) and recognize named entities in an utterance is pivotal to the success of these assistants. NLU models can be challenged in some languages by code-switching or morphological and orthographic variations. This work explores the possibility of improving the accuracy of NLU models for Indic languages via the use of alternate representations of input text, specifically ISO-15919 and IndicSOUNDEX, a custom SOUNDEX variant designed for Indic languages. We used a deep neural network-based model to incorporate the information from alternate representations into the NLU model. We show that using alternate representations significantly improves the overall performance of NLU models when training data is limited.
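For background on the kind of phonetic representation the abstract refers to: IndicSOUNDEX is the authors' custom variant and its details are given in the paper itself, but the general SOUNDEX idea can be illustrated with the classic American SOUNDEX algorithm, which maps a word to its first letter plus three digits so that similar-sounding spellings collide. The sketch below implements only that classic algorithm, not IndicSOUNDEX.

```python
def soundex(word: str) -> str:
    """Classic American SOUNDEX: first letter + three digits, zero-padded.

    Similar-sounding consonants share a digit; vowels (and y) separate
    repeated digits, while h and w are transparent.
    """
    codes = {**dict.fromkeys("bfpv", "1"),
             **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"),
             "l": "4",
             **dict.fromkeys("mn", "5"),
             "r": "6"}
    word = word.lower()
    first = word[0].upper()
    digits = []
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        if ch in "hw":
            continue  # transparent: does not break a run of equal codes
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code  # vowels reset prev, allowing a repeated digit later
    return (first + "".join(digits) + "000")[:4]
```

Under this scheme, variant spellings such as "Robert" and "Rupert" both map to R163, which is the property a phonetic representation exploits: orthographic variation collapses into a shared code that a downstream NLU model can learn from.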
Anthology ID:
2020.nlp4convai-1.1
Volume:
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | NLP4ConvAI | WS
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://www.aclweb.org/anthology/2020.nlp4convai-1.1
DOI:
10.18653/v1/2020.nlp4convai-1.1
PDF:
http://aclanthology.lst.uni-saarland.de/2020.nlp4convai-1.1.pdf
Video:
http://slideslive.com/38929631