Japanese Predicate Conjugation for Neural Machine Translation

Michiki Kurosawa, Yukio Matsumura, Hayahide Yamagishi, Mamoru Komachi


Abstract
Neural machine translation (NMT) has a drawback in that it can generate only high-frequency words, owing to the computational cost of the softmax function in the output layer. In Japanese-English NMT, Japanese predicate conjugation inflates the vocabulary size: one verb can have as many as 19 surface forms, so the vocabulary list fills up with variants of the same predicates. In this research, we focus on predicate conjugation to compress the Japanese vocabulary. We propose methods that use predicate conjugation information without discarding linguistic information; the proposed methods can generate low-frequency words and deal with unknown words. We consider two ways to introduce conjugation information: the first treats it as a token (conjugation token) and the second as an embedded vector (conjugation feature). Experiments demonstrate that these methods compress the vocabulary size by approximately 86.1% (Tanaka corpus) and allow the NMT models to output words not found in the training data. Furthermore, BLEU scores improved by 0.91 points in Japanese-to-English translation and by 0.32 points in English-to-Japanese translation on ASPEC.
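To illustrate the idea behind the conjugation-token method described above, the sketch below rewrites each conjugated predicate as its base (dictionary) form followed by a special token naming the conjugation type, so the vocabulary needs only one entry per verb lemma plus a small closed set of conjugation tokens. This is a minimal sketch, not the authors' code: the surface-to-(lemma, type) table is a hypothetical stand-in for the output of a morphological analyzer such as MeCab, and the tag names are invented for illustration.

```python
# Toy analysis table standing in for a morphological analyzer:
# surface form -> (lemma, conjugation-type tag). Hypothetical example
# entries for the verb 書く "to write".
ANALYSES = {
    "書いた": ("書く", "<past>"),
    "書いて": ("書く", "<te-form>"),
    "書きます": ("書く", "<polite>"),
}

def to_conjugation_tokens(tokens):
    """Rewrite each conjugated predicate as lemma + conjugation token."""
    out = []
    for tok in tokens:
        if tok in ANALYSES:
            lemma, tag = ANALYSES[tok]
            out.extend([lemma, tag])   # one lemma + one closed-class tag
        else:
            out.append(tok)            # non-predicates pass through
    return out

# All three surface variants of 書く now map to a single vocabulary
# entry plus a conjugation token, which is how the vocabulary shrinks.
print(to_conjugation_tokens(["本", "を", "書いた"]))
```

A decoder trained on such sequences generates the lemma and the conjugation token separately, so it can produce a surface form it never saw in training by combining a known lemma with a known conjugation type.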
Anthology ID:
N18-4014
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop
Month:
June
Year:
2018
Address:
New Orleans, Louisiana, USA
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
100–105
URL:
https://www.aclweb.org/anthology/N18-4014
DOI:
10.18653/v1/N18-4014
PDF:
http://aclanthology.lst.uni-saarland.de/N18-4014.pdf