Multi-Task Sequence Prediction For Tunisian Arabizi Multi-Level Annotation

Elisa Gugliotta, Marco Dinarelli, Olivier Kraif


Abstract
In this paper we propose a multi-task sequence prediction system, based on recurrent neural networks, used to annotate a Tunisian Arabizi corpus at multiple levels. The annotations performed are text classification, tokenization, PoS tagging, and encoding of Tunisian Arabizi into CODA* Arabic orthography. The system is trained to predict all the annotation levels in cascade, starting from the Arabizi input. We evaluate the system on the German TIGER corpus, suitably converting the data into a multi-task problem, in order to show the effectiveness of our neural architecture. We also show how we used the system to annotate a Tunisian Arabizi corpus, which was afterwards manually corrected and used to further evaluate sequence models on Tunisian data. Our system is developed for the Fairseq framework, which allows fast and easy use for any other sequence prediction problem.
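The cascade idea in the abstract, where each annotation level is predicted from the input plus the previous level's predictions, can be sketched as a toy in plain Python. The lookup-table "models", the example tokens, and the PRON/VERB labels below are hypothetical stand-ins for the paper's recurrent decoders, chosen only to make the data flow concrete:

```python
# Toy sketch of cascade-style multi-task sequence prediction:
# level k is predicted from the tokens plus level k-1's predictions.
# The dict-based "models" are hypothetical stand-ins for RNN decoders.

def predict_level(tokens, context, model):
    """Predict one label per token, conditioning on the earlier level."""
    return [model.get((tok, ctx), "UNK") for tok, ctx in zip(tokens, context)]

def cascade(tokens, levels):
    """Run the annotation levels in order; each sees the previous output."""
    context = [None] * len(tokens)   # the first level has no earlier predictions
    outputs = {}
    for name, model in levels:
        preds = predict_level(tokens, context, model)
        outputs[name] = preds
        context = preds              # feed these predictions to the next level
    return outputs

# Hypothetical two-level cascade: PoS tagging, then CODA*-style transliteration.
levels = [
    ("pos",  {("ena", None): "PRON", ("nheb", None): "VERB"}),
    ("coda", {("ena", "PRON"): "انا", ("nheb", "VERB"): "نحب"}),
]
out = cascade(["ena", "nheb"], levels)
# out["pos"]  -> ["PRON", "VERB"]
# out["coda"] -> ["انا", "نحب"]
```

The point of the sketch is only the wiring: later levels condition on earlier predictions, so an error at one level can propagate downstream, which is the usual trade-off of cascade architectures.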
Anthology ID:
2020.wanlp-1.16
Volume:
Proceedings of the Fifth Arabic Natural Language Processing Workshop
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Venues:
COLING | WANLP
Publisher:
Association for Computational Linguistics
Pages:
178–191
URL:
https://www.aclweb.org/anthology/2020.wanlp-1.16
PDF:
http://aclanthology.lst.uni-saarland.de/2020.wanlp-1.16.pdf