AraWEAT: Multidimensional Analysis of Biases in Arabic Word Embeddings

Anne Lauscher, Rafik Takieddin, Simone Paolo Ponzetto, Goran Glavaš


Abstract
Recent work has shown that distributional word vector spaces often encode human biases like sexism or racism. In this work, we conduct an extensive analysis of biases in Arabic word embeddings by applying a range of recently introduced bias tests on a variety of embedding spaces induced from corpora in Arabic. We measure the presence of biases across several dimensions, namely: embedding models (Skip-Gram, CBOW, and FastText) and vector sizes, types of text (encyclopedic and news text vs. user-generated content), dialects (Egyptian Arabic vs. Modern Standard Arabic), and time (diachronic analyses over corpora from different time periods). Our analysis yields several interesting findings, e.g., that implicit gender bias in embeddings trained on Arabic news corpora steadily increases over time (between 2007 and 2017). We make the Arabic bias specifications (AraWEAT) publicly available.
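The bias tests the abstract refers to follow the Word Embedding Association Test (WEAT) paradigm: two target word sets X and Y are compared against two attribute word sets A and B, and the bias is quantified as an effect size over differential cosine-similarity associations. The following is a minimal sketch of that statistic; the toy vectors and function names are illustrative, not the paper's implementation.

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # s(w, A, B): mean similarity of word vector w to attribute set A
    # minus its mean similarity to attribute set B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size: difference in mean association of the
    # two target sets, normalized by the pooled standard deviation
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    joint = np.array(sx + sy)
    return (np.mean(sx) - np.mean(sy)) / joint.std(ddof=1)

# Toy example: X-words lean toward attribute A, Y-words toward B,
# so the effect size comes out positive (bias in the tested direction).
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
Y = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]
print(weat_effect_size(X, Y, A, B))  # positive effect size
```

Swapping X and Y flips the sign of the effect size, which is the usual sanity check when applying such tests to real embedding spaces.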
Anthology ID:
2020.wanlp-1.17
Volume:
Proceedings of the Fifth Arabic Natural Language Processing Workshop
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Venues:
COLING | WANLP
Publisher:
Association for Computational Linguistics
Pages:
192–199
URL:
https://www.aclweb.org/anthology/2020.wanlp-1.17
PDF:
http://aclanthology.lst.uni-saarland.de/2020.wanlp-1.17.pdf