#MeToo Alexa: How Conversational Systems Respond to Sexual Harassment

Amanda Cercas Curry, Verena Rieser


Abstract
Conversational AI systems, such as Amazon’s Alexa, are rapidly developing from purely transactional systems into social chatbots that can respond to a wide variety of user requests. In this article, we establish how current state-of-the-art conversational systems react to inappropriate requests, such as bullying and sexual harassment on the part of the user, by collecting and analysing the novel #MeTooAlexa corpus. Our results show that commercial systems mainly avoid answering, while rule-based chatbots show a variety of behaviours and often deflect. Data-driven systems, on the other hand, are often incoherent, but also run the risk of being interpreted as flirtatious and sometimes react with counter-aggression. This includes our own system, trained on “clean” data, which suggests that inappropriate system behaviour is not caused by data bias alone.
Anthology ID: W18-0802
Original: W18-0802v1
Version 2: W18-0802v2
Volume: Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing
Month: June
Year: 2018
Address: New Orleans, Louisiana, USA
Venues: EthNLP | NAACL | WS
Publisher: Association for Computational Linguistics
Pages: 7–14
URL: https://www.aclweb.org/anthology/W18-0802
DOI: 10.18653/v1/W18-0802
PDF: http://aclanthology.lst.uni-saarland.de/W18-0802.pdf