Detecting Sarcasm in Conversation Context Using Transformer-Based Models

Adithya Avvaru, Sanath Vobilisetty, Radhika Mamidi


Abstract
Sarcasm detection, regarded as a sub-problem of sentiment analysis, is a particularly challenging task because the introduction of sarcastic words can flip the sentiment of a sentence. To date, most research has focused on detecting sarcasm within a single sentence, and there is very limited work on detecting sarcasm that emerges from multiple sentences. Current models use Long Short-Term Memory (LSTM) variants, with or without attention, to detect sarcasm in conversations. We show that models using state-of-the-art Bidirectional Encoder Representations from Transformers (BERT), which capture syntactic and semantic information across conversation sentences, perform better than these models. Based on data analysis, we estimated the number of sentences in the conversation that can contribute to the sarcasm, and our results agree with this estimation. We also perform a comparative study of different versions of our BERT-based model against LSTM variants and XLNet (both using the estimated number of conversation sentences) and find that the BERT-based models outperform them.
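As an illustration of how conversation context can be fed to a BERT-style model, the preceding turns and the response can be concatenated into one input sequence using BERT's special tokens. The helper below is a hypothetical sketch, not the authors' released code: it keeps only the most recent k context turns, in the spirit of the paper's estimate that only a few conversation sentences contribute to the sarcasm (the function name and parameter k are illustrative assumptions).

```python
def build_bert_input(context_sentences, response, k=3):
    """Concatenate the last k context turns with the response into a
    single BERT-style input string. Hypothetical helper, not the
    authors' published implementation."""
    # Keep only the most recent k turns, assumed to carry the cues
    # needed to resolve the sarcasm in the response.
    recent = context_sentences[-k:]
    # BERT separates segments with the [SEP] token; [CLS] marks the
    # position whose final hidden state is used for classification.
    return "[CLS] " + " [SEP] ".join(recent + [response]) + " [SEP]"
```

In practice a tokenizer would emit token IDs and segment masks rather than a raw string, but the string form makes the context-plus-response layout easy to see.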
Anthology ID:
2020.figlang-1.15
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | Fig-Lang | WS
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
98–103
Language:
URL:
https://www.aclweb.org/anthology/2020.figlang-1.15
DOI:
10.18653/v1/2020.figlang-1.15
PDF:
http://aclanthology.lst.uni-saarland.de/2020.figlang-1.15.pdf