Neural Sarcasm Detection using Conversation Context

Nikhil Jaiswal


Abstract
Social media platforms and discussion forums such as Reddit and Twitter are filled with figurative language. Sarcasm is one such category of figurative language whose presence in a conversation makes language understanding a challenging task. In this paper, we present a deep neural architecture for sarcasm detection. We investigate various pre-trained language representation models (PLRMs) such as BERT and RoBERTa, and fine-tune them on the Twitter dataset. We experiment with a variety of PLRMs, either on the Twitter utterance in isolation or utilizing the contextual information along with the utterance. Our findings indicate that by taking into consideration the three most recent preceding utterances, the model can more accurately classify a conversation as sarcastic or not. Our best-performing ensemble model achieves an overall F1 score of 0.790, which ranks us second on the leaderboard of the Sarcasm Shared Task 2020.
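The abstract's key idea, pairing a response with its three most recent context utterances before feeding it to a PLRM, can be sketched as follows. This is a minimal illustration of the context-concatenation step, not the authors' exact pipeline; the `[SEP]` separator convention and the `build_input` helper are assumptions for BERT-style models.

```python
SEP = " [SEP] "  # BERT-style separator between dialogue turns (assumption)

def build_input(context: list[str], response: str, k: int = 3) -> str:
    """Join the k most recent context utterances with the response
    into a single sequence, as a PLRM tokenizer would then encode it."""
    recent = context[-k:]  # keep only the last k conversational turns
    return SEP.join(recent + [response])

# Example: two context turns followed by a (possibly sarcastic) reply.
dialogue = ["Great weather today.", "Yeah, I love being soaked."]
print(build_input(dialogue, "Totally my favourite thing ever."))
```

In a full setup, the resulting string would be passed to a tokenizer (e.g. from the HuggingFace `transformers` library) and the encoded sequence fed to a fine-tuned classifier head for the sarcastic/not-sarcastic decision.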
Anthology ID:
2020.figlang-1.11
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | Fig-Lang | WS
Publisher:
Association for Computational Linguistics
Pages:
77–82
URL:
https://www.aclweb.org/anthology/2020.figlang-1.11
DOI:
10.18653/v1/2020.figlang-1.11
PDF:
http://aclanthology.lst.uni-saarland.de/2020.figlang-1.11.pdf
Video:
http://slideslive.com/38929701