Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media

Xiangjue Dong, Changmao Li, Jinho D. Choi


Abstract
We present a transformer-based sarcasm detection model that accounts for the context of the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention between the target utterance and the relevant context in the thread. The context-aware models are evaluated on two social media datasets, Twitter and Reddit, and show 3.1% and 7.0% improvements over their baselines. Our best models achieve F1 scores of 79.0% and 75.0% on the Twitter and Reddit datasets respectively, ranking among the highest-performing of the 36 participating systems in this shared task.
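A common way to realize the context-aware setup the abstract describes is to pack the thread context and the target utterance into a single transformer input sequence, so that self-attention can relate the target to its context. The sketch below is a hypothetical illustration of that input construction in the BERT-style `[CLS]`/`[SEP]` format; the function name, the separator layout, and the context-window size are assumptions, not the authors' released code.

```python
def build_context_input(context_utterances, target_utterance, max_context=3):
    """Concatenate recent thread context with the target utterance into one
    BERT-style sequence (hypothetical format, not the paper's exact layout)."""
    # Keep only the most recent context turns so the sequence fits
    # a typical transformer length limit.
    context = context_utterances[-max_context:]
    # Context turns and the target are joined with [SEP] so multi-head
    # attention can attend across utterance boundaries.
    return "[CLS] " + " [SEP] ".join(context + [target_utterance]) + " [SEP]"

thread = ["I love waiting in line for hours.", "Same, best part of my day."]
print(build_context_input(thread, "Truly a thrilling experience."))
# → [CLS] I love waiting in line for hours. [SEP] Same, best part of my day. [SEP] Truly a thrilling experience. [SEP]
```

In practice the resulting string would be tokenized and fed to a pretrained transformer encoder, with the `[CLS]` representation passed to a binary sarcasm classifier.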
Anthology ID:
2020.figlang-1.38
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | Fig-Lang | WS
Publisher:
Association for Computational Linguistics
Note:
Pages:
276–280
URL:
https://www.aclweb.org/anthology/2020.figlang-1.38
DOI:
10.18653/v1/2020.figlang-1.38
PDF:
http://aclanthology.lst.uni-saarland.de/2020.figlang-1.38.pdf
Video:
 http://slideslive.com/38929707