Bagging BERT Models for Robust Aggression Identification

Julian Risch, Ralf Krestel


Abstract
Modern transformer-based models with hundreds of millions of parameters, such as BERT, achieve impressive results on text classification tasks. This also holds for aggression identification and offensive language detection, where deep learning approaches consistently outperform less complex models, such as decision trees. While the complex models fit training data well (low bias), they also come with an unwanted high variance. Especially when fine-tuning them on small datasets, the classification performance varies significantly with slightly different training data. To overcome the high variance and provide more robust predictions, we propose an ensemble of multiple fine-tuned BERT models based on bootstrap aggregating (bagging). In this paper, we describe such an ensemble system and present our submission to the shared tasks on aggression identification 2020 (team name: Julian). Our submission is the best-performing system for five out of six subtasks. For example, we achieve a weighted F1-score of 80.3% for task A on the test dataset of English social media posts. In our experiments, we compare different model configurations and vary the number of models used in the ensemble. We find that the F1-score increases drastically when ensembling up to 15 models, but the returns diminish for more models.
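The bagging procedure described in the abstract can be sketched as follows. This is a minimal illustration only: a toy one-dimensional threshold classifier stands in for a fine-tuned BERT model, and the names (`bootstrap_sample`, `train_model`, `bagging_predict`) are illustrative, not taken from the paper's code.

```python
import random
from collections import Counter


def bootstrap_sample(data, rng):
    """Draw len(data) examples from data with replacement."""
    return [rng.choice(data) for _ in data]


def train_bagged_ensemble(data, train_model, n_models, seed=0):
    """Train n_models classifiers, each on its own bootstrap sample."""
    rng = random.Random(seed)
    return [train_model(bootstrap_sample(data, rng)) for _ in range(n_models)]


def bagging_predict(models, x):
    """Aggregate the ensemble by majority vote over member predictions."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]


def train_model(sample):
    # Toy stand-in for fine-tuning BERT: a 1-D threshold classifier.
    pos = [x for x, y in sample if y == 1]
    neg = [x for x, y in sample if y == 0]
    # Midpoint between the classes; fall back if a bootstrap sample
    # happens to miss one class entirely.
    t = (min(pos) + max(neg)) / 2 if pos and neg else 0.5
    return lambda x: 1 if x > t else 0


# Usage: a small, well-separated binary dataset of (feature, label) pairs.
data = [(i / 10, 0) for i in range(5)] + [(0.6 + i / 10, 1) for i in range(5)]
models = train_bagged_ensemble(data, train_model, n_models=15)
prediction = bagging_predict(models, 0.9)
```

Each member sees a slightly different resampled training set, so the majority vote averages out the per-model variance, which is the effect the paper exploits with up to 15 fine-tuned BERT models.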
Anthology ID:
2020.trac-1.9
Volume:
Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying
Month:
May
Year:
2020
Address:
Marseille, France
Venues:
LREC | TRAC | WS
Publisher:
European Language Resources Association (ELRA)
Pages:
55–61
Language:
English
URL:
https://www.aclweb.org/anthology/2020.trac-1.9
PDF:
http://aclanthology.lst.uni-saarland.de/2020.trac-1.9.pdf