How Effectively Can Machines Defend Against Machine-Generated Fake News? An Empirical Study

Meghana Moorthy Bhat, Srinivasan Parthasarathy


Abstract
We empirically study the effectiveness of machine-generated fake news detectors by probing the models' sensitivity to different synthetic perturbations at test time. Current machine-generated fake news detectors rely on provenance to determine the veracity of news. Our experiments find that the success of these detectors can be limited, since they are rarely sensitive to semantic perturbations yet very sensitive to syntactic perturbations. We also open-source our code, which we believe can serve as a useful diagnostic tool for evaluating models aimed at fighting machine-generated fake news.
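The evaluation described above can be sketched as a small probe harness: apply a syntactic perturbation (surface form changes, meaning preserved) and a semantic perturbation (meaning flips, surface form nearly unchanged) to an article, then compare a detector's verdicts. This is a minimal illustration, not the paper's actual perturbation suite; the antonym table and the `probe` helper are hypothetical names introduced here for demonstration.

```python
import random

def syntactic_perturbation(text, seed=0):
    """Illustrative syntactic perturbation: swap one random pair of
    adjacent words, changing surface form but not the underlying claim."""
    words = text.split()
    if len(words) < 2:
        return text
    i = random.Random(seed).randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

# Hypothetical antonym table, for illustration only.
ANTONYMS = {"increase": "decrease", "won": "lost", "confirmed": "denied"}

def semantic_perturbation(text):
    """Illustrative semantic perturbation: replace a word with an antonym,
    flipping the claim's meaning with minimal surface change."""
    return " ".join(ANTONYMS.get(w.lower(), w) for w in text.split())

def probe(detector, text):
    """Run a detector (text -> bool, True = fake) on the original and both
    perturbed variants. Per the paper's finding, a provenance-based detector
    tends to flip on syntactic edits but not on semantic ones; a robust
    detector should do the opposite."""
    return {
        "original": detector(text),
        "syntactic": detector(syntactic_perturbation(text)),
        "semantic": detector(semantic_perturbation(text)),
    }
```

For example, a toy keyword detector `lambda t: "decrease" in t` judges "Markets increase today" differently before and after the semantic edit, which is exactly the sensitivity this diagnostic is meant to surface.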
Anthology ID: 2020.insights-1.7
Volume: Proceedings of the First Workshop on Insights from Negative Results in NLP
Month: November
Year: 2020
Address: Online
Venues: EMNLP | insights
Publisher: Association for Computational Linguistics
Pages: 48–53
URL: https://www.aclweb.org/anthology/2020.insights-1.7
DOI: 10.18653/v1/2020.insights-1.7
PDF: http://aclanthology.lst.uni-saarland.de/2020.insights-1.7.pdf