Document Classification for COVID-19 Literature

Bernal Jiménez Gutiérrez, Juncheng Zeng, Dongdong Zhang, Ping Zhang, Yu Su


Abstract
The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset. We find that pre-trained language models outperform other models in both low and high data regimes, achieving a maximum F1 score of around 86%. We note that even the highest-performing models still struggle with label correlation, distraction from introductory text, and generalization to the CORD-19 dataset. Both data and code are available on GitHub.
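The task described above is multi-label classification: each paper can carry several topic labels at once. As a minimal illustration of that setup (not the paper's pre-trained language model approach), the sketch below trains a TF-IDF one-vs-rest logistic regression baseline; the example documents and label set are invented stand-ins, not drawn from LitCovid.

```python
# Hedged sketch: a simple multi-label baseline (TF-IDF + one-vs-rest
# logistic regression). Documents and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "transmission dynamics of the novel coronavirus",
    "clinical treatment outcomes for hospitalized patients",
    "vaccine candidate shows strong immune response in trial",
    "epidemiological forecasting of regional case counts",
    "mechanism of viral entry into host cells",
    "diagnostic accuracy of PCR testing protocols",
]
# Multi-label: a single paper may belong to more than one topic.
labels = [
    {"Transmission"},
    {"Treatment"},
    {"Prevention", "Treatment"},
    {"Epidemic Forecasting"},
    {"Mechanism"},
    {"Diagnosis"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)         # binary indicator matrix, one column per label
X = TfidfVectorizer().fit_transform(docs)

# One independent binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
predictions = clf.predict(X)          # same (n_docs, n_labels) indicator shape
print(mlb.inverse_transform(predictions)[0])
```

A transformer-based model as studied in the paper would replace the TF-IDF features with contextual encodings and use a sigmoid output per label, but the multi-label framing (one binary decision per label) is the same.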
Anthology ID:
2020.nlpcovid19-acl.3
Volume:
Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020
Month:
July
Year:
2020
Address:
Online
Venues:
ACL | NLP-COVID19
Publisher:
Association for Computational Linguistics
URL:
https://www.aclweb.org/anthology/2020.nlpcovid19-acl.3
PDF:
http://aclanthology.lst.uni-saarland.de/2020.nlpcovid19-acl.3.pdf