TLDR: Extreme Summarization of Scientific Documents

Isabel Cachola, Kyle Lo, Arman Cohan, Daniel Weld


Abstract
We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SCITLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SCITLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at https://github.com/allenai/scitldr.
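As a rough illustration of the idea of using titles as an auxiliary training signal, the sketch below mixes title-generation and TLDR-generation pairs into a single shuffled training set. The field names ("abstract", "title", "tldr"), the control tokens, and the build_training_pairs helper are illustrative assumptions for this sketch, not the authors' released CATTS implementation; see the GitHub repository above for the actual code and data.

```python
# Minimal sketch (not the authors' exact pipeline): title generation as an
# auxiliary task alongside TLDR generation, mixed into one training set.
import random

TLDR_TOKEN = "<|TLDR|>"    # hypothetical control token marking TLDR targets
TITLE_TOKEN = "<|TITLE|>"  # hypothetical control token marking title targets

def build_training_pairs(papers, seed=0):
    """Return shuffled (source, target) pairs mixing TLDR and title targets."""
    pairs = []
    for paper in papers:
        # Primary task: generate the TLDR from the paper text.
        pairs.append((paper["abstract"] + " " + TLDR_TOKEN, paper["tldr"]))
        # Auxiliary task: generate the title from the same source text.
        pairs.append((paper["abstract"] + " " + TITLE_TOKEN, paper["title"]))
    random.Random(seed).shuffle(pairs)
    return pairs

# Toy usage with a single made-up record:
papers = [{
    "abstract": "We introduce TLDR generation for scientific papers ...",
    "title": "TLDR: Extreme Summarization of Scientific Documents",
    "tldr": "A new dataset and method for one-sentence summaries of papers.",
}]
for source, target in build_training_pairs(papers):
    print(source, "->", target)
```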
Anthology ID: 2020.findings-emnlp.428
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Venues: EMNLP | Findings
Publisher: Association for Computational Linguistics
Pages: 4766–4777
URL: https://www.aclweb.org/anthology/2020.findings-emnlp.428
DOI: 10.18653/v1/2020.findings-emnlp.428
PDF: http://aclanthology.lst.uni-saarland.de/2020.findings-emnlp.428.pdf
Optional supplementary material: 2020.findings-emnlp.428.OptionalSupplementaryMaterial.zip