Exploring the Boundaries of Low-Resource BERT Distillation

Moshe Wasserblat, Oren Pereg, Peter Izsak


Abstract
In recent years, large pre-trained models have demonstrated state-of-the-art performance in many NLP tasks. However, deploying these models on devices with limited resources is challenging due to the models' high computational cost and memory requirements. Moreover, the need for a considerable amount of labeled training data also hinders real-world deployment. Model distillation has shown promising results for reducing model size, computational load and data requirements. In this paper we test the boundaries of BERT model distillation in terms of model compression, inference efficiency and data scarcity. We show that classification tasks that require capturing general lexical semantics can be successfully distilled into very simple and efficient models that require relatively small amounts of labeled training data. We also show that distillation of large pre-trained models is more effective in real-life scenarios where only limited amounts of labeled training data are available.
Anthology ID:
2020.sustainlp-1.5
Volume:
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | sustainlp
Publisher:
Association for Computational Linguistics
Pages:
35–40
URL:
https://www.aclweb.org/anthology/2020.sustainlp-1.5
DOI:
10.18653/v1/2020.sustainlp-1.5
PDF:
http://aclanthology.lst.uni-saarland.de/2020.sustainlp-1.5.pdf