AdapterDrop: On the Efficiency of Adapters in Transformers

Pages: 7930 - 7946
Published: Aug 26, 2021
Abstract
Transformer models are expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop...
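The abstract describes the core idea: adapters are kept in the upper transformer layers but removed ("dropped") from the lower ones during both training and inference. Below is a minimal sketch of that idea, not the authors' implementation or the AdapterHub API; the names `Adapter`, `EncoderWithAdapterDrop`, and the parameter `drop_n` (number of lower layers whose adapters are removed) are illustrative assumptions.

```python
# Minimal sketch of the AdapterDrop idea (illustrative, not the paper's code):
# bottleneck adapters are inserted after each transformer layer, except for
# the lowest `drop_n` layers, where they are replaced by an identity mapping.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))


class EncoderWithAdapterDrop(nn.Module):
    """Transformer encoder stack with adapters dropped from the lowest layers."""

    def __init__(self, hidden_size: int = 768, num_layers: int = 12, drop_n: int = 5):
        super().__init__()
        self.layers = nn.ModuleList(
            [
                nn.TransformerEncoderLayer(
                    d_model=hidden_size, nhead=12, batch_first=True
                )
                for _ in range(num_layers)
            ]
        )
        # Identity for layers below `drop_n` (adapter dropped), a trainable
        # adapter for the remaining upper layers.
        self.adapters = nn.ModuleList(
            [
                Adapter(hidden_size) if i >= drop_n else nn.Identity()
                for i in range(num_layers)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer, adapter in zip(self.layers, self.adapters):
            # Lower layers run without any adapter computation; upper layers
            # apply their adapter after the transformer layer.
            x = adapter(layer(x))
        return x


if __name__ == "__main__":
    model = EncoderWithAdapterDrop()
    tokens = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
    print(model(tokens).shape)         # torch.Size([2, 16, 768])
```

Because only the adapters (and typically layer norms) are updated during adapter training, removing them from the lower layers also means gradients never need to be propagated through those layers, which is where the training- and inference-time savings come from.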