Knowledge Transfer via Decomposing Essential Information in Convolutional Neural Networks

Volume: 33, Issue: 1, Pages: 366 - 377
Published: Jan 1, 2022
Abstract
Knowledge distillation (KD) extracts knowledge from a "teacher" neural network and transfers it to a small student network to improve the student's performance. It is one of the most popular techniques for making convolutional neural networks (CNNs) lightweight. Many KD algorithms have been proposed recently, but they still cannot properly distill the essential knowledge of the teacher network, and the transfer tends to depend on the...
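For context, the baseline logit-matching form of KD (Hinton et al., 2015) that such methods build on can be sketched as follows. This is a generic PyTorch illustration, not the decomposition-based transfer proposed in this paper; the function name kd_loss and the temperature and alpha values are illustrative assumptions.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.9):
    """Blend softened-logit matching with the usual hard-label cross-entropy."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients keep a comparable magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits: a batch of 8 samples over 10 classes.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)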