Conference papers

Élagage de réseaux profonds de neurones par dégradation sélective des pondérations (Pruning of deep neural networks by selective weight decay)

Abstract : Deep neural networks are the standard in machine learning. However, to achieve the best performance they require millions of trainable parameters, which makes them computationally and memory intensive and therefore ill-suited to certain application contexts such as embedded systems. Pruning parameters during training is a common methodology for reducing these costs, but it introduces problems of its own: sudden performance collapse at high pruning rates, discontinuities between training phases, and so on. In this paper we introduce Selective Weight Decay (SWD), a method inspired by Lagrangian smoothing that allows progressive and continuous pruning during training. On standard datasets, we show that this method achieves the best performance, especially at the highest pruning rates.
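The core idea described in the abstract can be sketched as follows: instead of abruptly masking weights, every training step applies a much stronger weight-decay penalty to the weights currently selected for pruning, driving them smoothly toward zero. The snippet below is a minimal NumPy illustration of that principle, not the paper's actual implementation; the function name `swd_step`, the magnitude-based selection criterion, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def swd_step(weights, grads, lr=0.1, base_decay=1e-4,
             swd_penalty=1e-2, prune_ratio=0.5):
    """One SGD step with a selective-weight-decay-style penalty (sketch).

    Every weight receives the usual L2 decay `base_decay`; the fraction
    `prune_ratio` of weights with the smallest magnitudes additionally
    receives the stronger penalty `swd_penalty`, so targeted weights are
    decayed toward zero continuously rather than masked abruptly.
    """
    flat = np.abs(weights).ravel()
    k = int(prune_ratio * flat.size)
    # Magnitude threshold below which weights are targeted for pruning.
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    selected = np.abs(weights) < threshold
    decay = base_decay + swd_penalty * selected
    return weights - lr * (grads + decay * weights)
```

In a full training loop, `swd_penalty` would be ramped up over the course of training so that targeted weights shrink progressively; by the end, pruning them causes little disruption because they are already near zero.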
Document type :
Conference papers
Contributor : Hugo Tessier
Submitted on : Wednesday, June 15, 2022 - 1:43:04 PM
Last modification on : Friday, August 5, 2022 - 2:54:52 PM

Files produced by the author(s)
  • HAL Id : hal-03695958, version 1


Hugo Tessier, Vincent Gripon, Mathieu Léonardon, Matthieu Arzel, Thomas Hannagan, et al.. Élagage de réseaux profond de neurones par dégradation sélective des pondérations. GRETSI 2022, Sep 2022, Nancy, France. ⟨hal-03695958⟩