Title: Compressing a convolution neural network based on quantization
Authors: Pertsau, D.
Lukashevich, M.
Kupryianava, D.
Keywords: publications of scientists;neural network;quantization-aware training;quantization
Issue Date: 2023
Publisher: Belarusian State University
Citation: Pertsau, D. Compressing a convolution neural network based on quantization / D. Pertsau, M. Lukashevich, D. Kupryianava // Pattern Recognition and Information Processing. Artificial Intelliverse: Expanding Horizons : Proceedings of the 16th International Conference, Minsk, October 17–19, 2023 / ed.: A. Nedzved, A. Belotserkovsky / Belarusian State University. – Minsk, 2023. – P. 269–272.
Abstract: Modern deep neural network models contain a large number of parameters and occupy significant storage. In this paper we experimentally investigate approaches to the compression of a convolutional neural network. The results demonstrate that quantizing the model compresses it efficiently while maintaining high accuracy.
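The quantization the abstract refers to can be illustrated with a minimal sketch of uniform affine (asymmetric) 8-bit quantization of a weight tensor. This is the generic technique, not the authors' exact pipeline; the function names and the NumPy-based layout are illustrative assumptions.

```python
import numpy as np

def quantize_uint8(w):
    """Map float weights to uint8 with an affine scale/zero-point."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized tensor."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)).astype(np.float32)
q, s, z = quantize_uint8(w)
w_hat = dequantize(q, s, z)
# Storage drops from 4 bytes to 1 byte per weight; per-element
# reconstruction error stays within about one quantization step.
assert np.max(np.abs(w - w_hat)) <= s
```

Quantization-aware training (the second keyword) goes further by simulating this round-trip during training so the network learns weights that are robust to the rounding error.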
URI: https://libeldoc.bsuir.by/handle/123456789/53568
Appears in Collections: Публикации в изданиях Республики Беларусь (Publications in journals of the Republic of Belarus)

Files in This Item:
Percev_Compressing.pdf (271.56 kB, Adobe PDF)
