Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/54314
Title: Compressing a convolution neural network based on quantization
Authors: Pertsau, D.
Lukashevich, M.
Kupryianava, D.
Keywords: conference materials;convolution neural network;quantization;quantization-aware training
Issue Date: 2023
Publisher: BSU
Citation: Pertsau, D. Compressing a convolution neural network based on quantization / D. Pertsau, M. Lukashevich, D. Kupryianava // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 269–272.
Abstract: Modern deep neural network models contain a large number of parameters and occupy significant storage. In this paper we experimentally investigate approaches to the compression of convolutional neural networks. The results show that quantization compresses the model efficiently while maintaining high accuracy.
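The compression technique named in the abstract and keywords can be sketched as symmetric 8-bit uniform quantization of the weights. The following is a minimal NumPy illustration of that idea, not the authors' implementation; all function names and values here are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus a scale (symmetric quantization)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the price is a bounded
# rounding error of at most half a quantization step per weight.
max_err = np.abs(weights - restored).max()
```

Quantization-aware training (the third keyword) extends this by simulating the quantize/dequantize round trip during training so the network learns weights that are robust to the rounding error.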
URI: https://libeldoc.bsuir.by/handle/123456789/54314
Appears in Collections:Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023)

Files in This Item:
Pertsau_Compressing.pdf — 431.94 kB, Adobe PDF
