Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/53568
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Pertsau, D.
dc.contributor.author: Lukashevich, M.
dc.contributor.author: Kupryianava, D.
dc.coverage.spatial: Minsk [en_US]
dc.date.accessioned: 2023-11-13T05:54:46Z
dc.date.available: 2023-11-13T05:54:46Z
dc.date.issued: 2023
dc.identifier.citation: Pertsau, D. Compressing a convolution neural network based on quantization / D. Pertsau, M. Lukashevich, D. Kupryianava // Pattern Recognition and Information Processing. Artificial Intelliverse: Expanding Horizons : Proceedings of the 16th International Conference, Minsk, October 17–19, 2023 / ed.: A. Nedzved, A. Belotserkovsky / Belarusian State University. – Minsk, 2023. – P. 269–272. [en_US]
dc.identifier.uri: https://libeldoc.bsuir.by/handle/123456789/53568
dc.description.abstract: Modern deep neural network models contain a large number of parameters and are significant in size. In this paper we experimentally investigate approaches to compressing convolutional neural networks. The results demonstrate that quantization compresses the model efficiently while maintaining high accuracy. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Belarusian State University [en_US]
dc.subject: publications of scientists [en_US]
dc.subject: neural network [en_US]
dc.subject: quantization-aware training [en_US]
dc.subject: quantization [en_US]
dc.title: Compressing a convolution neural network based on quantization [en_US]
dc.type: Article [en_US]
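The abstract describes compressing a convolutional neural network via quantization. As a minimal sketch of the general idea (not the authors' implementation), symmetric post-training int8 quantization of a weight tensor can be illustrated in NumPy; the tensor shape and values here are hypothetical stand-ins for a convolution layer's weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical float32 weight tensor standing in for a conv layer's weights.
w = rng.normal(0.0, 0.05, size=(64, 3, 3, 3)).astype(np.float32)

# Symmetric per-tensor quantization to int8:
# the scale maps the largest absolute weight onto the int8 limit 127.
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the accuracy cost of the compression.
w_dq = w_q.astype(np.float32) * scale

ratio = w.nbytes / w_q.nbytes      # float32 -> int8 gives a 4x size reduction
err = np.abs(w - w_dq).max()       # rounding error is bounded by scale / 2
print(f"compression ratio: {ratio:.0f}x, max abs error: {err:.6f}")
```

Quantization-aware training (one of the record's subject keywords) goes further by simulating this rounding during training so the network can adapt to it, which typically recovers most of the accuracy lost to post-training quantization.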
Appears in Collections: Publications in editions of the Republic of Belarus

Files in This Item:
File: Percev_Compressing.pdf — Size: 271.56 kB — Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.