Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/54314
Full metadata record
DC Field | Value | Language
dc.contributor.author | Pertsau, D. | -
dc.contributor.author | Lukashevich, M. | -
dc.contributor.author | Kupryianava, D. | -
dc.coverage.spatial | Minsk | en_US
dc.date.accessioned | 2024-02-22T06:14:29Z | -
dc.date.available | 2024-02-22T06:14:29Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Pertsau, D. Compressing a convolution neural network based on quantization / D. Pertsau, M. Lukashevich, D. Kupryianava // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 269–272. | en_US
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/54314 | -
dc.description.abstract | Modern deep neural network models contain a large number of parameters and are therefore large in size. In this paper we experimentally investigate approaches to compressing a convolutional neural network. The results show that the model can be quantized efficiently while maintaining high accuracy. | en_US
dc.language.iso | en | en_US
dc.publisher | BSU | en_US
dc.subject | conference proceedings (материалы конференций) | en_US
dc.subject | convolution neural network | en_US
dc.subject | quantization | en_US
dc.subject | quantization-aware training | en_US
dc.title | Compressing a convolution neural network based on quantization | en_US
dc.type | Article | en_US
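The abstract and subject keywords refer to compressing a network by quantizing its weights. As a rough illustration of the general idea (this is a generic sketch of per-tensor symmetric int8 quantization, not the authors' implementation, and the function names are invented for this example):

```python
# Minimal sketch of symmetric per-tensor int8 weight quantization.
# Illustrative only; not taken from the paper.

def quantize_int8(weights):
    """Map float weights to int8 codes with a shared symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each reconstructed weight lies within one quantization step of the original.
assert all(abs(w - a) <= scale for w, a in zip(weights, approx))
```

Storing int8 codes plus one float scale instead of 32-bit floats is what yields the roughly 4x size reduction such methods target; quantization-aware training (the last subject keyword) additionally simulates this rounding during training so accuracy is preserved.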
Appears in Collections:Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023)

Files in This Item:
File | Description | Size | Format
Pertsau_Compressing.pdf | | 431.94 kB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.