DC Field | Value | Language |
dc.contributor.author | Pertsau, D. | - |
dc.contributor.author | Lukashevich, M. | - |
dc.contributor.author | Kupryianava, D. | - |
dc.coverage.spatial | Minsk | en_US |
dc.date.accessioned | 2023-11-13T05:54:46Z | - |
dc.date.available | 2023-11-13T05:54:46Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Pertsau, D. Compressing a convolution neural network based on quantization / D. Pertsau, M. Lukashevich, D. Kupryianava // Pattern Recognition and Information Processing. Artificial Intelliverse: Expanding Horizons : Proceedings of the 16th International Conference, Minsk, October 17–19, 2023 / ed.: A. Nedzved, A. Belotserkovsky / Belarusian State University. – Minsk, 2023. – P. 269–272. | en_US |
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/53568 | - |
dc.description.abstract | Modern deep neural network models contain a large number of parameters and are therefore large in size. In this paper we experimentally investigate approaches to compressing convolutional neural networks. The results demonstrate that quantization compresses the model efficiently while maintaining high accuracy. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Belarusian State University | en_US |
dc.subject | publications of researchers | en_US |
dc.subject | neural network | en_US |
dc.subject | quantization-aware training | en_US |
dc.subject | quantization | en_US |
dc.title | Compressing a convolution neural network based on quantization | en_US |
dc.type | Article | en_US |
Appears in Collections: | Публикации в изданиях Республики Беларусь |
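The abstract describes compressing a convolutional network by quantizing its parameters. As a minimal sketch of the underlying idea (not the authors' code, and independent of any deep-learning framework), the following shows symmetric 8-bit post-training quantization of a weight vector: float32 weights are mapped to int8 values plus a single scale factor, giving roughly 4x smaller storage with a reconstruction error bounded by half the quantization step.

```python
# Illustrative sketch of symmetric int8 weight quantization.
# All names here are hypothetical; the paper's actual pipeline
# (e.g. quantization-aware training) is more involved.

def quantize_int8(weights):
    """Map float weights to int8 codes plus one scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

weights = [0.52, -1.27, 0.03, 0.89, -0.40]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Each code fits in one byte instead of four (float32),
# and |error| <= scale / 2 for every weight.
```

Quantization-aware training, one of the record's subject keywords, refines this scheme by simulating the quantize/dequantize round trip during training so the network learns weights that tolerate the rounding error.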