Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/54457
Title: Neural Networks Interpretation Improvement
Authors: Kroshchanka, A.
Golovko, V.
Keywords: conference materials;deep neural network;pretraining;Explainable AI
Issue Date: 2023
Publisher: BSU
Citation: Kroshchanka, A. Neural Networks Interpretation Improvement / A. Kroshchanka, V. Golovko // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 45–48.
Abstract: The paper studies the interpretability of neural network models, with particular attention to training heavy models with a large number of parameters. A generalized approach to pretraining deep models is proposed; it achieves better final accuracy, makes the model output easier to interpret, and can be used when training on small datasets. The effectiveness of the proposed approach is demonstrated by training deep neural network models on the MNIST dataset. The obtained results can be used to train fully connected layers, as well as other types of layers after a flattening operation is applied.
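Note: the record does not specify the pretraining procedure itself, so the following is only a minimal sketch of what generalized layer-wise pretraining of fully connected layers followed by supervised fine-tuning on MNIST might look like. The autoencoder-based pretraining rule, the 784-256-64 architecture, the optimizer and the epoch counts are assumptions made for illustration, not the authors' method.

# Illustrative sketch (not the authors' exact procedure): greedy layer-wise
# autoencoder pretraining of fully connected layers on MNIST, followed by
# supervised fine-tuning of the whole stack. Architecture, optimizer and
# epoch counts are assumptions made for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

layer_sizes = [784, 256, 64]                      # assumed fully connected stack
encoders = [nn.Linear(i, o) for i, o in zip(layer_sizes[:-1], layer_sizes[1:])]

# Stage 1: unsupervised, layer-by-layer pretraining with small autoencoders.
for depth, enc in enumerate(encoders):
    enc = enc.to(device)
    dec = nn.Linear(enc.out_features, enc.in_features).to(device)  # throw-away decoder
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(2):                            # short demo run
        for x, _labels in loader:
            x = x.view(x.size(0), -1).to(device)
            with torch.no_grad():                 # features from already-pretrained layers
                for lower in encoders[:depth]:
                    x = torch.relu(lower(x))
            recon = dec(torch.relu(enc(x)))
            loss = F.mse_loss(recon, x)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Stage 2: supervised fine-tuning of the pretrained layers plus a new output layer.
classifier = nn.Sequential(
    encoders[0], nn.ReLU(),
    encoders[1], nn.ReLU(),
    nn.Linear(layer_sizes[-1], 10),
).to(device)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(2):
    for x, y in loader:
        x, y = x.view(x.size(0), -1).to(device), y.to(device)
        loss = F.cross_entropy(classifier(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

In this sketch, each layer is first trained to reconstruct the output of the layers below it, and only then is the whole stack fine-tuned with labels; the same two-stage pattern applies when the input to the fully connected part comes from a flattening operation.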
URI: https://libeldoc.bsuir.by/handle/123456789/54457
Appears in Collections:Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023)

Files in This Item:
File: Kroshchanka_Neural.pdf (323.35 kB, Adobe PDF)
