Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/54449
Full metadata record
DC Field | Value | Language
dc.contributor.author | Diamond, J. | -
dc.coverage.spatial | Minsk | en_US
dc.date.accessioned | 2024-03-01T07:40:20Z | -
dc.date.available | 2024-03-01T07:40:20Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Diamond, J. Learnable Global Layerwise Nonlinearities Without Activation Functions / J. Diamond // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 273–278. | en_US
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/54449 | -
dc.description.abstract | In machine learning and neural networks, non-linear transformations have been pivotal in capturing intricate patterns within data. These transformations are traditionally instantiated via activation functions such as the Rectified Linear Unit (ReLU), Sigmoid, and Hyperbolic Tangent (Tanh). In this work, we introduce DiagonalizeGNN, an approach that changes how non-linearities are introduced in Graph Neural Networks (GNNs). Unlike traditional methods that rely on pointwise activation functions, DiagonalizeGNN employs Singular Value Decomposition (SVD) to incorporate global, non-piecewise non-linearities across an entire graph's feature matrix. We provide the formalism of this method and, through empirical validation on a synthetic dataset, demonstrate that our method not only achieves performance comparable to existing models but also offers additional benefits such as higher stability and the potential for capturing more complex relationships. This novel approach opens up new avenues for research and has significant implications for the future of non-linear transformations in machine learning. | en_US
dc.language.iso | en | en_US
dc.publisher | BSU | en_US
dc.subject | conference proceedings | en_US
dc.subject | activation functions | en_US
dc.subject | neural networks | en_US
dc.subject | nonlinearity | en_US
dc.title | Learnable Global Layerwise Nonlinearities Without Activation Functions | en_US
dc.type | Article | en_US
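
The abstract above describes replacing pointwise activations (ReLU, Sigmoid, Tanh) with a single global, learnable non-linearity applied through the SVD of a layer's feature matrix. Below is a minimal sketch of what such a layer could look like, assuming the non-linearity is a learnable polynomial acting on the singular values; the class name, the polynomial form, and all parameters are hypothetical illustrations, and the paper's actual formulation is in the linked PDF.

import torch
import torch.nn as nn

class SVDNonlinearity(nn.Module):
    """Hypothetical global layerwise non-linearity via SVD.

    Decomposes the node-feature matrix H = U S V^T and applies a
    learnable polynomial to the singular values S before reconstructing,
    so the layer is non-linear in H as a whole without using any
    pointwise activation function.
    """

    def __init__(self, degree: int = 3):
        super().__init__()
        # Learnable polynomial coefficients c_0..c_degree (assumed form).
        self.coeffs = nn.Parameter(0.1 * torch.randn(degree + 1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, num_features) feature matrix of one graph.
        u, s, vh = torch.linalg.svd(h, full_matrices=False)
        # s_new_i = sum_k c_k * s_i**k, evaluated for every singular value.
        powers = torch.stack([s ** k for k in range(self.coeffs.numel())], dim=-1)
        s_new = powers @ self.coeffs
        return u @ torch.diag(s_new) @ vh

# Usage: the output has the same shape as the input, but every entry
# depends on the whole matrix through its spectrum.
layer = SVDNonlinearity(degree=3)
h = torch.randn(50, 16)   # 50 nodes, 16 features
out = layer(h)            # (50, 16)

Because the transform acts on the spectrum of the entire feature matrix, it couples all nodes and features at once, which matches the abstract's description of a "global, non-piecewise" non-linearity, in contrast to ReLU or Tanh, which act entrywise.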
Appears in Collections:Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023)

Files in This Item:
File | Description | Size | Format
Justin_Diamond_Learnable.pdf | - | 431.95 kB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.