Title: | Learnable Global Layerwise Nonlinearities Without Activation Functions |
Authors: | Diamond, J. |
Keywords: | conference proceedings;activation functions;neural networks;nonlinearity |
Issue Date: | 2023 |
Publisher: | BSU |
Citation: | Diamond, J. Learnable Global Layerwise Nonlinearities Without Activation Functions / J. Diamond // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 273–278. |
Abstract: | In machine learning and neural networks, non-linear transformations have been pivotal in capturing intricate patterns within data. These transformations are traditionally instantiated via activation functions such as the Rectified Linear Unit (ReLU), Sigmoid, and Hyperbolic Tangent (Tanh). In this work, we introduce DiagonalizeGNN, an approach that changes how non-linearities are introduced in Graph Neural Networks (GNNs). Unlike traditional methods that rely on pointwise activation functions, DiagonalizeGNN employs Singular Value Decomposition (SVD) to incorporate global, non-piecewise non-linearities across an entire graph's feature matrix. We provide a formalism for this method and empirically validate it on a synthetic dataset, demonstrating that our method not only achieves performance comparable to existing models but also offers additional benefits such as higher stability and the potential to capture more complex relationships. This novel approach opens up new avenues for research and carries significant implications for the future of non-linear transformations in machine learning. |
URI: | https://libeldoc.bsuir.by/handle/123456789/54449 |
Appears in Collections: | Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) |