Vestnik KRAUNC. Fiz.-Mat. Nauki. 2022. vol. 38. no. 1. P. 54–73. ISSN 2079-6641


INFORMATION AND COMPUTATION TECHNOLOGIES

MSC 68U20

Research Article

Some aspects of approximation and interpolation of functions by artificial neural networks

V. A. Galkin¹², T. V. Gavrilenko¹², A. D. Smorodinov¹²

¹Surgut Branch of SRISA 628426, Surgut, Energetikov st., 4, Russia
²Surgut State University, 628412, Surgut, Lenina st., 1, Russia

E-mail: Sachenka_1998@mail.ru

The article deals with the approximation and interpolation of the functions f(x) = |x|, f(x) = sin(x), and f(x) = 1/(1+25x²) by neural networks constructed on the basis of the Kolmogorov-Arnold and Tsybenko theorems. Problems that arise when training a neural network whose weight coefficients are initialized randomly are shown. The possibility of training a neural network to work with a manifold is also demonstrated.

Keywords: approximation of functions, interpolation of functions, artificial neural networks, Tsybenko’s theorem, Kolmogorov-Arnold’s theorem.
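The setting described in the abstract can be illustrated with a minimal sketch: a single-hidden-layer sigmoid network (the architecture covered by Tsybenko's theorem) trained by stochastic gradient descent to approximate the Runge function f(x) = 1/(1+25x²), with weights initialized randomly as the abstract discusses. The network width, learning rate, and epoch count below are illustrative assumptions, not values from the paper.

```python
import math
import random

def runge(x):
    """Target function f(x) = 1/(1 + 25x^2) from the abstract."""
    return 1.0 / (1.0 + 25.0 * x * x)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
H = 10  # number of hidden sigmoid units (illustrative choice)

# Random initialization of weights -- the scheme whose pitfalls the paper examines.
w = [random.uniform(-1, 1) for _ in range(H)]  # input-to-hidden weights
b = [random.uniform(-1, 1) for _ in range(H)]  # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]  # hidden-to-output weights
c = 0.0                                        # output bias

# Uniform training grid on [-1, 1].
xs = [i / 20.0 for i in range(-20, 21)]
ys = [runge(x) for x in xs]

def predict(x):
    """One-hidden-layer network: c + sum_j v_j * sigmoid(w_j * x + b_j)."""
    return c + sum(v[j] * sigmoid(w[j] * x + b[j]) for j in range(H))

def mse():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

lr = 0.05
initial = mse()
for epoch in range(1000):
    for x, y in zip(xs, ys):
        e = predict(x) - y            # prediction error for this sample
        c -= lr * e                   # gradient step on the output bias
        for j in range(H):
            s = sigmoid(w[j] * x + b[j])
            v[j] -= lr * e * s        # gradient step on output weight
            g = e * v[j] * s * (1 - s)  # backpropagated pre-activation gradient
            w[j] -= lr * g * x
            b[j] -= lr * g
final = mse()
```

Training reduces the mean squared error from its random-initialization value; how far it falls depends strongly on the draw of the initial weights, which is exactly the sensitivity the article investigates.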

DOI: 10.26117/2079-6641-2022-38-1-54-73

Original article submitted: 22.03.2022

Revision submitted: 04.04.2022

For citation. Galkin V. A., Gavrilenko T. V., Smorodinov A. D. Some aspects of approximation and interpolation of functions by artificial neural networks. Vestnik KRAUNC. Fiz.-mat. nauki. 2022, 38: 1, 54-73. DOI: 10.26117/2079-6641-2022-38-1-54-73

Competing interests. The authors declare that there are no conflicts of interest regarding authorship and publication.

Contribution and Responsibility. The authors contributed to this article. The authors are solely responsible for providing the final version of the article in print and approved the final version of the manuscript.

The content is published under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/deed.ru)

© Galkin V. A., Gavrilenko T. V., Smorodinov A. D., 2022

Funding. The publication was made within the framework of the state task of the Federal State Institution FNTs NIISI RAS (Performance of fundamental scientific research GP 47) on topic No. 0580-2021-0007 «Development of methods for mathematical modeling of distributed systems and corresponding calculation methods».

References

  1. Braun J., Griebel M. On a constructive proof of Kolmogorov’s superposition theorem, Constructive Approximation, 2009. vol. 30, pp. 653. doi:10.1007/s00365-009-9054-2.
  2. Cybenko G. Approximation by Superpositions of a Sigmoidal Function, Mathematics of Control, Signals and Systems, 1989. vol. 2, pp. 303–314.
  3. Sprecher D. A. On the Structure of Continuous Functions of Several Variables, Trans. Amer. Math. Soc., 1965, pp. 340–355.
  4. Funahashi K. On the Approximate Realization of Continuous Mappings by Neural Networks, Neural Networks, 1989. vol. 2, pp. 183–192.
  5. He K., Zhang X., Ren S., Sun J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, arXiv:1502.01852, 2015.
  6. Liang S., Srikant R. Why deep neural networks for function approximation?, Published as a conference paper at ICLR, 2017.
  7. Hanin B. Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations, Mathematics, 2019. vol. 7, 992.
  8. Liu B., Liang Y. Optimal function approximation with ReLU neural networks, Neurocomputing, 2021. vol. 435, pp. 216–227.
  9. Almira J. M., Lopez-de-Teruel P. E., Romero-López D. J., Voigtlaender F. Negative results for approximation using single layer and multilayer feedforward neural networks, Journal of Mathematical Analysis and Applications, 2021. vol. 494, no. 1, 124584.
  10. Guliyev N. J., Ismailov V. E. On the approximation by single hidden layer feedforward neural networks with fixed weights, Neural Networks, 2018. vol. 98, pp. 296–304.
  11. Guliyev N. J., Ismailov V. E. Approximation capability of two hidden layer feedforward neural networks with fixed weights, Neurocomputing, 2018. vol. 316, pp. 262–269.
  12. Kolmogorov A. N. O predstavlenii nepreryvnykh funktsiy mnogikh peremennykh v vide superpozitsii nepreryvnykh funktsiy odnoy peremennoy [On the representation of continuous functions of several variables as superpositions of continuous functions of one variable], Doklady AN SSSR, 1957. vol. 114, pp. 953–956 (In Russian).
  13. Arnold V. I. O predstavlenii funktsiy neskol’kikh peremennykh v vide superpozitsii funktsiy men’shego chisla peremennykh [On the representation of functions of several variables as superpositions of functions of a smaller number of variables], Matematicheskoye prosveshcheniye, 1958. vol. 3, pp. 41–61 (In Russian).

Galkin Valery Alekseevich – Doctor of Physical and Mathematical Sciences, Professor, Surgut State University; Director, Branch of SRISA, Surgut, Russia, ORCID 0000-0002-9721-4026.


Gavrilenko Taras Vladimirovich – PhD (Tech.), docent, Surgut State University; Deputy Director, Branch of SRISA, Surgut, Russia, ORCID 0000-0002-3243-2751.


Smorodinov Aleksandr Denisovich – Postgraduate Student of the Department of Applied Mathematics, Lecturer of the Department of ASOIU, Surgut State University; Engineer of the Department of Biophysics and Neurocybernetics, Branch of SRISA, Surgut, Russia, ORCID 0000-0002-9324-1844.


Download article: Galkin V. A., Gavrilenko T. V., Smorodinov A. D. Some aspects of approximation and interpolation of functions by artificial neural networks.