Research article

Rate of approximation by some neural network operators

  • Received: 26 July 2024 Revised: 21 October 2024 Accepted: 25 October 2024 Published: 07 November 2024
  • MSC : 41A25, 41A30, 47A58

  • First, we construct a new type of feedforward neural network operator on finite intervals and give pointwise and global estimates for the approximation by the new operators. The new operators approximate continuous functions at a rate that cannot be attained by polynomial approximation. Second, we construct a new type of feedforward neural network operator on infinite intervals and estimate its rate of approximation. Finally, we investigate the weighted approximation properties of the new operators on infinite intervals and show that the new neural networks are dense in a very wide class of function spaces. Thus, we demonstrate that approximation by feedforward neural networks has better properties than polynomial approximation on infinite intervals.
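The kind of sigmoidal feedforward operator the abstract describes can be illustrated numerically. The sketch below is not the paper's construction; it is a minimal example of a classical operator of the form F_n(f)(x) = Σ_{k=0}^{n} f(k/n) φ(nx − k), where φ is a bell-shaped function built from the logistic sigmoid (all names and parameter choices here are illustrative assumptions):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def phi(t):
    # Even, bell-shaped basis function; its integer shifts sum to 1,
    # so F_n reproduces constants up to exponentially small tails.
    return 0.5 * (sigmoid(t + 1.0) - sigmoid(t - 1.0))

def nn_operator(f, n, x):
    # F_n(f)(x) = sum_{k=0}^{n} f(k/n) * phi(n*x - k)
    k = np.arange(n + 1)
    return np.sum(f(k / n) * phi(n * x[:, None] - k), axis=1)

# Approximate a smooth function on interior points of [0, 1];
# the error shrinks as n grows, in line with a modulus-of-smoothness rate.
f = np.sin
n = 200
x = np.linspace(0.2, 0.8, 61)
err = np.max(np.abs(nn_operator(f, n, x) - f(x)))
print(err)
```

Evaluation is restricted to interior points here because the truncated sum over k = 0, …, n loses mass near the endpoints; the paper's finite-interval operators are designed precisely to control such boundary behavior.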

    Citation: Bing Jiang. Rate of approximation by some neural network operators[J]. AIMS Mathematics, 2024, 9(11): 31679-31695. doi: 10.3934/math.20241523


  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)