
The Essential Order of Approximation with Weights of Neural Networks
Abstract: This paper establishes the approximation capability of feedforward neural networks with a single hidden layer in the weighted space $L^q_w$, providing both upper and lower bound estimates for the network approximation error. In the sense of weighted approximation, the result reveals the relationship between the order of convergence of the network and the number of hidden-layer units, which provides an important theoretical foundation for the application of feedforward neural networks.
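In the abstract, $L^q_w$ denotes a weighted Lebesgue space whose weight $w$ is, per the keywords, a Jacobi weight. The following sketch spells out what that means and what quantity the upper and lower bounds control; the notation ($w_{\alpha,\beta}$, $\mathcal{N}_n$, $\sigma$) is assumed for illustration only, and the precise definitions, moduli, constants and exponents used in the paper are not given in this record.

\[
w_{\alpha,\beta}(x) = (1-x)^{\alpha}(1+x)^{\beta}, \qquad \alpha,\beta > -1, \quad x \in (-1,1),
\]
\[
\|f\|_{L^q_w} = \left( \int_{-1}^{1} |f(x)|^{q}\, w_{\alpha,\beta}(x)\, \mathrm{d}x \right)^{1/q},
\]
\[
\operatorname{dist}\big(f, \mathcal{N}_n\big)_{L^q_w}
  = \inf_{c_i,\, a_i,\, b_i} \Big\| f - \sum_{i=1}^{n} c_i\, \sigma(a_i x + b_i) \Big\|_{L^q_w},
\]

where $\mathcal{N}_n$ is the class of feedforward networks with one hidden layer of $n$ units and activation $\sigma$. An "essential order" result gives matching upper and lower estimates for this distance in terms of $n$ (typically via a weighted modulus of smoothness of $f$), so that the rate of convergence as the number of hidden units grows is characterized rather than merely bounded from one side.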
Source: Chinese Annals of Mathematics, Series A (数学年刊 A辑), CSCD, PKU Core, 2009, No. 6, pp. 741-750 (10 pages).
Funding: Supported by the National Basic Research Program of China (973 Program, No. 2007CB311000), the National Natural Science Foundation of China (Nos. 10726040, 10701062, 10826081), the Key Project of Science and Technology of the Ministry of Education (No. 108176), the China Postdoctoral Science Foundation (No. 20080431237), and the Natural Science Foundation of the Chongqing Science and Technology Commission (No. CSTC2009BB2306).
Keywords: approximation estimation, neural networks, Jacobi weights
