Journal Articles
3 articles found
1. Convergence of an Online Gradient Learning Algorithm with a Penalty Term and Stochastic Inputs for BP Neural Networks (cited 1 time)
Authors: 鲁慧芳, 吴微, 李正学. Journal of Mathematical Research and Exposition (CSCD, PKU Core), 2007, No. 3, pp. 643-653 (11 pages).
This paper studies the convergence of an online gradient learning algorithm with a penalty term for three-layer BP neural networks. Before each training epoch begins, the training samples are randomly permuted, which makes it easier for the learning process to escape local minima. The paper establishes a monotonicity theorem for the error function together with weak and strong convergence theorems for the algorithm.
Keywords: BP neural network, online gradient method, convergence, penalty term, stochastic inputs
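As context for the scheme this abstract describes, here is a minimal sketch of online (per-sample) gradient training with an L2 penalty term and a random permutation of the samples before every epoch, for a small three-layer BP network. Everything here (network size, learning rate, penalty weight, sigmoid activation) is an illustrative assumption, not code from the paper.

```python
# Sketch only: online BP training with an L2 penalty ("weight decay")
# term and per-epoch random reordering of the training samples.
# All hyperparameters are illustrative, not taken from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online_bp(X, y, hidden=8, eta=0.5, lam=1e-4, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(hidden, d))  # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=hidden)       # hidden -> output weights
    for _ in range(epochs):
        # Random permutation of the samples at the start of each epoch:
        # the "stochastic input" device the abstract credits with
        # helping the iteration escape local minima.
        for i in rng.permutation(n):
            h = sigmoid(W1 @ X[i])    # hidden-layer activations
            out = sigmoid(W2 @ h)     # scalar network output
            delta = (out - y[i]) * out * (1.0 - out)  # output-layer delta
            grad_W2 = delta * h
            grad_W1 = np.outer(delta * W2 * h * (1.0 - h), X[i])
            # lam * W is the gradient of the penalty (lam/2)*||W||^2;
            # this is the term that keeps the weight sequence bounded.
            W2 -= eta * (grad_W2 + lam * W2)
            W1 -= eta * (grad_W1 + lam * W1)
    return W1, W2

# Toy usage: XOR, a standard sanity check for small BP networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
W1, W2 = train_online_bp(X, y)
print(np.round(sigmoid(W2 @ sigmoid(W1 @ X.T)), 2))
```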
2. CONVERGENCE OF ONLINE GRADIENT METHOD WITH A PENALTY TERM FOR FEEDFORWARD NEURAL NETWORKS WITH STOCHASTIC INPUTS (cited 3 times)
Authors: 邵红梅, 吴微, 李峰. Numerical Mathematics: A Journal of Chinese Universities (English Series) (SCIE), 2005, No. 1, pp. 87-96 (10 pages).
The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are input in a stochastic way. The monotonicity of the error function along the iteration and the boundedness of the weights are both guaranteed. We also present a numerical experiment to support our results.
Keywords: feedforward neural networks, convergence, random variables, monotonicity, boundedness, online gradient computation
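For orientation, the standard setup behind results of this kind is written out below. This is reconstructed from the abstract, not quoted from the paper, and the exact per-step scaling of the penalty varies between papers.

```latex
% E_j(w): error on the j-th training example; \lambda > 0: penalty weight.
\[
  E_\lambda(w) \;=\; \sum_{j=1}^{J} E_j(w) \;+\; \frac{\lambda}{2}\,\lVert w \rVert^2,
  \qquad
  w^{k+1} \;=\; w^{k} - \eta_k \left( \nabla E_{j_k}\!\left(w^{k}\right) + \lambda\, w^{k} \right),
\]
% where j_k is the (stochastically chosen) example presented at step k.
% Weak convergence asserts \lVert \nabla E_\lambda(w^k) \rVert \to 0 along the iteration.
```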
3. Convergence of On-Line Gradient Methods for Two-Layer Feedforward Neural Networks
Authors: 李正学, 吴微, 张宏伟. Journal of Mathematical Research and Exposition (CSCD, PKU Core), 2001, No. 2, p. 12 (1 page).
A discussion is given on the convergence of the on-line gradient methods for two-layer feedforward neural networks in general cases. The theories are applied to some usual activation functions and energy functions.
Keywords: on-line gradient method, feedforward neural network, convergence
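By way of comparison with the penalized schemes in the two entries above, the plain on-line update analyzed for a two-layer network (a single weight vector $w$, activation $g$, squared error) takes the form below. This is a reconstruction from the abstract, not a quotation from the paper, which also treats more general energy (loss) functions.

```latex
\[
  w^{k+1} \;=\; w^{k} - \eta_k \left( g\!\left(w^{k}\cdot x^{k}\right) - y^{k} \right) g'\!\left(w^{k}\cdot x^{k}\right) x^{k},
\]
% x^k, y^k: the training pair presented at step k; \eta_k: the learning rate.
```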