Abstract
To address the slow convergence of the traditional back-propagation (BP) algorithm and the poor performance of the networks it trains, a fast hybrid learning algorithm based on the attention model is proposed, drawing on the "selective attention model" from physiology and organically fusing the genetic algorithm with a BP learning algorithm that magnifies the error signal. The core of the algorithm is to partition a single BP training process into many small chips, each of which is trained with error magnification and then subjected to selection under a competitive elimination mechanism. By identifying fast-converging individuals and filtering out individuals trapped in local minima, the algorithm guarantees the success rate of network training and rapidly approaches the globally optimal region. Simulation results show that the algorithm effectively eliminates the training failures caused by the randomness of the initial weights in the traditional BP algorithm, alleviates the slow late-stage training caused by saturation regions (flat spots), and markedly improves the convergence precision and generalization ability of the network without adding hidden-layer nodes. This gives neural networks broader application prospects in many practical classification problems.
A hybrid algorithm based on the attention model (HAAM) is proposed to speed up the training of back-propagation neural networks and improve their performance. The algorithm combines the genetic algorithm with a BP algorithm based on a magnified error signal. The key to the algorithm lies in partitioning the BP training process into many chips, each trained by the BP algorithm. The chips in the same iteration are optimized by the GA operators, and the chips across successive iterations constitute the whole training process. These operations give HAAM the ability to search for the globally optimal solution, and the algorithm is easy to parallelize. Simulation experiments show that the algorithm effectively avoids training failures caused by randomly initialized weights and thresholds, and overcomes the slow convergence that results from flat spots, where the error signal becomes too small. Moreover, the algorithm improves the generalization of the BP network by raising the training precision instead of adding hidden neurons.
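The training loop the abstract describes can be summarized as: run a population of networks through short BP "chips" with a magnified error signal, then let a competitive GA-style selection step keep the fast-converging individuals and discard those stuck in local minima or flat spots. The following is a minimal sketch of that loop on a toy XOR task, assuming a tiny 2-3-1 network; the class name, chip length, error gain, population size, and mutation scale are all illustrative placeholders, not the paper's actual operators or error-magnification formula.

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Net:
    """A tiny 2-3-1 feed-forward network with random initial weights."""
    def __init__(self):
        self.w1 = rng.normal(size=(2, 3))
        self.w2 = rng.normal(size=(3, 1))

    def forward(self, x):
        self.h = sigmoid(x @ self.w1)
        return sigmoid(self.h @ self.w2)

    def train_chip(self, x, y, epochs=50, lr=0.5, gain=2.0):
        """One 'chip': a short run of plain BP with the output error
        multiplied by `gain` -- a crude stand-in for the paper's
        error-magnification scheme against flat spots."""
        for _ in range(epochs):
            out = self.forward(x)
            delta_out = gain * (y - out) * out * (1 - out)
            delta_h = (delta_out @ self.w2.T) * self.h * (1 - self.h)
            self.w2 += lr * self.h.T @ delta_out
            self.w1 += lr * x.T @ delta_h
        return float(np.mean((y - self.forward(x)) ** 2))

# Toy task: XOR.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

pop = [Net() for _ in range(8)]
for chip in range(20):
    # Train every individual for one chip, then rank by residual error.
    losses = [net.train_chip(x, y) for net in pop]
    order = np.argsort(losses)
    survivors = [pop[i] for i in order[: len(pop) // 2]]
    # Competitive elimination: fast-converging individuals survive and
    # spawn perturbed clones (a stand-in for the GA operators), replacing
    # individuals presumed stuck in local minima or flat spots.
    children = []
    for net in survivors:
        child = copy.deepcopy(net)
        child.w1 += rng.normal(scale=0.1, size=child.w1.shape)
        child.w2 += rng.normal(scale=0.1, size=child.w2.shape)
        children.append(child)
    pop = survivors + children

best = min(pop, key=lambda net: float(np.mean((y - net.forward(x)) ** 2)))
print(best.forward(x).round(2))  # should approach [[0],[1],[1],[0]]
```

Because each chip trains every individual independently, the per-chip BP runs can be distributed across workers, which is what makes the scheme easy to parallelize as the abstract notes.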
Source
《软件学报》
EI
CSCD
Peking University Core Journals (北大核心)
2005, No. 6, pp. 1073-1080 (8 pages)
Journal of Software
Funding
National Natural Science Foundation of China
Keywords
back-propagation algorithm
artificial neural network
attention model
genetic algorithm
Flat-Spots
local optimum