Abstract
The sequential minimal optimization (SMO) algorithm is currently a very effective method for training support vector machines (SVM), but its training speed becomes very slow on large-scale datasets. This paper first analyzes how the objective function value changes during SMO iterations and then proposes two improvement strategies: taking the change in the objective function value as the termination condition of the algorithm, and modifying SMO's loop condition in the later stage of the iteration. Experimental results on several well-known benchmark datasets show that the proposed method greatly reduces SMO's training time and is especially suitable for large-scale problems.
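To make the stopping rule concrete, the following is a minimal sketch (not the authors' code) of a simplified SMO loop that terminates once the improvement of the dual objective W(α) over a full sweep falls below a threshold, in the spirit of the objective-change criterion described above. The linear kernel, the random choice of the second multiplier, and the parameter names eps_obj and tol_kkt are illustrative assumptions.

```python
# Sketch only: simplified SMO with an objective-value-based stopping rule.
# Assumes X is an (n, d) float array and y an (n,) array of +/-1 labels.
import numpy as np

def dual_objective(alpha, y, K):
    """Dual objective W(alpha) = sum(alpha) - 0.5 * sum_ij alpha_i alpha_j y_i y_j K_ij."""
    return alpha.sum() - 0.5 * (alpha * y) @ K @ (alpha * y)

def simplified_smo(X, y, C=1.0, tol_kkt=1e-3, eps_obj=1e-6, max_sweeps=200):
    n = len(y)
    K = X @ X.T                       # linear kernel (illustrative assumption)
    alpha = np.zeros(n)
    b = 0.0
    prev_obj = dual_objective(alpha, y, K)
    rng = np.random.default_rng(0)

    for sweep in range(max_sweeps):
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            # Work on alpha_i only if it violates the KKT conditions.
            if (y[i] * Ei < -tol_kkt and alpha[i] < C) or (y[i] * Ei > tol_kkt and alpha[i] > 0):
                j = rng.integers(n - 1)
                j = j + 1 if j >= i else j        # random second multiplier j != i (heuristic)
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if L == H or eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-7:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Update the threshold b from the two changed multipliers.
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
                changed += 1
        # Objective-change stopping rule: exit when a full sweep improves
        # the dual objective by less than eps_obj (or changes nothing).
        obj = dual_objective(alpha, y, K)
        if abs(obj - prev_obj) < eps_obj or changed == 0:
            break
        prev_obj = obj
    return alpha, b
```

The point of the sketch is the final check: instead of iterating until every multiplier satisfies the KKT conditions within tol_kkt, the loop also stops as soon as further sweeps no longer improve the dual objective appreciably, which is what saves time in the later stage on large samples.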
Source
《广东技术师范学院学报》
2008, No. 9, pp. 30-33 (4 pages)
Journal of Guangdong Polytechnic Normal University
Funding
National Natural Science Foundation of China (10471045, 60433020)
Natural Science Foundation of Guangdong Province (970472, 000463, 04020079)
Fok Ying Tung Foundation (91005)
Humanities and Social Sciences Foundation of the Ministry of Education (2005-241)
Science and Technology Program of Guangdong Province (2005B10101010)
Science and Technology Program of Tianhe District, Guangzhou (051G041)
Natural Science Foundation of South China University of Technology (B13-E5050190)
Keywords
support vector machine
sequential minimal optimization (SMO) algorithm
variation of the objective function value