
SiamVGG Network Target Tracking Algorithm with Anti-occlusion Mechanism
Abstract: Anti-occlusion is a highly challenging problem in video object tracking. When the target is partially or completely occluded, the tracking model drifts and the target is lost. To address this problem, this paper proposes a SiamVGG network target tracking algorithm with an anti-occlusion mechanism. By analyzing how the peak value and the connected regions of the network's output confidence map vary, the tracker switches among four modes: normal tracking, partial occlusion, full occlusion, and occlusion loss, and a different tracking strategy is selected for each mode. Compared with other tracking algorithms, the proposed algorithm adopts the SiamVGG network as the tracking framework and explicitly analyzes and corrects for occlusion, effectively avoiding target loss under occlusion. Experiments on the OTB-50, OTB-100, and OTB-2013 benchmark datasets verify the effectiveness and robustness of the proposed algorithm against occlusion.
Authors: Chen Fujian; Xie Weixin (Key Laboratory of ATR National Defense Science and Technology, Shenzhen University, Shenzhen, Guangdong 518060, China)
Source: Journal of Signal Processing (《信号处理》, CSCD, Peking University Core Journal), 2020, No. 4, pp. 562-571 (10 pages)
Keywords: anti-occlusion; SiamVGG network; target tracking; peak value; connected domain
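
As a reading aid, the following is a minimal, illustrative sketch (not the authors' published code) of the mode-selection idea described in the abstract: the occlusion state is inferred from the peak value and the number of connected regions in the SiamVGG response (confidence) map. All thresholds, names, and per-mode strategies below are assumptions made for illustration.

import numpy as np
from scipy import ndimage

# Illustrative sketch only -- thresholds and strategies are assumptions,
# not the values published in the paper.
PEAK_NORMAL = 0.6   # assumed: peak above this -> reliable response
PEAK_LOST = 0.2     # assumed: peak below this -> target considered lost

def classify_occlusion(confidence_map: np.ndarray, region_thresh: float = 0.5) -> str:
    """Return one of: 'normal', 'partial_occlusion', 'full_occlusion', 'lost'."""
    peak = float(confidence_map.max())
    # Binarize the map around the peak and count connected high-response blobs;
    # several blobs or a weak, fragmented response hints at occlusion.
    binary = confidence_map >= region_thresh * peak
    _, num_regions = ndimage.label(binary)

    if peak >= PEAK_NORMAL and num_regions == 1:
        return "normal"             # track as usual, update the template
    if peak >= PEAK_NORMAL and num_regions > 1:
        return "partial_occlusion"  # keep tracking, freeze template updates
    if peak >= PEAK_LOST:
        return "full_occlusion"     # hold last position, enlarge the search area
    return "lost"                   # switch to re-detection over a wider region

In the paper's scheme, each mode then drives a different tracking strategy; the exact thresholds and strategies are described in the full text.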
