Robust visual tracking based on scale invariance and deep learning
Author affiliations: Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China; Department of Electronics Engineering, Pusan National University, Busan 46241, Korea
Publication: Frontiers of Computer Science
Year/Volume/Issue: 2017, Vol. 11, No. 2
Pages: 230-242
Subject classification: 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 080401 [Engineering - Precision Instruments and Machinery]; 0804 [Engineering - Instrument Science and Technology]; 080402 [Engineering - Measurement Technology and Instruments]; 0811 [Engineering - Control Science and Engineering]
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61320106006, 61532006, 61502042)
Keywords: visual tracking; SURF; mean shift; particle filter; neural network
Abstract: Visual tracking is a popular research area in computer vision that is difficult to realize because of challenges such as scale and illumination changes, rotation, fast motion, and occlusion. Consequently, research in this area focuses on making tracking algorithms adapt to these changes in order to achieve stable and accurate visual tracking. This paper proposes a visual tracking algorithm that integrates the scale invariance of SURF features with deep learning to enhance tracking robustness when the size of the tracked object changes significantly. A particle filter is used for motion estimation. The confidence of each particle is computed via a deep neural network, and the particle-filter result is verified and corrected by mean shift because of its computational efficiency and insensitivity to external interference. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods across the challenging factors in visual tracking, especially scale variation.
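For illustration only, below is a minimal sketch (not the authors' implementation) of the tracking loop the abstract describes: particles are propagated by a particle filter, each particle is weighted by a confidence score, and the resulting estimate is refined by a mean-shift step. The helpers score_patch and extract_patch, the random-walk motion model, and the synthetic frame are hypothetical placeholders; the paper itself uses SURF features and a deep neural network for the confidence.

import numpy as np

rng = np.random.default_rng(0)

def score_patch(patch):
    """Stand-in for the deep-network confidence score; here just mean intensity."""
    return float(patch.mean())

def extract_patch(frame, x, y, w, h):
    """Crop a clipped rectangular patch centred on (x, y)."""
    H, W = frame.shape[:2]
    x0, y0 = max(int(x - w / 2), 0), max(int(y - h / 2), 0)
    x1, y1 = min(int(x + w / 2), W), min(int(y + h / 2), H)
    return frame[y0:y1, x0:x1] if x1 > x0 and y1 > y0 else np.zeros((1, 1))

def track_frame(frame, particles, box_wh, motion_std=4.0):
    """One step: propagate particles, weight by confidence, resample, refine by mean shift."""
    w, h = box_wh
    # Propagate particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Confidence of each particle (the paper computes this with a deep neural network).
    conf = np.array([score_patch(extract_patch(frame, px, py, w, h))
                     for px, py in particles])
    weights = conf + 1e-6
    weights /= weights.sum()
    estimate = weights @ particles  # weighted-mean state (x, y)
    # Resample particles in proportion to their weights.
    particles = particles[rng.choice(len(particles), size=len(particles), p=weights)]
    # Mean-shift refinement: move the estimate toward the local intensity centroid.
    for _ in range(5):
        patch = extract_patch(frame, estimate[0], estimate[1], w, h)
        mass = patch.sum()
        if mass <= 0:
            break
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        dx = (xs * patch).sum() / mass - patch.shape[1] / 2
        dy = (ys * patch).sum() / mass - patch.shape[0] / 2
        estimate = estimate + np.array([dx, dy])
    return estimate, particles

# Toy usage: track a bright blob in a synthetic frame.
frame = np.zeros((120, 160))
frame[40:60, 70:90] = 1.0  # object roughly centred at (80, 50)
particles = rng.normal([80.0, 50.0], 5.0, size=(200, 2))
center, particles = track_frame(frame, particles, box_wh=(20, 20))
print("estimated center:", center)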