Robust and efficient edge-based visual odometry
Author affiliations: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Published in: Computational Visual Media
Year/Volume/Issue: 2022, Vol. 8, No. 3
Pages: 467-481
Subject classification: 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0802 [Engineering - Mechanical Engineering]
Funding: National Key R&D Program of China under Grant No. 2018YFB2100601; National Natural Science Foundation of China under Grant Nos. 61872024 and 61702482
Keywords: visual odometry (VO); edge structure; distance transform; low-texture
Abstract: Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track camera motion. In contrast to direct methods, we choose reprojection error to construct the optimization energy, which can effectively cope with illumination changes. A distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust.
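The abstract mentions building a distance transform map on top of per-frame edge detection so that edge alignment does not require a per-point nearest-edge search. The snippet below is a minimal sketch of that idea using OpenCV; the Canny detector, thresholds, and the random test frame are illustrative assumptions, not the paper's actual pipeline or weighting scheme.

```python
import cv2
import numpy as np

def edge_distance_transform(gray, low_thresh=50, high_thresh=150):
    """Detect edges and build a distance transform map whose value at each
    pixel is the Euclidean distance to the nearest edge pixel.

    A projected 3D edge point can then be scored by sampling this map,
    which gives its distance to the closest observed edge in O(1) instead
    of searching over all edge pixels.
    """
    # Per-frame edge detection (Canny is a stand-in edge detector here).
    edges = cv2.Canny(gray, low_thresh, high_thresh)

    # cv2.distanceTransform measures the distance to the nearest zero pixel,
    # so invert the edge mask: edge pixels become 0, background stays non-zero.
    non_edge = cv2.bitwise_not(edges)
    dist_map = cv2.distanceTransform(non_edge, cv2.DIST_L2, 5)
    return edges, dist_map

if __name__ == "__main__":
    # Hypothetical grayscale frame; replace with a real video frame.
    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
    edges, dist_map = edge_distance_transform(frame)
    print(edges.shape, dist_map.dtype, float(dist_map.max()))
```

Precomputing this map once per frame is what makes edge-based tracking cheap: every residual evaluation during pose optimization reduces to a bilinear lookup in `dist_map` rather than an explicit edge correspondence search.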