Recurrent 3D attentional networks for end-to-end active object recognition
Author affiliations: School of Computer, National University of Defense Technology; Department of Computer Science and Electrical & Computer Engineering, University of Maryland; Visual Computing Research Center, Shenzhen University
Publication: Computational Visual Media (计算可视媒体(英文版))
Year/Volume/Issue: 2019, Vol. 5, No. 1
Pages: 91-103
Subject classification: 12 [Management]; 1201 [Management Science and Engineering]; 080202 [Mechatronic Engineering]; 081104 [Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0804 [Instrument Science and Technology]; 0835 [Software Engineering]; 0802 [Mechanical Engineering]; 0811 [Control Science and Engineering]; 0812 [Computer Science and Technology]
Funding: Supported by the National Natural Science Foundation of China (Nos. 61572507, 61622212, and 61532003) and the China Scholarship Council
Keywords: active object recognition; recurrent neural network; next-best-view; 3D attention
Abstract: Active vision is inherently attention-driven: an agent actively selects views to attend to in order to rapidly perform a vision task while improving its internal representation of the scene being observed. Inspired by the recent success of attention-based models in 2D vision tasks based on single RGB images, we address multi-view depth-based active object recognition using an attention mechanism, by use of an end-to-end recurrent 3D attentional network. The architecture takes advantage of a recurrent neural network to store and update an internal representation. Our model, trained with 3D shape datasets, is able to iteratively attend to the best views of a target object in order to recognize it. To realize 3D view selection, we derive a 3D spatial transformer network. It is differentiable, allowing training with backpropagation, and so achieves much faster convergence than the reinforcement learning employed by most existing attention-based models. Experiments show that our method, with only depth input, achieves state-of-the-art next-best-view performance both in terms of time taken and recognition accuracy.
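The abstract describes a glimpse-then-update loop: at each step the agent observes a depth view, a recurrent unit updates the internal representation, and a differentiable module regresses the next view, with classification performed from the final state. The following is a minimal NumPy sketch of that control flow only; all dimensions, weight matrices, the view parameterization, and the `render_depth_features` stand-in are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper)
FEAT, HID, N_CLASSES = 64, 32, 10

# Random placeholder weights standing in for trained parameters
W_enc = rng.standard_normal((FEAT, HID)) * 0.1   # view feature -> hidden
W_h = rng.standard_normal((HID, HID)) * 0.1      # recurrent transition
W_view = rng.standard_normal((HID, 3)) * 0.1     # hidden -> next view params
W_cls = rng.standard_normal((HID, N_CLASSES)) * 0.1

def render_depth_features(view):
    """Stand-in for rendering a depth image at `view` and encoding it
    with a CNN; here it just returns a random feature vector."""
    return rng.standard_normal(FEAT)

def glimpse(h, view):
    """One glimpse: observe the current view, update the recurrent
    state, and regress a next-view proposal (all ops differentiable,
    so in a real model gradients would flow through view selection)."""
    x = render_depth_features(view)
    h = np.tanh(x @ W_enc + h @ W_h)   # recurrent state update
    next_view = np.tanh(h @ W_view)    # differentiable view proposal
    return h, next_view

h = np.zeros(HID)
view = np.zeros(3)            # initial viewpoint parameters
for _ in range(4):            # a few active glimpses
    h, view = glimpse(h, view)

# Classify from the final recurrent state (softmax over classes)
logits = h @ W_cls
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (10,)
```

In the paper the view-selection step is realized by a 3D spatial transformer rather than the toy regressor above; the point of the sketch is that every step is a smooth function of the state, which is what allows plain backpropagation instead of reinforcement learning.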