Learning Hand Latent Features for Unsupervised 3D Hand Pose Estimation
Author affiliations: School of Information Science and Technology, University of Science and Technology of China, 230026, China; Sokoine University of Agriculture, Morogoro, 3167, Tanzania; College of Information and Communication Technology, University of Dar es Salaam, Dar es Salaam, 33335, Tanzania
Published in: Journal of Autonomous Intelligence
Year/Volume/Issue: 2019, Vol. 2, No. 1
Pages: 1-10
Subject classification: 12 [Management Science]; 1201 [Management Science and Engineering (degrees awardable in Management or Engineering)]; 08 [Engineering]
Funding: Chinese Academy of Sciences and The World Academy of Sciences; Fundamental Research Funds for the Central Universities (WK2350000002)
Keywords: Hand Pose Estimation; Convolutional Neural Networks; Recurrent Neural Networks; Human-machine Interaction; Predictive Coding; Unsupervised Learning
Abstract: Recent hand pose estimation methods require large amounts of annotated training data to extract the dynamic information from a hand image sequence. However, precise and dense annotation of real data is difficult to come by, and the amount of information passed to the training algorithm is significantly reduced. This paper presents an approach to developing a hand pose estimation system which can accurately regress a 3D pose in an unsupervised manner. The whole process is performed in three stages. First, the hand is modelled by a novel latent tree dependency model (LTDM) which transforms internal joint locations into an explicit representation. Second, we perform predictive coding of image sequences of hand poses in order to capture latent features underlying a given image without supervision; a mapping is then performed between an image depth and a generated representation. Third, the hand joints are regressed using convolutional neural networks to finally estimate the latent pose given some depth images. In addition, an unsupervised error term, which is part of the recurrent architecture, ensures smooth estimation of the final pose. To demonstrate the performance of the proposed system, a complete experiment was conducted on three challenging public datasets: ICVL, MSRA, and NYU. The empirical results show the strong performance of our method, which is comparable to or better than state-of-the-art approaches.
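The abstract outlines a three-stage pipeline: a latent hand model, predictive coding over image sequences, and CNN-based joint regression with an unsupervised error term inside a recurrent architecture. The sketch below is only a minimal illustration of that general idea, not the authors' implementation: a CNN encoder extracts latent features from depth frames, a GRU predicts the next latent state (the prediction error acting as the unsupervised term), and a linear head regresses 3D joint coordinates. All module names, layer sizes, and the 21-joint layout are hypothetical assumptions.

```python
# Minimal, illustrative sketch (not the paper's implementation) of an
# unsupervised predictive-coding pipeline for 3D hand pose estimation.
import torch
import torch.nn as nn

NUM_JOINTS = 21  # assumed joint count; ICVL/MSRA/NYU use different layouts


class DepthEncoder(nn.Module):
    """CNN mapping a single-channel depth frame to a latent feature vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class PredictivePoseNet(nn.Module):
    """Recurrent predictive-coding module plus a 3D joint regressor."""
    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        self.encoder = DepthEncoder(latent_dim)
        self.rnn = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.predict_latent = nn.Linear(hidden_dim, latent_dim)   # next-frame latent
        self.regress_pose = nn.Linear(hidden_dim, NUM_JOINTS * 3)  # 3D joints

    def forward(self, depth_seq):
        # depth_seq: (batch, time, 1, H, W)
        b, t = depth_seq.shape[:2]
        latents = self.encoder(depth_seq.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.rnn(latents)
        # Unsupervised error: how well the state at time t predicts the latent at t+1.
        pred_next = self.predict_latent(hidden[:, :-1])
        pred_error = (pred_next - latents[:, 1:]).pow(2).mean()
        poses = self.regress_pose(hidden).view(b, t, NUM_JOINTS, 3)
        return poses, pred_error


if __name__ == "__main__":
    model = PredictivePoseNet()
    seq = torch.randn(2, 8, 1, 96, 96)   # toy batch of depth sequences
    poses, err = model(seq)
    print(poses.shape, err.item())       # torch.Size([2, 8, 21, 3]) and a scalar
```

In this reading, the prediction error over latent features can be minimised without pose labels, while the regression head is only a stand-in for the CNN-based joint estimation described in the abstract.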