Interactive method research of dual mode information coordination integration for astronaut gesture and eye movement signals based on hybrid model

Authors: ZHUANG HongChao; XIA YiLu; WANG Ning; LI WeiHua; DONG Lei; LI Bo

Affiliations: School of Mechanical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China; School of Information Technology Engineering, Tianjin University of Technology and Education, Tianjin 300222, China; School of Automotive Engineering, Harbin Institute of Technology (Weihai), Weihai 264209, China; Tianjin Institute of Aerospace Mechanical and Electrical Equipment, Tianjin 300458, China; School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150000, China

Published in: Science China (Technological Sciences)

Year/Volume/Issue: 2023, Vol. 66, No. 6

Pages: 1717-1733

Subject classification: 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Funding: Supported by the National Natural Science Foundation of China (Grant No. 51505335), the Industry-University Cooperation Collaborative Education Project of the Department of Higher Education of the Chinese Ministry of Education (Grant No. 202102517001), the Tianjin Postgraduate Scientific Research Innovation Project (Special Project of Intelligent Network Vehicle Connection) (Grant No. 2021YJSO2S33), the Tianjin Postgraduate Scientific Research Innovation Project (Grant No. 2021YJSS216), and the Doctor Startup Project of Tianjin University of Technology and Education (Grant No. KYQD 1806).

Keywords: human-robot interaction; gesture and eye movement; hybrid model; YOLOv4; CBAM

Abstract: A lightweight human-robot interaction model with high real-time performance, high accuracy, and strong anti-interference capability can be better applied to future lunar surface exploration and construction work. Based on feature information input from a monocular camera, the acquisition, processing, and fusion of the astronaut gesture and eye-movement interaction signals can be performed. Compared with a single mode, a bimodal collaborative human-robot interaction model can issue complex interactive commands more efficiently. The target detection model is optimized by inserting an attention mechanism into YOLOv4 and filtering image motion blur. The central coordinates of the pupils are identified by a neural network to realize human-robot interaction in the eye-movement mode. The astronaut gesture signal and eye-movement signal are fused at the end of the collaborative model to achieve complex command interaction with a lightweight model. The dataset used for network training is augmented and extended to simulate a realistic lunar-surface interaction environment. The human-robot interaction effects of complex commands in the single mode are compared with those under bimodal collaboration. The experimental results show that the concatenated interaction model of astronaut gesture and eye-movement signals exploits the bimodal interaction signal better, discriminates complex interaction commands more quickly, and, owing to its stronger feature-mining ability, has stronger signal anti-interference capability. Compared with command interaction using the single gesture modality or the single eye-movement modality, the interaction time of the bimodal collaborative model is about 79% to 91% shorter than that of single-mode interaction. Regardless of the image interference applied, the overall judgment accuracy of the proposed model is maintained at about 83% to 97%. The effectiveness of the proposed method is verified.
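
The record contains no code, but the CBAM named in the keywords is the published Convolutional Block Attention Module (Woo et al., 2018), which can be inserted after a convolutional stage of a YOLOv4 backbone without changing feature-map shapes. Below is a minimal PyTorch sketch of such a block, assuming the standard reduction ratio of 16 and the 7x7 spatial kernel from the original CBAM paper; the article's actual insertion points and hyperparameters may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: avg- and max-pooled descriptors share one MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        return torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))

class SpatialAttention(nn.Module):
    """Spatial attention: channel-wise avg/max maps through one 7x7 conv."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """CBAM: channel attention then spatial attention, applied
    multiplicatively to the input feature map (shape is preserved)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)

# Shape-preserving, so it can wrap a backbone feature level in place,
# e.g. a hypothetical 52x52 YOLOv4 feature map with 256 channels:
cbam = CBAM(channels=256)
out = cbam(torch.randn(1, 256, 52, 52))
print(out.shape)  # torch.Size([1, 256, 52, 52])
```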
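
The abstract also states that the gesture and eye-movement signals are fused "at the end of the collaborative model". As a rough sketch of that kind of late fusion, the snippet below concatenates a gesture embedding and a gaze embedding and classifies the composite command; the feature dimensions, command count, and head architecture are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Concatenate per-modality features at the end of the pipeline and
    classify the composite command. All dimensions are illustrative."""
    def __init__(self, gesture_dim: int = 128, gaze_dim: int = 32, n_commands: int = 16):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(gesture_dim + gaze_dim, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, n_commands),
        )

    def forward(self, gesture_feat, gaze_feat):
        # End-of-model fusion: simple concatenation of the two modal signals.
        fused = torch.cat([gesture_feat, gaze_feat], dim=-1)
        return self.classifier(fused)

# Dummy batch of 4 samples with the hypothetical embedding sizes above.
head = LateFusionHead()
logits = head(torch.randn(4, 128), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 16])
```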
