
Learning discriminative representation with global and fine-grained features for cross-view gait recognition

Authors: Jing Xiao, Huan Yang, Kun Xie, Jia Zhu, Ji Zhang

Affiliations: School of Computer Science, South China Normal University, Guangzhou, Guangdong, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, Zhejiang, China; School of Sciences, University of Southern Queensland, Toowoomba, Qld, Australia

Published in: CAAI Transactions on Intelligence Technology

Year/Volume/Issue: 2022, Vol. 7, No. 2

Pages: 187-199

Subject classification: 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]; 081101 [Engineering - Control Theory and Control Engineering]; 081102 [Engineering - Detection Technology and Automatic Equipment]

Funding: Key Area Research and Development Program of Guangdong Province, Grant/Award Number: 2019B111101001; Natural Science Foundation of Guangdong Province, Grant/Award Number: 2018A030313318

Keywords: gait; database; grain

Abstract: In this study, we examine the cross-view gait recognition problem. Most existing methods establish a global feature representation based on the whole human body shape. However, they ignore some important details of different parts of the human body. In the latest literature, positioning partial regions to learn fine-grained features has been verified to be effective in human identification. However, these methods only consider coarse fine-grained features and ignore the relationship between neighboring regions. Taking the above insights together, we propose a novel model called GaitGP, which learns both important details through fine-grained features and the relationship between neighboring regions through global features. Our GaitGP model mainly consists of the following two aspects. First, we propose a Channel-Attention Feature Extractor (CAFE) to extract the global features, which aggregates the channel-level attention to enhance the spatial information in a novel convolutional component. Second, we present the Global and Partial Feature Combiner (GPFC) to learn different fine-grained features, and combine them with the global features extracted by the CAFE to obtain the relevant information between neighboring regions. Experimental results on the CASIA gait recognition dataset B (CASIA-B), The OU-ISIR Gait Database, Multi-View Large Population Dataset, and The OU-ISIR gait database show that our method is superior to the state-of-the-art cross-view gait recognition methods.
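The two components the abstract describes can be illustrated with a minimal NumPy sketch: a squeeze-and-excitation-style channel attention step standing in for the CAFE, and a global-plus-horizontal-strip pooling step standing in for the GPFC. This is not the authors' implementation; the weight shapes, the reduction ratio, the max+mean pooling, and the number of parts are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel-level attention over a (C, H, W) feature map (assumed SE-style):
    squeeze each channel to its global average, excite through two FC layers
    with a sigmoid gate, then reweight the channels."""
    squeeze = x.mean(axis=(1, 2))                          # (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates in (0, 1)
    return x * excite[:, None, None]                       # channel reweighting

def global_and_partial_features(x, num_parts=4):
    """Combine one global descriptor with per-strip (fine-grained) descriptors:
    the map is split into horizontal strips, and each strip (and the whole map)
    is pooled with max + mean before concatenation."""
    global_feat = x.max(axis=(1, 2)) + x.mean(axis=(1, 2))            # (C,)
    part_feats = [s.max(axis=(1, 2)) + s.mean(axis=(1, 2))
                  for s in np.array_split(x, num_parts, axis=1)]      # P x (C,)
    return np.concatenate([global_feat] + part_feats)                 # (C*(1+P),)

# Toy feature map and deterministic random FC weights (reduction ratio r=2).
rng = np.random.default_rng(0)
c, h, w, r = 8, 16, 11, 2
x = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c // r, c)) / np.sqrt(c)        # squeeze FC
w2 = rng.standard_normal((c, c // r)) / np.sqrt(c // r)   # excite FC

attended = channel_attention(x, w1, w2)
feat = global_and_partial_features(attended, num_parts=4)
print(feat.shape)  # (40,) = C * (1 global + 4 parts)
```

In a real gait model the final descriptor would feed a metric-learning loss (e.g. triplet loss) so that sequences of the same subject across views embed close together; the sketch stops at feature extraction.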
