PSC-Net: learning part spatial co-occurrence for occluded pedestrian detection

Authors: Jin XIE; Yanwei PANG; Hisham CHOLAKKAL; Rao ANWER; Fahad KHAN; Ling SHAO

Affiliation: Tianjin Key Laboratory of Brain-Inspired Artificial Intelligence, School of Electrical and Information Engineering, Tianjin University

Publication: Science China (Information Sciences) (中国科学:信息科学(英文版))

Year/Volume/Issue: 2021, Vol. 64, No. 2

Pages: 31-43

Subject Classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degree conferrable in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0835 [Engineering - Software Engineering]; 0802 [Engineering - Mechanical Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degree conferrable in Engineering or Science)]

Funding: Supported by the National Natural Science Foundation of China (Grant No. 61632018) and the National Key R&D Program of China (Grant Nos. 2018AAA0102800, 2018AAA0102802)

Keywords: pedestrian detection; graph convolutional network; occlusion; object detection; feature extraction

Abstract: Detecting pedestrians, especially under heavy occlusion, is a challenging computer vision problem with numerous real-world applications. This paper introduces a novel approach, termed PSC-Net, for occluded pedestrian detection. The proposed PSC-Net contains a dedicated module designed to explicitly capture both inter-part and intra-part co-occurrence information of different pedestrian body parts through a graph convolutional network (GCN). Both inter-part and intra-part co-occurrence information contribute towards improving the feature representation for handling varying levels of occlusion, ranging from partial to severe. PSC-Net exploits the topological structure of the pedestrian body and does not require part-based annotations or additional visible bounding-box (VBB) information to learn part spatial co-occurrence. Comprehensive experiments are performed on three challenging datasets: CityPersons, Caltech, and CrowdHuman. In particular, in terms of log-average miss rate and with the same backbone and input scale as the state-of-the-art MGAN, the proposed PSC-Net achieves absolute gains of 4.0% and 3.4% over MGAN on the heavy-occlusion subsets of the CityPersons and Caltech test sets, respectively.
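
The abstract describes a GCN-based module that propagates part-level features over the topology of the pedestrian body. As a rough illustration only, the sketch below shows one graph-convolution step over pooled body-part features; the part list, adjacency matrix, feature dimension, and the PartGCN class name are assumptions made for this example and are not taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class PartGCN(nn.Module):
    """Illustrative GCN layer propagating pedestrian body-part features over a
    fixed part-adjacency graph (a sketch, not the authors' PSC-Net code)."""

    def __init__(self, num_parts: int, feat_dim: int, adj: torch.Tensor):
        super().__init__()
        # Symmetrically normalize the adjacency with self-loops, as in standard GCNs.
        a = adj + torch.eye(num_parts)
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("norm_adj", d.unsqueeze(1) * a * d.unsqueeze(0))
        self.linear = nn.Linear(feat_dim, feat_dim)

    def forward(self, part_feats: torch.Tensor) -> torch.Tensor:
        # part_feats: (batch, num_parts, feat_dim), one pooled feature per body part.
        # Message passing: each part aggregates features from topologically adjacent parts,
        # so visible parts can reinforce the representation of occluded ones.
        return torch.relu(self.linear(self.norm_adj @ part_feats))

# Hypothetical example: 5 coarse parts (head, torso, left arm, right arm, legs),
# each connected to the torso.
adj = torch.tensor([[0, 1, 0, 0, 0],
                    [1, 0, 1, 1, 1],
                    [0, 1, 0, 0, 0],
                    [0, 1, 0, 0, 0],
                    [0, 1, 0, 0, 0]], dtype=torch.float32)
gcn = PartGCN(num_parts=5, feat_dim=256, adj=adj)
out = gcn(torch.randn(2, 5, 256))  # -> (2, 5, 256)
```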
