Siamese transformer with hierarchical concept embedding for fine-grained image recognition

Authors: Yilin LYU, Liping JING, Jiaqi WANG, Mingzhe GUO, Xinyue WANG, Jian YU

Affiliations: School of Computer and Information Technology, Beijing Jiaotong University; Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University; Alibaba Group

Journal: Science China (Information Sciences)

Year/Volume/Issue: 2023, Vol. 66, No. 3

Pages: 188-203

Subject classification: 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0802 [Engineering - Mechanical Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: Partly supported by the National Key Research and Development Program of China (Grant No. 2020AAA0106800); Beijing Natural Science Foundation (Grant Nos. Z180006, L211016); National Natural Science Foundation of China (Grant No. 62176020); CAAI-Huawei MindSpore Open Fund; Chinese Academy of Sciences (Grant No. OEIP-O-202004)

Keywords: fine-grained image recognition; transformer; hierarchical concept embedding; adaptive sampling; Siamese network

Abstract: Distinguishing the subtle differences among fine-grained images from subordinate concepts of a concept hierarchy is a challenging task. In this paper, we propose a Siamese transformer with hierarchical concept embedding (STr HCE), which contains two transformer subnetworks sharing all configurations, and each subnetwork is equipped with hierarchical semantic information at different concept levels for fine-grained image embeddings. In particular, one subnetwork handles coarse-scale patches and learns the discriminative regions with the aid of the transformer's innate multi-head self-attention mechanism. The other subnetwork handles finer-scale patches, which are adaptively sampled from the discriminative regions, to capture subtle yet discriminative visual cues and eliminate redundant information. STr HCE connects the two subnetworks through a score margin adjustor to encourage the most discriminative regions to produce more confident predictions. Extensive experiments conducted on four commonly used benchmark datasets, namely CUB-200-2011, FGVC-Aircraft, Stanford Dogs, and NABirds, empirically demonstrate the superiority of the proposed STr HCE over state-of-the-art baselines.
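The abstract describes a two-branch, weight-sharing transformer in which a coarse-scale branch processes the full image and a fine-scale branch processes crops resampled from its discriminative regions. Purely as an illustration of that weight-sharing idea, and not the authors' actual model, the following minimal PyTorch-style sketch runs a coarse image and a finer-scale crop through one shared encoder; it omits the hierarchical concept embedding, the adaptive sampling, and the score margin adjustor, and all class and parameter names are illustrative assumptions.

# Minimal, hypothetical sketch of the Siamese (weight-sharing) two-scale idea
# described in the abstract; NOT the paper's implementation.
import torch
import torch.nn as nn

class SiameseTwoScaleTransformer(nn.Module):
    def __init__(self, dim=192, depth=4, heads=4, num_classes=200, patch=16):
        super().__init__()
        # Patch embedding: split the input image into non-overlapping patches.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # The SAME encoder processes both scales -> shared (Siamese) weights.
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def encode(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.encoder(tokens).mean(dim=1)                  # pooled embedding

    def forward(self, coarse_img, fine_crop):
        # fine_crop stands in for a region adaptively sampled from the coarse
        # branch's attention in the paper; here it is simply given as input.
        return self.head(self.encode(coarse_img)), self.head(self.encode(fine_crop))

# Toy usage: one full image and one resized crop, both 224x224.
model = SiameseTwoScaleTransformer()
coarse = torch.randn(2, 3, 224, 224)
fine = torch.randn(2, 3, 224, 224)
logits_coarse, logits_fine = model(coarse, fine)
print(logits_coarse.shape, logits_fine.shape)  # torch.Size([2, 200]) for each branch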
