Global-Attention-Based Neural Networks for Vision Language Intelligence

Authors: Pei Liu; Yingjie Zhou; Dezhong Peng; Dapeng Wu

Author Affiliations: College of Computer Science, Sichuan University, Chengdu 610065, China; Sichuan Zhiqian Technology Co., Ltd., Chengdu 610041, China; Peng Cheng Laboratory, Shenzhen 518052, China; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA

Published in: IEEE/CAA Journal of Automatica Sinica

Year/Volume/Issue: 2021, Vol. 8, No. 7

Pages: 1243-1252

Subject Classification: 12 [Management]; 1201 [Management - Management Science and Engineering (Management or Engineering degree)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0835 [Engineering - Software Engineering]; 0802 [Engineering - Mechanical Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree)]

Funding: National Natural Science Foundation of China (61971296, U19A2078, 61836011, 61801315); Ministry of Education and China Mobile Research Foundation Project (MCM20180405); Sichuan Science and Technology Planning Project (2019YFG0495, 2021YFG0301, 2021YFG0317, 2020YFG0319, 2020YFH0186)

Keywords: Global attention; image captioning; latent contribution

Abstract: In this paper, we develop a novel global-attention-based neural network (GANN) for vision language intelligence, specifically, image captioning (language description of a given image). As in many previous works, the encoder-decoder framework is adopted in our proposed model, in which the encoder is responsible for encoding the region proposal features and extracting a global caption feature based on a specially designed module for predicting the caption objects, and the decoder generates captions by taking the obtained global caption feature along with the encoded visual features as inputs for each attention head of the decoder. The global caption feature is introduced for the purpose of exploring the latent contributions of region proposals for image captioning, and further helping the decoder better focus on the most relevant proposals so as to extract more accurate visual features at each time step of caption generation. The GANN is implemented by incorporating the global caption feature into the attention weight calculation phase of the word prediction process in each head of the decoder. In our experiments, we qualitatively analyzed the proposed model and quantitatively evaluated several state-of-the-art schemes together with GANN on the MS-COCO dataset. The results demonstrate the effectiveness of the proposed global attention mechanism for image captioning.
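The abstract describes injecting a global caption feature into the attention weight calculation of each decoder head. The paper's own implementation is not reproduced in this record, so the following is only a minimal sketch of how such a mechanism might look in PyTorch; the class name GlobalAttentionHead, the dimensions, and the additive way the global feature biases the attention logits are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumption, not the authors' implementation): a single
# decoder attention head over region-proposal features whose attention
# logits are biased by a global caption feature.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionHead(nn.Module):
    """One attention head over region proposals, modulated by a global
    caption feature (hypothetical layout; dimensions are assumed)."""

    def __init__(self, d_model: int = 512, d_head: int = 64):
        super().__init__()
        self.q = nn.Linear(d_model, d_head)   # query from the decoder state
        self.k = nn.Linear(d_model, d_head)   # keys from region proposals
        self.v = nn.Linear(d_model, d_head)   # values from region proposals
        self.g = nn.Linear(d_model, d_head)   # projection of the global caption feature
        self.scale = d_head ** -0.5

    def forward(self, dec_state, region_feats, global_feat):
        # dec_state:    (batch, d_model)            decoder hidden state at time t
        # region_feats: (batch, n_regions, d_model) encoded region proposals
        # global_feat:  (batch, d_model)            global caption feature
        q = self.q(dec_state).unsqueeze(1)      # (batch, 1, d_head)
        k = self.k(region_feats)                # (batch, n_regions, d_head)
        v = self.v(region_feats)                # (batch, n_regions, d_head)
        g = self.g(global_feat).unsqueeze(1)    # (batch, 1, d_head)

        # Standard scaled dot-product logits, plus a bias term measuring each
        # region's affinity with the global caption feature, so proposals
        # relevant to the predicted caption objects receive larger weights.
        logits = (q * k).sum(-1) * self.scale   # (batch, n_regions)
        logits = logits + (g * k).sum(-1) * self.scale
        weights = F.softmax(logits, dim=-1)     # attention over regions
        return torch.bmm(weights.unsqueeze(1), v).squeeze(1)  # (batch, d_head)
```

In this sketch the global feature enters as an additive bias on the attention logits; other fusion choices (gating or concatenation before the score projection) would serve the same purpose of steering each head toward the most relevant proposals.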
