Learning Better Word Embedding by Asymmetric Low-Rank Projection of Knowledge Graph

Authors: Fei Tian, Bin Gao, En-Hong Chen, Tie-Yan Liu

Affiliations: School of Computer Science and Technology, University of Science and Technology of China, Hefei 230027, China; Microsoft Research Asia, Beijing 100080, China

Published in: Journal of Computer Science & Technology

Year/Volume/Issue: 2016, Vol. 31, No. 3

Pages: 624-634


Subject Classification: 0808 [Engineering - Electrical Engineering]; 07 [Science]; 081203 [Engineering - Computer Application Technology]; 08 [Engineering]; 070104 [Science - Applied Mathematics]; 0835 [Engineering - Software Engineering]; 0701 [Science - Mathematics]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]

Keywords: natural language processing; word embedding; neural network; knowledge graph

Abstract: Word embedding, which refers to low-dimensional dense vector representations of natural words, has demonstrated its power in many natural language processing tasks. However, it may suffer from the inaccurate and incomplete information contained in the free text corpus used as training data. To tackle this challenge, quite a few studies have leveraged knowledge graphs as an additional information source to improve the quality of word embedding. Although these studies have achieved certain success, they have neglected some important facts about knowledge graphs: 1) many relationships in knowledge graphs are many-to-one, one-to-many, or even many-to-many, rather than simply one-to-one; 2) most head entities and tail entities in knowledge graphs come from very different semantic spaces. To address these issues, in this paper we propose a new algorithm named ProjectNet. ProjectNet models the relationships between head and tail entities after transforming them with different low-rank projection matrices. The low-rank projection allows non-one-to-one relationships between entities, while different projection matrices for head and tail entities allow them to originate from different semantic spaces. The experimental results demonstrate that ProjectNet yields more accurate word embedding than previous studies, and thus leads to clear improvements in various natural language processing tasks.
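The abstract describes the asymmetric low-rank projection idea only at a high level. The following is a minimal illustrative sketch in NumPy of what such a scoring scheme could look like: each projection matrix is factored into two thin matrices so its rank is bounded by k, and the head and tail entities get different projections. The translation-style score, the dimensions, and all variable names are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch of asymmetric low-rank projection scoring.
# The TransE-like translation score and all names/dimensions are assumptions.

rng = np.random.default_rng(0)
d, k = 100, 10          # embedding dimension d, projection rank k (k << d)

# Embeddings for one head entity, one tail entity, and one relation.
h = rng.standard_normal(d)
t = rng.standard_normal(d)
r = rng.standard_normal(d)

# Each d x d projection is factored as (d x k)(k x d), so its rank is at most k.
# Head and tail use *different* projections, reflecting that they may come
# from different semantic spaces.
U_h, V_h = rng.standard_normal((d, k)), rng.standard_normal((k, d))
U_t, V_t = rng.standard_normal((d, k)), rng.standard_normal((k, d))

def score(h, r, t):
    """Plausibility score of a triple (h, r, t): project head and tail with
    their own low-rank matrices, then measure a translation-style distance
    (lower is better). This specific form is an assumption for illustration."""
    h_proj = U_h @ (V_h @ h)   # rank-k projection of the head
    t_proj = U_t @ (V_t @ t)   # rank-k projection of the tail
    return np.linalg.norm(h_proj + r - t_proj)

print(score(h, r, t))
```

Because the projection matrices are not required to be invertible, several distinct entities can be mapped near the same point, which is one way to accommodate the many-to-one and one-to-many relationships mentioned in the abstract.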
