TibetanGoTinyNet: a lightweight U-Net style network for zero learning of Tibetan Go

Authors: Xiali LI; Yanyin ZHANG; Licheng WU; Yandong CHEN; Junzhi YU

Affiliations: Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance, Ministry of Education, Minzu University of China, Beijing 100081, China; School of Information Engineering, Minzu University of China, Beijing 100081, China; Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing 100871, China

Published in: Frontiers of Information Technology & Electronic Engineering

Year/Volume/Issue: 2024, Vol. 25, No. 7

Pages: 924-937

Subject classification: 081203 [Engineering - Computer Application Technology]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0714 [Science - Statistics (degrees conferrable in science or economics)]; 0701 [Science - Mathematics]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in engineering or science)]

Funding: the National Natural Science Foundation of China (Nos. 62276285 and 62236011); the Major Projects of the Social Science Foundation of China (No. 20&ZD279)

Keywords: Zero learning; Tibetan Go; U-Net; Self-attention mechanism; Capsule network; Monte-Carlo tree search

Abstract: The game of Tibetan Go faces a scarcity of expert knowledge and research literature. Therefore, we study the zero learning model of Tibetan Go under limited computing power resources and propose a novel scale-invariant, U-Net style, two-headed output lightweight network, TibetanGoTinyNet. Lightweight convolutional neural networks and a capsule structure are applied to the encoder and decoder of TibetanGoTinyNet to reduce the computational burden and achieve better feature extraction results. Several autonomous self-attention mechanisms are integrated into TibetanGoTinyNet to capture the Tibetan Go board's spatial and global information and select important channels. The training data are generated entirely from self-play games. TibetanGoTinyNet achieves a 62%–78% winning rate against four other U-Net style models, including Res-UNet, Res-UNet Attention, Ghost-UNet, and Ghost Capsule-UNet. It also achieves a 75% winning rate in ablation experiments on the attention mechanism with embedded positional encoding. The model saves about 33% of the training time, with a 45%–50% winning rate across different Monte Carlo tree search (MCTS) simulation counts, when migrated from 9×9 to 11×11 Tibetan Go. Code for our model is available at https://github.com/paulzyy/TibetanGoTinyNet.
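
The abstract describes a fully convolutional, U-Net style network with two output heads (a per-point policy and a scalar value) plus self-attention, the standard ingredients of a zero-learning player. As a hypothetical illustration only (this is not the authors' code; the layer sizes, attention block, and head designs are assumptions, and the capsule layers the paper uses are omitted here), a minimal PyTorch sketch might look like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    # Lightweight spatial self-attention over all board points (assumed form).
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 2, 1)
        self.k = nn.Conv2d(ch, ch // 2, 1)
        self.v = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)          # (b, hw, c/2)
        k = self.k(x).flatten(2)                          # (b, c/2, hw)
        v = self.v(x).flatten(2).transpose(1, 2)          # (b, hw, c)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)
        return x + (attn @ v).transpose(1, 2).reshape(b, c, h, w)

class TinyUNetTwoHead(nn.Module):
    # One-level encoder-decoder with a skip connection and policy/value heads.
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU())
        self.attn = SelfAttention2d(2 * ch)
        self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.policy = nn.Conv2d(2 * ch, 1, 1)             # per-point move logits
        self.value = nn.Conv2d(2 * ch, 1, 1)              # pooled to one scalar

    def forward(self, x):
        e1 = self.enc1(x)                                 # (b, ch, H, W)
        e2 = self.attn(self.enc2(e1))                     # (b, 2ch, ~H/2, ~W/2)
        d = F.interpolate(self.dec(e2), size=e1.shape[-2:])  # decoder upsample
        feat = torch.cat([d, e1], dim=1)                  # U-Net skip connection
        p = self.policy(feat).flatten(1)                  # (b, H*W) policy logits
        v = torch.tanh(F.adaptive_avg_pool2d(self.value(feat), 1)).flatten(1)
        return p, v                                       # value v is in [-1, 1]

# The same weights evaluate both board sizes:
net = TinyUNetTwoHead()
for size in (9, 11):
    p, v = net(torch.zeros(1, 3, size, size))
    print(size, p.shape, v.shape)                         # (1, 81) / (1, 121)

Because every layer is convolutional and the value head ends in global average pooling, the same weights accept any board size; this is the property the 9×9 to 11×11 migration experiment relies on, and the policy length simply tracks the board area (81 vs. 121). In a zero-learning setup, these two heads would guide MCTS during self-play to generate the training data.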
