Offline Pre-trained Multi-agent Decision Transformer

Authors: Linghui Meng, Muning Wen, Chenyang Le, Xiyun Li, Dengpeng Xing, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Yaodong Yang, Bo Xu

Affiliations: Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Shanghai Jiao Tong University, Shanghai 200240, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China; Institute for AI, Peking University, Beijing 100871, China; Department of Computer Science, University College London, London WC1E 6BT, UK

Published in: Machine Intelligence Research

Year/Volume/Issue: 2023, Vol. 20, No. 2

Pages: 233-248

Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology]; 081203 [Engineering - Computer Applications Technology]; 0835 [Engineering - Software Engineering]; 0802 [Engineering - Mechanical Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0828 [Engineering - Agricultural Engineering]; 0903 [Agriculture - Agricultural Resources and Environment]

Funding: Linghui Meng was supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA27030300); Haifeng Zhang was supported in part by the National Natural Science Foundation of China (No. 62206289).

Keywords: pre-training model; multi-agent reinforcement learning (MARL); decision making; transformer; offline reinforcement learning

Abstract: Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no necessity to access the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increased interactions among agents and with the environment. However, in MARL, the paradigm of offline pre-training with online fine-tuning has not been studied, nor are datasets or benchmarks for offline MARL research available. In this paper, we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in the following three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to multiple downstream tasks with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the transformer's ability for sequence modelling and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms the state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
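The abstract frames offline MARL as sequence modelling with a transformer. As a minimal sketch (not the authors' code), the standard decision-transformer formulation conditions a causal transformer on interleaved (return-to-go, state, action) triples per timestep; the function and token names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of decision-transformer-style trajectory tokenization
# for a single agent. All names here are hypothetical; the paper's MADT
# architecture is not reproduced, only the general (R, s, a) sequence idea.

def returns_to_go(rewards):
    """Suffix sums: R_t = sum of rewards from step t to the episode end."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return list(reversed(rtg))

def tokenize_trajectory(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples into one flat
    token sequence, the input a causal transformer is trained on offline
    to predict the action token at each timestep."""
    seq = []
    for R, s, a in zip(returns_to_go(rewards), states, actions):
        seq.extend([("rtg", R), ("state", s), ("action", a)])
    return seq
```

For example, a 3-step trajectory with a single terminal reward of 1.0 yields a return-to-go of 1.0 at every step and a 9-token sequence; at inference time one would instead seed the sequence with a desired target return.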
