
Topology design and graph embedding for decentralized federated learning

Authors: Yubin Duan, Xiuqi Li, and Jie Wu

Affiliation: Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA

Publication: Intelligent and Converged Networks

Year/Volume/Issue: 2024, Vol. 5, No. 2

Pages: 100-115


Subject classification: 070801 [Science - Solid Earth Geophysics]; 07 [Science]; 08 [Engineering]; 0708 [Science - Geophysics]; 0816 [Engineering - Surveying and Mapping Science and Technology]

Funding: This work was supported in part by the National Science Foundation (NSF) (Nos. SaTC 2310298, CNS 2214940, CPS 2128378, CNS 2107014, CNS 2150152, CNS 1824440, CNS 1828363, and CNS 1757533).

Keywords: data heterogeneity; decentralized federated learning; graph embedding; network topology

Abstract: Federated learning has been widely employed in many applications to protect the data privacy of participating devices. Although the dataset is decentralized among training devices in federated learning, the model parameters are usually stored on a centralized server. Centralized federated learning is easy to implement; however, a centralized scheme causes a communication bottleneck at the central server, which may significantly slow down the training process. To improve training efficiency, we investigate the decentralized federated learning scheme. The decentralized scheme has become feasible with the rapid development of device-to-device communication techniques. However, the convergence rate of learning models in the decentralized scheme depends on the network topology design. We propose optimizing the topology design to improve training efficiency for decentralized federated learning, which is a non-trivial problem, especially when considering data heterogeneity. In this paper, we first demonstrate the advantage of hypercube topology and present a hypercube graph construction method to reduce data heterogeneity by carefully selecting the neighbors of each training device, a process that resembles classic graph embedding. In addition, we propose a heuristic method for generating torus graphs. Moreover, we explore the communication patterns in hypercube topology and propose a sequential synchronization scheme to reduce communication cost during training. A batch synchronization scheme is presented to fine-tune the communication pattern for hypercube topologies. Experiments on real-world datasets show that our proposed graph construction methods can accelerate the training process, and our sequential synchronization scheme can significantly reduce the overall communication traffic during training.
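To make the hypercube structure behind the abstract concrete, the sketch below shows the standard d-dimensional hypercube neighbor rule (each node's neighbors differ by exactly one bit of its ID) and a dimension-by-dimension pairwise averaging pass, in the spirit of the sequential synchronization the paper describes. This is a minimal illustration with toy scalar "models", not the authors' implementation; the function names and the averaging details are assumptions for demonstration only.

```python
import numpy as np

def hypercube_neighbors(node, dim):
    """Neighbors of `node` in a dim-dimensional hypercube: flip one bit of the ID."""
    return [node ^ (1 << b) for b in range(dim)]

def sequential_sync(models, dim):
    """Synchronize dimension by dimension (illustrative, not the paper's exact scheme).

    In round b, every node averages with its neighbor across bit b. After
    round b, each node holds the average over a 2^(b+1)-node sub-cube, so
    after all `dim` rounds every node holds the global average.
    """
    models = {n: m.copy() for n, m in models.items()}
    for b in range(dim):
        stage = {}
        for n, m in models.items():
            partner = n ^ (1 << b)  # neighbor across dimension b
            stage[n] = 0.5 * (m + models[partner])
        models = stage
    return models

# Toy usage: 8 nodes (a 3-cube), each holding a scalar "model" equal to its ID.
dim = 3
models = {n: np.array([float(n)]) for n in range(2 ** dim)}
synced = sequential_sync(models, dim)
# after dim rounds, every node holds mean(0..7) = 3.5
```

Note that each node exchanges with only one neighbor per round, so full synchronization over 2^dim nodes costs just `dim` communication rounds, which is why hypercube-style schedules are attractive for reducing communication traffic.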
