A Convergence Study of SGD-Type Methods for Stochastic Optimization

Authors: Tiannan Xiao, Guoguo Yang

Affiliation: LMAM and School of Mathematical Sciences, Peking University, Beijing 100871, China

Journal: Numerical Mathematics: Theory, Methods and Applications (高等学校计算数学学报(英文版))

Year/Volume/Issue: 2023, Vol. 16, No. 4

Pages: 914-930

Subject Classification: 07 [Science], 0701 [Science - Mathematics], 070101 [Science - Pure Mathematics]

Funding: Supported by the NSFC (Grant No. 11825102), the China Postdoctoral Science Foundation (Grant No. 2023M730093), and the National Key R&D Program of China (Grant No. 2021YFA1003300)

Keywords: SGD, momentum SGD, Nesterov acceleration, time-averaged SGD, convergence analysis, non-convex

Abstract: In this paper, we first reinvestigate the convergence of the vanilla SGD method in the sense of L^2 under more general learning-rate conditions and a more general convexity assumption, which relaxes the conditions on the learning rates and does not require the problem to be strongly convex. Then, by taking advantage of the Lyapunov function technique, we present the convergence of the momentum SGD and Nesterov accelerated SGD methods for convex and non-convex problems under the L-smooth assumption, which extends the bounded gradient limitation to a certain extent. The convergence of time-averaged SGD is also analyzed.
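
For reference, the SGD-type iterations named in the abstract are commonly written as follows. This is a standard sketch in which the step sizes \alpha_k > 0, the momentum weight \beta \in [0, 1), and the stochastic gradient g(x, \xi) are generic placeholders; the paper's exact parameterization and learning-rate schedule may differ:

    Vanilla SGD:         x_{k+1} = x_k - \alpha_k g(x_k, \xi_k)
    Momentum SGD:        v_{k+1} = \beta v_k + g(x_k, \xi_k),    x_{k+1} = x_k - \alpha_k v_{k+1}
    Nesterov SGD:        y_k = x_k + \beta (x_k - x_{k-1}),      x_{k+1} = y_k - \alpha_k g(y_k, \xi_k)
    Time-averaged SGD:   \bar{x}_K = (1/K) \sum_{k=1}^{K} x_k    (Polyak-Ruppert averaging)

Here g(x, \xi) denotes a stochastic gradient of the objective at x, and time-averaged SGD reports the running average \bar{x}_K of the vanilla SGD iterates rather than the last iterate.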
