
CONVERGENCE OF ONLINE GRADIENT METHOD WITH A PENALTY TERM FOR FEEDFORWARD NEURAL NETWORKS WITH STOCHASTIC INPUTS

Authors: 邵红梅 (Shao Hongmei), 吴微 (Wu Wei), 李峰 (Li Feng)

Affiliation: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, PRC

Publication: Numerical Mathematics: A Journal of Chinese Universities (English Series)

Year/Volume/Issue: 2005, Vol. 14, No. 1

Pages: 87-96

Subject classification: 07 [Science], 0701 [Science - Mathematics], 070101 [Science - Pure Mathematics]

Funding: Partly supported by the National Natural Science Foundation of China and the Basic Research Program of the Committee of Science, Technology and Industry of National Defense of China

Keywords: feedforward neural network systems; convergence; random variables; monotonicity; boundedness principle; online gradient method

Abstract: The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are fed in a stochastic order. The monotonicity of the error function during the iterations and the boundedness of the weights are both guaranteed. We also present a numerical experiment to support our results.
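
To give a concrete picture of the training scheme the abstract describes, below is a minimal sketch: a single-hidden-layer sigmoid network trained by online gradient descent with an L2 (weight-decay) penalty, with the training example for each step drawn in a stochastic order. The network shape, penalty form, and step size are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: inputs in R^2, scalar targets (synthetic, for illustration only).
X = rng.standard_normal((200, 2))
y = sigmoid(X @ np.array([1.5, -2.0]))

n_hidden, eta, lam = 4, 0.1, 1e-3               # hidden size, step size, penalty weight
V = rng.standard_normal((2, n_hidden)) * 0.5    # input-to-hidden weights
w = rng.standard_normal(n_hidden) * 0.5         # hidden-to-output weights

for step in range(5000):
    i = rng.integers(len(X))                    # stochastic choice of training example
    h = sigmoid(X[i] @ V)                       # hidden-layer activations
    out = h @ w                                 # linear output unit
    err = out - y[i]
    # Gradient of the instantaneous squared error plus the L2 penalty term.
    grad_w = err * h + lam * w
    grad_V = np.outer(X[i], err * w * h * (1.0 - h)) + lam * V
    w -= eta * grad_w                           # online (per-example) updates
    V -= eta * grad_V

print("final mean squared error:", np.mean((sigmoid(X @ V) @ w - y) ** 2))
```

The penalty term acts as weight decay: each update shrinks the weights toward zero by a factor proportional to lam, which is what keeps the weight sequence bounded in the style of result the paper proves.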
