A multiplicative Gauss-Newton minimization algorithm: Theory and application to exponential functions
Author affiliation: Department of Electronics and Communication Engineering, Thapar Institute of Engineering and Technology, Patiala 147004, India
Publication: Applied Mathematics (A Journal of Chinese Universities) (高校应用数学学报(英文版)(B辑))
Year/Volume/Issue: 2021, Vol. 36, No. 3
Pages: 370-389
Subject classification: 07 [Science] 0701 [Science - Mathematics] 070101 [Science - Fundamental Mathematics]
Keywords: multiplicative calculus; multiplicative least square method; multiplicative Newton minimization; multiplicative Gauss-Newton minimization; non-linear exponential functions
Abstract: Multiplicative calculus (MUC) measures the rate of change of a function in terms of ratios, which makes exponential functions significantly linear in the framework of MUC. Hence, a generally non-linear optimization problem containing exponential functions becomes a linear problem in MUC. Taking this as motivation, this paper lays the mathematical foundation of the well-known classical Gauss-Newton minimization (CGNM) algorithm in the framework of MUC. The paper formulates the mathematical derivation of the proposed method, named the multiplicative Gauss-Newton minimization (MGNM) method, along with its convergence analysis. The proposed method is generalized to n variables, and all its theoretical concepts are validated by simulation results. Case studies have been conducted incorporating multiplicatively-linear and non-linear exponential functions. From the simulation results, it has been observed that the proposed MGNM method converges for 12972 out of the 19600 points considered while optimizing a multiplicatively-linear exponential function, whereas the CGNM and multiplicative Newton minimization methods converge for only 2111 and 9922 points, respectively. Moreover, for a given set of initial values, the proposed MGNM converges after only 2 iterations, compared with the 5 iterations taken by the other methods. A similar pattern is observed for multiplicatively-non-linear exponential functions. Hence, it can be said that the proposed method converges faster and for a larger range of initial values than conventional methods.
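The abstract describes MGNM only at a high level. As a rough illustrative sketch (not the paper's actual derivation), one common way to realize a multiplicative Gauss-Newton step is to apply classical Gauss-Newton to the log-ratio residuals r_i = ln(f(x_i; θ)/y_i); for a multiplicatively-linear model such as f(x) = exp(a + b·x) these residuals are exactly linear in (a, b), so a single Gauss-Newton step reaches the minimizer, consistent with the fast convergence the abstract reports. The function name `mgnm_fit` and the specific model are assumptions made for this sketch.

```python
import math

def mgnm_fit(xs, ys, a, b, iters=20, tol=1e-12):
    """Gauss-Newton on log-domain residuals r_i = ln f(x_i) - ln y_i
    for the multiplicatively-linear model f(x) = exp(a + b*x).
    This is an illustrative sketch, not the paper's exact algorithm."""
    for _ in range(iters):
        # Residuals in the log (multiplicative) domain.
        r = [(a + b * x) - math.log(y) for x, y in zip(xs, ys)]
        # Jacobian rows are (1, x); solve the 2x2 normal equations
        # (J^T J) [da, db]^T = -J^T r in closed form.
        n = len(xs)
        sx = sum(xs)
        sxx = sum(x * x for x in xs)
        sr = sum(r)
        sxr = sum(x * ri for x, ri in zip(xs, r))
        det = n * sxx - sx * sx
        da = -(sxx * sr - sx * sxr) / det
        db = -(n * sxr - sx * sr) / det
        a, b = a + da, b + db
        if math.hypot(da, db) < tol:
            break
    return a, b

# Synthetic data from f(x) = exp(1 + 2x); start far from the truth.
xs = [0.1 * i for i in range(1, 21)]
ys = [math.exp(1.0 + 2.0 * x) for x in xs]
a_hat, b_hat = mgnm_fit(xs, ys, a=5.0, b=-3.0)
```

Because the log-residuals are linear in the parameters for this model, the step above is exact and the iteration terminates almost immediately, regardless of the initial values.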