Solving Markov Decision Processes with Downside Risk Adjustment

Authors: Abhijit Gosavi, Anish Parulekar

Author affiliations: 219 Engineering Management Building, Department of Engineering Management and Systems Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA; Axis Bank, Mumbai, India

Published in: International Journal of Automation and Computing

Year/Volume/Issue: 2016, Vol. 13, No. 3

Pages: 235-245

Subject classification: 12 [Management]; 1201 [Management Science and Engineering (Management or Engineering degrees)]; 07 [Science]; 070105 [Operations Research and Cybernetics]; 0701 [Mathematics]

Funding: Coordination for the Improvement of Higher Level Personnel

Keywords: Downside risk; Markov decision processes; reinforcement learning; dynamic programming; targets; thresholds

Abstract: Markov decision processes (MDPs) and their variants are widely studied in the theory of control for stochastic discrete-event systems driven by Markov chains. Much of the literature focuses on the risk-neutral criterion, in which the expected rewards, either average or discounted, are maximized. Some literature on MDPs does take risk into account; much of it addresses the exponential utility (EU) function and mechanisms that penalize different forms of variance of the rewards. EU functions have some numerical deficiencies, while variance measures variability both above and below the mean reward; variability above the mean is usually beneficial and should not be penalized or avoided. As such, risk metrics that account for pre-specified targets (thresholds) for rewards have been considered in the literature, where the goal is to penalize the risk of revenues falling below those targets. Existing work on MDPs that takes targets into account seeks to minimize risks of this nature. Minimizing risk can lead to poor solutions in which the risk is zero or near zero but the average reward is also rather low. In this paper, we therefore study a risk-averse criterion, in particular the so-called downside risk, which equals the probability of the revenues falling below a given target; in contrast to minimizing such risk, we only reduce it, at the cost of slightly lowered average rewards. A solution in which the risk is low and the average reward is quite high, although not at its maximum attainable value, is very attractive in practice. More specifically, in our formulation, the objective function is the expected value of the rewards minus a scalar times the downside risk. In this setting, we analyze the infinite horizon MDP, the finite horizon MDP, and the infinite horizon semi-MDP (SMDP). We develop dynamic programming and reinforcement learning algorithms for the finite and infinite horizon cases, and the algorithms are tested in numerical studies.
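As a rough illustration of the downside-risk-adjusted objective described in the abstract, the sketch below estimates E[return] minus theta times P(return < target) from simulated episode returns. The names (theta, target, the two hypothetical policies) and the Monte Carlo setup are illustrative assumptions, not the authors' formulation or implementation.

```python
import random

def downside_risk_adjusted_score(returns, target, theta):
    """Estimate E[R] - theta * P(R < target) from a sample of episode returns.

    returns : list of total (or average) rewards observed per simulated episode
    target  : pre-specified revenue threshold
    theta   : scalar weight on the downside risk (strength of risk aversion)
    """
    mean_return = sum(returns) / len(returns)
    # Downside risk: empirical probability that the return falls below the target.
    downside_risk = sum(r < target for r in returns) / len(returns)
    return mean_return - theta * downside_risk

# Hypothetical usage: compare two policies by their simulated returns.
random.seed(0)
policy_a = [random.gauss(10.0, 4.0) for _ in range(10_000)]  # higher mean, more variable
policy_b = [random.gauss(9.0, 1.0) for _ in range(10_000)]   # slightly lower mean, safer

for name, rets in [("A", policy_a), ("B", policy_b)]:
    print(name, round(downside_risk_adjusted_score(rets, target=7.0, theta=5.0), 3))
```

With a sufficiently large theta the score favors the safer policy even though its mean reward is slightly lower, which mirrors the trade-off between average reward and downside risk that the paper describes.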
