Solving Finite-Horizon Discounted Non-Stationary MDPs

El Akraoui Bouchra, C. Daoui
DOI: 10.2478/foli-2023-0001
Journal: Folia Oeconomica Stetinensia (published 2023-06-01)

Abstract

Research background: Markov Decision Processes (MDPs) are a powerful framework for modeling many real-world finite-horizon problems in which the goal is to maximize the reward earned over a sequence of actions. However, many problems, such as investment and financial-market problems in which the value of a reward decreases exponentially with time, require the introduction of interest rates. Purpose: This study investigates non-stationary finite-horizon MDPs with a discount factor to account for fluctuations in rewards over time. Research methodology: To capture these fluctuations, the authors define new non-stationary finite-horizon MDPs with a discount factor. First, the existence of an optimal policy for the proposed finite-horizon discounted MDPs is proven. Next, a new Discounted Backward Induction (DBI) algorithm is presented to compute it. To demonstrate the value of the proposal, a financial model is used as an example of a finite-horizon discounted MDP, and an adaptive DBI algorithm is used to solve it. Results: The proposed method computes the optimal values of the investment that maximize its expected total return while accounting for the time value of money. Novelty: No existing studies have previously examined dynamic finite-horizon problems that account for temporal fluctuations in rewards.
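The discounted backward-induction idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the array layout (time-indexed transition tensors `P[t][a, s, s2]` and reward matrices `R[t][s, a]`), and the toy single-state example are all assumptions made for demonstration. The recursion computes, for each decision epoch t from the horizon backwards, the value Q(s, a) = r_t(s, a) + gamma * sum over s2 of p_t(s2 | s, a) * V_{t+1}(s2), taking the maximizing action at each state.

```python
import numpy as np

def discounted_backward_induction(P, R, gamma):
    """Backward induction for a finite-horizon discounted MDP.

    P : list of length T; P[t][a, s, s2] is the probability of moving
        from state s to state s2 under action a at decision epoch t
        (time-dependent, so the MDP may be non-stationary).
    R : list of length T; R[t][s, a] is the reward at epoch t.
    gamma : discount factor in (0, 1], modeling the time value of money.

    Returns the value function V (shape (T+1, S)) and a greedy
    policy pi (shape (T, S)) of optimal action indices.
    """
    T = len(R)
    n_states = R[0].shape[0]
    V = np.zeros((T + 1, n_states))          # terminal value V[T] = 0
    pi = np.zeros((T, n_states), dtype=int)
    for t in range(T - 1, -1, -1):
        # Q[s, a] = r_t(s, a) + gamma * sum_{s2} p_t(s2 | s, a) * V[t+1][s2]
        Q = R[t] + gamma * (P[t] @ V[t + 1]).T
        pi[t] = Q.argmax(axis=1)             # greedy action per state
        V[t] = Q.max(axis=1)
    return V, pi

# Toy example: one state, two actions (reward 0 or 1), gamma = 0.5, T = 3.
# The optimal return from epoch 0 is 1 + 0.5 + 0.25 = 1.75.
T, S, A = 3, 1, 2
P = [np.ones((A, S, S)) for _ in range(T)]
R = [np.array([[0.0, 1.0]]) for _ in range(T)]
V, pi = discounted_backward_induction(P, R, gamma=0.5)
print(V[0][0])   # 1.75
```

With gamma = 1 this reduces to ordinary backward induction for undiscounted finite-horizon MDPs; the discount factor is what lets later rewards count for exponentially less, matching the interest-rate interpretation in the abstract.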