Tackling Decision Processes with Non-Cumulative Objectives using Reinforcement Learning

Maximilian Nägele, Jan Olle, Thomas Fösel, Remmy Zen, Florian Marquardt
{"title":"利用强化学习处理非累积目标的决策过程","authors":"Maximilian Nägele, Jan Olle, Thomas Fösel, Remmy Zen, Florian Marquardt","doi":"arxiv-2405.13609","DOIUrl":null,"url":null,"abstract":"Markov decision processes (MDPs) are used to model a wide variety of\napplications ranging from game playing over robotics to finance. Their optimal\npolicy typically maximizes the expected sum of rewards given at each step of\nthe decision process. However, a large class of problems does not fit\nstraightforwardly into this framework: Non-cumulative Markov decision processes\n(NCMDPs), where instead of the expected sum of rewards, the expected value of\nan arbitrary function of the rewards is maximized. Example functions include\nthe maximum of the rewards or their mean divided by their standard deviation.\nIn this work, we introduce a general mapping of NCMDPs to standard MDPs. This\nallows all techniques developed to find optimal policies for MDPs, such as\nreinforcement learning or dynamic programming, to be directly applied to the\nlarger class of NCMDPs. Focusing on reinforcement learning, we show\napplications in a diverse set of tasks, including classical control, portfolio\noptimization in finance, and discrete optimization problems. Given our\napproach, we can improve both final performance and training time compared to\nrelying on standard MDPs.","PeriodicalId":501294,"journal":{"name":"arXiv - QuantFin - Computational Finance","volume":"52 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Tackling Decision Processes with Non-Cumulative Objectives using Reinforcement Learning\",\"authors\":\"Maximilian Nägele, Jan Olle, Thomas Fösel, Remmy Zen, Florian Marquardt\",\"doi\":\"arxiv-2405.13609\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Markov decision processes (MDPs) are used to model a wide variety of\\napplications ranging from game playing over robotics to finance. Their optimal\\npolicy typically maximizes the expected sum of rewards given at each step of\\nthe decision process. However, a large class of problems does not fit\\nstraightforwardly into this framework: Non-cumulative Markov decision processes\\n(NCMDPs), where instead of the expected sum of rewards, the expected value of\\nan arbitrary function of the rewards is maximized. Example functions include\\nthe maximum of the rewards or their mean divided by their standard deviation.\\nIn this work, we introduce a general mapping of NCMDPs to standard MDPs. This\\nallows all techniques developed to find optimal policies for MDPs, such as\\nreinforcement learning or dynamic programming, to be directly applied to the\\nlarger class of NCMDPs. Focusing on reinforcement learning, we show\\napplications in a diverse set of tasks, including classical control, portfolio\\noptimization in finance, and discrete optimization problems. 
Given our\\napproach, we can improve both final performance and training time compared to\\nrelying on standard MDPs.\",\"PeriodicalId\":501294,\"journal\":{\"name\":\"arXiv - QuantFin - Computational Finance\",\"volume\":\"52 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuantFin - Computational Finance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2405.13609\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Computational Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.13609","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Markov decision processes (MDPs) are used to model a wide variety of applications ranging from game playing over robotics to finance. Their optimal policy typically maximizes the expected sum of rewards given at each step of the decision process. However, a large class of problems does not fit straightforwardly into this framework: Non-cumulative Markov decision processes (NCMDPs), where instead of the expected sum of rewards, the expected value of an arbitrary function of the rewards is maximized. Example functions include the maximum of the rewards or their mean divided by their standard deviation. In this work, we introduce a general mapping of NCMDPs to standard MDPs. This allows all techniques developed to find optimal policies for MDPs, such as reinforcement learning or dynamic programming, to be directly applied to the larger class of NCMDPs. Focusing on reinforcement learning, we show applications in a diverse set of tasks, including classical control, portfolio optimization in finance, and discrete optimization problems. Given our approach, we can improve both final performance and training time compared to relying on standard MDPs.
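The abstract does not spell out the mapping itself, but its key idea can be illustrated for one of the example objectives it mentions, the maximum of the rewards. The following is a minimal, hypothetical sketch, not the paper's reference implementation: the wrapper name, the use of the gymnasium API, and the assumption of a 1-D Box observation space are ours. The state is augmented with the running maximum, and each reward is replaced by the increment of that maximum; the increments telescope, so their sum equals the final maximum, and a standard agent that maximizes the cumulative return thereby optimizes the non-cumulative objective.

```python
import gymnasium as gym
import numpy as np


class MaxRewardWrapper(gym.Wrapper):
    """Illustrative mapping of a max-of-rewards objective to a standard MDP.

    The observation is augmented with the running maximum m_{t-1}, and the
    reward is replaced by the increment r'_t = max(m_{t-1}, r_t) - m_{t-1}.
    Summing the increments over an episode telescopes to the final maximum,
    so a cumulative-return RL agent optimizes the non-cumulative objective.
    Assumes the wrapped environment has a 1-D Box observation space.
    """

    def __init__(self, env):
        super().__init__(env)
        # Extend the observation space by one dimension for the running maximum.
        low = np.append(env.observation_space.low, -np.inf)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)
        self.running_max = None

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.running_max = None
        # Placeholder value (0.0) for the running maximum before the first reward.
        return np.append(obs, 0.0), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if self.running_max is None:
            # First step: the maximum so far is simply the first reward.
            shaped_reward = reward
            self.running_max = reward
        else:
            new_max = max(self.running_max, reward)
            shaped_reward = new_max - self.running_max  # telescoping increment
            self.running_max = new_max
        # Augment the observation with the statistic that keeps the problem Markovian.
        return np.append(obs, self.running_max), shaped_reward, terminated, truncated, info
```

Wrapping an environment this way would let any off-the-shelf MDP algorithm train on the max-of-rewards objective without modification; objectives such as the mean divided by the standard deviation would require a richer set of running statistics in the augmented state.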