Vincent Zha, Ivey Chiu, Alexandre Guilbault, Jaime Tatis
2021 International Conference on Computing, Computational Modelling and Applications (ICCMA), published 2021-06-10
DOI: 10.1109/ICCMA53594.2021.00018
Citations: 0
Hyperspace Neighbor Penetration Approach to Dynamic Programming for Model-Based Reinforcement Learning Problems with Slowly Changing Variables in a Continuous State Space
Slowly changing variables in a continuous state space constitute an important category of reinforcement learning problems and arise in many domains, such as modeling a climate-control system whose temperature, humidity, and other variables change slowly over time. However, this subject is rarely addressed in the literature. Classical methods and their variants, such as Dynamic Programming with Tile Coding, which discretizes the state space, fail to handle slowly changing variables: they cannot capture the tiny change in each transition step, because establishing a sufficiently granular grid system is computationally expensive or infeasible. In this paper, we introduce a Hyperspace Neighbor Penetration (HNP) approach that solves this problem. In each transition step, HNP captures the state's partial "penetration" into neighboring hyper-tiles of the gridded hyperspace, so a transition need not cross tile boundaries for its change to be captured. HNP therefore works with a very coarse grid system, which makes the computation feasible. HNP assumes that the transition function is near-linear in a local region, a condition that is commonly satisfied. In summary, HNP can be orders of magnitude more efficient than classical methods in handling slowly changing variables in reinforcement learning. We have successfully deployed an industrial implementation of HNP.
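The core idea — crediting a state's fractional "penetration" into a neighboring tile rather than snapping it to a single tile — can be illustrated with a small value-iteration sketch. This is not the authors' code: the 1-D state space, grid size, reward, and drift dynamics below are all assumptions chosen to show why a per-step change far smaller than one tile width is still captured under penetration weighting.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's implementation):
# a 1-D state on [0, 1] discretized into a very coarse grid. Each
# transition drifts the state by 0.001 -- about 1% of one tile width --
# so tile-snapping DP would see no change at all. Penetration weighting
# splits the next state's value across its two nearest grid points.

N_TILES = 11                              # very coarse grid
grid = np.linspace(0.0, 1.0, N_TILES)
tile_w = grid[1] - grid[0]

def penetration_weights(x):
    """Return (left index, left weight, right weight) for a continuous
    state x, with the right weight equal to x's fractional penetration
    into the neighboring tile."""
    x = float(np.clip(x, 0.0, 1.0))
    i = min(int(x / tile_w), N_TILES - 2)  # left grid index
    frac = (x - grid[i]) / tile_w          # penetration into tile i+1
    return i, 1.0 - frac, frac

# Value iteration under a hypothetical reward (prefer states near 0.5)
# and a slow drift dynamic x' = x + 0.001.
reward = -np.abs(grid - 0.5)
gamma = 0.95
V = np.zeros(N_TILES)
for _ in range(500):
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        j, w_left, w_right = penetration_weights(x + 0.001)
        # Expected next value: penetration-weighted mix of neighbors,
        # so even a sub-tile drift shifts the backup slightly rightward.
        V_new[i] = reward[i] + gamma * (w_left * V[j] + w_right * V[j + 1])
    V = V_new
```

Because the weights vary continuously with the state, the backup changes smoothly even when the drift never crosses a tile boundary — which is exactly what a pure tile-coding discretization misses, and why the coarse grid suffices here.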