Ordinal Position Based Nonlinear Normalization Method in Temporal-Difference Reinforced Learning
Du Runle, Liu Jiaqi, Wang Yonghai, Jiang Zhiye, Zhou Di
2021 12th International Conference on Mechanical and Aerospace Engineering (ICMAE), July 16, 2021. DOI: 10.1109/ICMAE52228.2021.9522465
In an exo-atmospheric pursuit-evasion game, the pursued vehicle must perform avoidance maneuvers to evade the pursuing vehicle, so intelligent recognition of the pursuer's guidance behavior is essential. Reinforcement learning can achieve such intelligent behavior. Among reinforcement learning approaches, the temporal-difference (TD) method combines value estimates from different time steps to determine the value function of a policy, so it statistically requires less training time than the Monte Carlo method. To apply the TD method to the pursuit-evasion problem, the continuous state space must be mapped into a finite number of discrete states. In applying TD reinforcement learning to this problem, an ordinal position based nonlinear normalization method is proposed to convert the continuous state vector and control vector into discrete form, yielding a new method called the augmented temporal-difference reinforcement learning method. Simulation results demonstrate the effectiveness of this augmented TD method.
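The abstract does not give the exact form of the ordinal position based normalization, but a rank/quantile-based discretization is one natural reading of "ordinal position": bin edges are placed at empirical quantiles of sampled states, so the mapping from raw value to bin index is nonlinear whenever the state distribution is. The Python sketch below pairs such a discretizer with a standard tabular TD(0) update; the function names, bin count, toy dynamics, and reward are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# A minimal sketch (not the authors' implementation): tabular TD(0) over a
# continuous state discretized by an ordinal-position (quantile) normalization.


def ordinal_bin_edges(samples: np.ndarray, n_bins: int) -> np.ndarray:
    """Interior quantile edges: equal-occupancy bins, nonlinear in the raw value."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]  # interior quantile levels
    return np.quantile(samples, qs)


def discretize(x: float, edges: np.ndarray) -> int:
    """Map a continuous value to the index of its ordinal bin (0..n_bins-1)."""
    return int(np.searchsorted(edges, x))


# --- toy 1-D example: states drawn from a skewed distribution ---------------
rng = np.random.default_rng(0)
n_bins, alpha, gamma = 10, 0.1, 0.95
samples = rng.exponential(scale=2.0, size=10_000)   # skewed state samples
edges = ordinal_bin_edges(samples, n_bins)

V = np.zeros(n_bins)                                # tabular value estimates
for _ in range(5_000):
    s = rng.exponential(scale=2.0)                  # sampled continuous state
    s_next = 0.9 * s + rng.normal(scale=0.1)        # hypothetical dynamics
    r = -abs(s_next)                                # hypothetical reward
    i, j = discretize(s, edges), discretize(s_next, edges)
    V[i] += alpha * (r + gamma * V[j] - V[i])       # TD(0) bootstrap update
```

Under this reading, quantile bins keep the occupancy of each discrete state roughly uniform, so every tabular value estimate receives comparable amounts of data; that is one plausible motivation for a nonlinear, rank-based normalization over uniform binning of the raw state range.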