Ordinal Position Based Nonlinear Normalization Method in Temporal-Difference Reinforced Learning

Du Runle, Liu Jiaqi, Wang Yonghai, Jiang Zhiye, Zhou Di
{"title":"时间差分强化学习中基于有序位置的非线性归一化方法","authors":"Du Runle, Liu Jiaqi, Wang Yonghai, Jiang Zhiye, Zhou Di","doi":"10.1109/ICMAE52228.2021.9522465","DOIUrl":null,"url":null,"abstract":"In the scenario of exo-atmospheric chasing game, a pursued vehicle needs to make an avoidance maneuver to evade the pursuit of a pursuing vehicle. Thus, it is very important to realize an intelligent recognition for the guidance behavior of the pursuing vehicle. Reinforced learning has the ability to achieve such an intelligent action. Among different approaches of reinforced learning, Temporal-Difference method uses a combinatory estimation from different temporal steps to determine the value function of an output policy, thus it statistically costs less training time than the Monte Carlo method. To use Temporal-Difference method to study the evader-pursuit problem, it is necessary to map a continuous state space into a limited number of discrete states. With the application of Temporal-Difference reinforced learning to the problem, an ordinal position based nonlinear normalization method is proposed to convert the continuous state vector and control vector into a discrete form, such that a new method called augmented Temporal-Difference reinforced learning method is created. Simulation results demonstrate the effectiveness of this augmented temporal difference method.","PeriodicalId":161846,"journal":{"name":"2021 12th International Conference on Mechanical and Aerospace Engineering (ICMAE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Ordinal Position Based Nonlinear Normalization Method in Temporal-Difference Reinforced Learning\",\"authors\":\"Du Runle, Liu Jiaqi, Wang Yonghai, Jiang Zhiye, Zhou Di\",\"doi\":\"10.1109/ICMAE52228.2021.9522465\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the scenario of exo-atmospheric chasing game, a pursued vehicle needs to make an avoidance maneuver to evade the pursuit of a pursuing vehicle. Thus, it is very important to realize an intelligent recognition for the guidance behavior of the pursuing vehicle. Reinforced learning has the ability to achieve such an intelligent action. Among different approaches of reinforced learning, Temporal-Difference method uses a combinatory estimation from different temporal steps to determine the value function of an output policy, thus it statistically costs less training time than the Monte Carlo method. To use Temporal-Difference method to study the evader-pursuit problem, it is necessary to map a continuous state space into a limited number of discrete states. With the application of Temporal-Difference reinforced learning to the problem, an ordinal position based nonlinear normalization method is proposed to convert the continuous state vector and control vector into a discrete form, such that a new method called augmented Temporal-Difference reinforced learning method is created. 
Simulation results demonstrate the effectiveness of this augmented temporal difference method.\",\"PeriodicalId\":161846,\"journal\":{\"name\":\"2021 12th International Conference on Mechanical and Aerospace Engineering (ICMAE)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 12th International Conference on Mechanical and Aerospace Engineering (ICMAE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMAE52228.2021.9522465\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 12th International Conference on Mechanical and Aerospace Engineering (ICMAE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMAE52228.2021.9522465","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

In an exo-atmospheric pursuit-evasion game, the pursued vehicle must execute avoidance maneuvers to evade the pursuing vehicle, so it is important to intelligently recognize the pursuer's guidance behavior. Reinforcement learning can achieve such intelligent behavior. Among reinforcement learning approaches, the Temporal-Difference method determines the value function of the output policy from a combined estimate over different temporal steps, so it statistically requires less training time than the Monte Carlo method. To apply the Temporal-Difference method to the pursuit-evasion problem, the continuous state space must be mapped into a finite number of discrete states. Applying Temporal-Difference reinforcement learning to this problem, an ordinal position based nonlinear normalization method is proposed to convert the continuous state vector and control vector into discrete form, yielding a new method called the augmented Temporal-Difference reinforcement learning method. Simulation results demonstrate the effectiveness of this augmented Temporal-Difference method.
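
For context on the training-time comparison made in the abstract, the following is a minimal sketch of the standard tabular TD(0) value update that the Temporal-Difference method builds on; the state indices, step size alpha, and discount gamma are generic placeholders, not values from the paper.

```python
import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD(0) step: move V[s] toward the bootstrapped target
    r + gamma * V[s_next], rather than waiting for the complete episode
    return as the Monte Carlo method does."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Example: a 10-state value table updated after one observed transition.
V = np.zeros(10)
V = td0_update(V, s=3, r=1.0, s_next=4)
```

Because each update uses an estimate available after a single step, value information propagates through the table during an episode rather than only at its end, which is the statistical source of the reduced training time the abstract cites.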
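The abstract does not detail the ordinal position based nonlinear normalization itself. One plausible reading, sketched below under that assumption, is an equal-frequency (rank-based) discretization: each component of the continuous state or control vector is mapped to its ordinal position within a reference sample and then binned into a fixed number of discrete levels, making the normalization nonlinear in the raw values. All function names and the reference-sample construction here are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_ordinal_reference(samples):
    """Sort reference samples column-wise; each sorted column defines the
    ordinal positions used to discretize later observations.
    (Hypothetical reconstruction for illustration, not the paper's code.)"""
    return np.sort(np.asarray(samples, dtype=float), axis=0)

def ordinal_discretize(x, sorted_ref, n_bins):
    """Map a continuous vector to per-dimension bin indices via its rank
    (ordinal position) in the reference sample: an equal-frequency,
    hence nonlinear, mapping into n_bins discrete levels."""
    n = sorted_ref.shape[0]
    out = np.empty(len(x), dtype=int)
    for j, xj in enumerate(x):
        rank = np.searchsorted(sorted_ref[:, j], xj)   # position in [0, n]
        out[j] = min(rank * n_bins // (n + 1), n_bins - 1)
    return out

# Example: discretize a 4-dimensional state into 10 levels per dimension.
ref = np.random.randn(1000, 4)            # stand-in reference trajectories
sorted_ref = fit_ordinal_reference(ref)
state = np.array([0.3, -1.2, 0.0, 2.5])
print(ordinal_discretize(state, sorted_ref, n_bins=10))
```

An equal-frequency scheme keeps the discrete states roughly equally visited, which helps a tabular TD value table fill in evenly; whether the paper's method matches this reading cannot be confirmed from the abstract alone.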