Tatsuki Ashida, H. Ichihara
SICE Journal of Control, Measurement, and System Integration, 2021
DOI: 10.1080/18824889.2021.1972266
Policy iteration-based integral reinforcement learning for online adaptive trajectory tracking of mobile robot
This paper considers trajectory tracking control for a nonholonomic mobile robot using integral reinforcement learning (IRL) based on a value functional represented by integrating a local cost. The tracking error dynamics between the robot and the reference trajectory take the form of a time-invariant, input-affine, continuous-time nonlinear system when the reference translational and angular velocities are constant. This paper applies IRL to the tracking error dynamics by approximating the value functional from data collected along the robot trajectory. The paper proposes a specific procedure for implementing IRL-based policy iteration online, including a batch least-squares minimization. The approximate value function updates the control policy, which compensates for the translational and angular velocities that drive the robot. Numerical examples demonstrate the tracking performance of integral reinforcement learning.
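The core mechanism the abstract describes, policy iteration where the value functional is evaluated from integrated cost data via batch least squares and the policy is then improved from the value gradient, can be sketched on a toy problem. The scalar linear system, quadratic cost, gains, and quadratic basis below are illustrative assumptions for the sketch, not the paper's mobile-robot error dynamics; they are chosen because the resulting value weight has a known Riccati solution to check against.

```python
import numpy as np

# IRL policy iteration sketch on an assumed scalar system x' = a*x + b*u with
# local cost q*x^2 + r*u^2 and value basis V(x) = w*x^2 (all hypothetical).
a, b, q, r = -1.0, 1.0, 1.0, 1.0
dt, T = 1e-3, 0.5            # Euler step and IRL reinforcement interval

def rollout(x0, k):
    """Simulate the closed loop x' = (a - b*k)*x from x0 over [0, T];
    return x(T) and the integrated local cost (left-Riemann quadrature)."""
    x, cost = x0, 0.0
    for _ in range(int(T / dt)):
        u = -k * x
        cost += (q * x**2 + r * u**2) * dt
        x += (a * x + b * u) * dt
    return x, cost

k = 0.0                      # initial admissible policy (a < 0, so stable)
for _ in range(10):          # policy iteration
    X, y = [], []
    for x0 in np.linspace(0.5, 2.0, 8):   # data collected along trajectories
        xT, cost = rollout(x0, k)
        X.append(x0**2 - xT**2)           # phi(x(t)) - phi(x(t+T))
        y.append(cost)
    # Policy evaluation: batch least squares on the IRL Bellman equation
    #   w * (phi(x(t)) - phi(x(t+T))) = integral of the local cost over [t, t+T]
    w = np.linalg.lstsq(np.array(X)[:, None], np.array(y), rcond=None)[0][0]
    # Policy improvement: u = -(1/(2r)) * b * dV/dx = -(b*w/r) * x
    k = b * w / r

# w should approach the Riccati solution p = sqrt(2) - 1 for this system
```

Note that policy evaluation needs only data (state samples and integrated cost), not the drift dynamics, which is what makes the scheme adaptive; only the input gain b enters the improvement step.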