{"title":"Development of a Human-Like Learning Frame for Data-Driven Adaptive Control Algorithm of Automated Driving","authors":"K. Oh, Sechan Oh, Jongmin Lee, K. Yi","doi":"10.23919/ICCAS52745.2021.9649954","DOIUrl":null,"url":null,"abstract":"This paper proposes a human-like learning frame for data-driven adaptive control algorithm of automated driving. Generally, driving control algorithms for automated vehicles need environment information and relatively accurate system information like mathematical model and system parameters. Because there are unexpected uncertainties and changes in environment and system dynamic, derivation of relatively accurate mathematical model or dynamic parameters information is not easy in real world and it can have a negative impact on driving control performance. Therefore, this study proposes data-driven feedback control method for automated driving based on human-like learning frame in order to address the aforementioned limitation. The human-like learning frame is based on finite-memory like human and is divided into two parts such as control and decision parts. In the control part, it is designed that feedback gains are derived based on least squares method using saved error states and gains in finite-memory. And the control input has been computed using the derived feedback gains. After control input is used for driving control, it is designed that current error states and the used feedback gains are saved in the finite-memory real-time in the decision part if the time-derivative of cost function has a negative value. If the time-derivative of the cost function has greater than or equal to zero, it is designed that the feedback gains are updated using gradient descent method with sensitivity estimation and the used error states and gains are saved in the memory as a new data. The performance evaluation has been conducted using the Matlab/Simulink and CarMaker software for reasonable evaluation.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICCAS52745.2021.9649954","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
This paper proposes a human-like learning frame for a data-driven adaptive control algorithm for automated driving. In general, driving control algorithms for automated vehicles require environment information and relatively accurate system information, such as a mathematical model and system parameters. Because of unexpected uncertainties and changes in the environment and the system dynamics, deriving an accurate mathematical model or dynamic parameters is difficult in the real world, and this can degrade driving control performance. Therefore, this study proposes a data-driven feedback control method for automated driving based on a human-like learning frame to address this limitation. The learning frame relies on a finite memory, analogous to human memory, and is divided into two parts: a control part and a decision part. In the control part, the feedback gains are derived by the least squares method from the error states and gains saved in the finite memory, and the control input is computed using the derived gains. In the decision part, after the control input has been applied, the current error states and the used feedback gains are saved in the finite memory in real time if the time derivative of the cost function is negative. If the time derivative of the cost function is greater than or equal to zero, the feedback gains are updated by gradient descent with sensitivity estimation, and the corresponding error states and gains are saved in the memory as new data. The performance of the proposed method has been evaluated using Matlab/Simulink and the CarMaker software.
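To make the two-part structure concrete, the following Python sketch illustrates one way such a finite-memory learning frame could be organized. It is a minimal sketch under stated assumptions, not the paper's implementation: the class and variable names, the quadratic cost J = eᵀe, the scalar control input, the memory size, the learning rate, and the externally supplied sensitivity estimate de/dK are all assumptions introduced here for illustration.

```python
# Illustrative sketch of a finite-memory, human-like learning frame.
# All names, shapes, and the cost definition are assumptions, not the
# authors' actual formulation.

from collections import deque
import numpy as np


class FiniteMemoryAdaptiveController:
    """Data-driven feedback controller with a human-like finite memory."""

    def __init__(self, n_states: int, memory_size: int = 50,
                 learning_rate: float = 0.05):
        self.n = n_states
        # Finite memory of (error state, feedback gain) pairs, mimicking a
        # human who retains only a limited amount of recent experience.
        self.memory = deque(maxlen=memory_size)
        self.lr = learning_rate
        self.gain = np.ones(n_states)  # current feedback gain vector K

    # ----- control part -----
    def derive_gain(self) -> np.ndarray:
        """Fit a feedback gain to the stored (error, gain) data by ordinary
        least squares; fall back to the current gain while the memory is
        still too small to give a well-posed fit."""
        if len(self.memory) < self.n:
            return self.gain
        E = np.array([e for e, _ in self.memory])       # stacked error states
        U = np.array([k @ e for e, k in self.memory])   # past feedback terms
        gain, *_ = np.linalg.lstsq(E, U, rcond=None)
        return gain

    def control_input(self, error: np.ndarray) -> float:
        """Compute a (scalar) control input u = -K e with the fitted gain."""
        self.gain = self.derive_gain()
        return -float(self.gain @ error)

    # ----- decision part -----
    def decide_and_update(self, error: np.ndarray, cost_rate: float,
                          error_sensitivity: np.ndarray) -> None:
        """If the time derivative of the cost is negative, keep the current
        data as-is; otherwise correct the gain by gradient descent using an
        externally estimated sensitivity de/dK, then store the result."""
        if cost_rate >= 0.0:
            # dJ/dK ~= 2 e^T (de/dK) for the assumed quadratic cost J = e^T e.
            grad = 2.0 * error @ error_sensitivity
            self.gain = self.gain - self.lr * grad
        self.memory.append((error.copy(), self.gain.copy()))
```

In use, one would call `control_input(error)` each control step, apply the returned input to the vehicle, and then call `decide_and_update(...)` with the observed error, the estimated time derivative of the cost, and a sensitivity estimate; the paper's own gain derivation, cost definition, and sensitivity estimator may differ from this sketch.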