Title: Using Physiological Metrics to Improve Reinforcement Learning for Autonomous Vehicles
Authors: Michael Fleicher, Oren Musicant, A. Azaria
Venue: 2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)
Publication date: 2022-10-01
DOI: 10.1109/ICTAI56018.2022.00186
Citations: 1
Abstract
Thanks to recent technological advances, Autonomous Vehicles (AVs) are becoming available in some locations; the safety impact of these vehicles has, however, been difficult to assess. In this paper we use physiological metrics to improve the performance of a reinforcement learning agent that drives an autonomous vehicle in simulation. We measure the agent's performance along several dimensions, including the amount of stress imposed on potential passengers, the number of training episodes required, and a score combining the vehicle's speed with the distance it successfully travels without leaving the track or hitting another vehicle. To that end, we compose a human model based on a dataset of physiological metrics collected from passengers in an autonomous vehicle. We embed this model in the reinforcement learning agent by giving the agent a negative reward for actions that cause an increase in the human model's heart rate. We show that such a "passenger-aware" reinforcement learning agent not only reduces the stress imposed on hypothetical passengers but, quite surprisingly, also drives more safely, and its learning process is more effective than that of an agent that receives no rewards from a human model.
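The core mechanism described above, penalizing the agent when the human model predicts a heart-rate increase, is a form of reward shaping. The following is a minimal sketch of that idea, not the authors' code: the `heart_rate_delta` predictor here is a hypothetical stand-in for the model learned from the passenger dataset, and the penalty weight is an assumed hyperparameter.

```python
def heart_rate_delta(prev_speed, speed):
    """Toy stand-in for the learned physiological model: sharp changes in
    speed are assumed to raise the passenger's heart rate (in bpm).
    The real model in the paper is trained on recorded passenger data."""
    return max(0.0, abs(speed - prev_speed) - 1.0)  # small changes cost nothing

def shaped_reward(base_reward, prev_speed, speed, penalty_weight=0.5):
    """Combine the environment's driving reward with a negative term
    proportional to the predicted heart-rate increase."""
    return base_reward - penalty_weight * heart_rate_delta(prev_speed, speed)

# Smooth driving keeps the original reward; harsh braking is penalized.
r_smooth = shaped_reward(1.0, prev_speed=20.0, speed=20.5)  # -> 1.0
r_harsh = shaped_reward(1.0, prev_speed=20.0, speed=12.0)   # -> -2.5
```

In a full training loop this shaped reward would simply replace the environment's raw reward at every step, so any standard RL algorithm can be used unchanged.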