{"title":"混合交通中自动驾驶车辆变道的人机反馈强化学习","authors":"Yuting Wang, Lu Liu, Maonan Wang, Xi Xiong","doi":"arxiv-2408.04447","DOIUrl":null,"url":null,"abstract":"The burgeoning field of autonomous driving necessitates the seamless\nintegration of autonomous vehicles (AVs) with human-driven vehicles, calling\nfor more predictable AV behavior and enhanced interaction with human drivers.\nHuman-like driving, particularly during lane-changing maneuvers on highways, is\na critical area of research due to its significant impact on safety and traffic\nflow. Traditional rule-based decision-making approaches often fail to\nencapsulate the nuanced boundaries of human behavior in diverse driving\nscenarios, while crafting reward functions for learning-based methods\nintroduces its own set of complexities. This study investigates the application\nof Reinforcement Learning from Human Feedback (RLHF) to emulate human-like\nlane-changing decisions in AVs. An initial RL policy is pre-trained to ensure\nsafe lane changes. Subsequently, this policy is employed to gather data, which\nis then annotated by humans to train a reward model that discerns lane changes\naligning with human preferences. This human-informed reward model supersedes\nthe original, guiding the refinement of the policy to reflect human-like\npreferences. The effectiveness of RLHF in producing human-like lane changes is\ndemonstrated through the development and evaluation of conservative and\naggressive lane-changing models within obstacle-rich environments and mixed\nautonomy traffic scenarios. The experimental outcomes underscore the potential\nof RLHF to diversify lane-changing behaviors in AVs, suggesting its viability\nfor enhancing the integration of AVs into the fabric of human-driven traffic.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement Learning from Human Feedback for Lane Changing of Autonomous Vehicles in Mixed Traffic\",\"authors\":\"Yuting Wang, Lu Liu, Maonan Wang, Xi Xiong\",\"doi\":\"arxiv-2408.04447\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The burgeoning field of autonomous driving necessitates the seamless\\nintegration of autonomous vehicles (AVs) with human-driven vehicles, calling\\nfor more predictable AV behavior and enhanced interaction with human drivers.\\nHuman-like driving, particularly during lane-changing maneuvers on highways, is\\na critical area of research due to its significant impact on safety and traffic\\nflow. Traditional rule-based decision-making approaches often fail to\\nencapsulate the nuanced boundaries of human behavior in diverse driving\\nscenarios, while crafting reward functions for learning-based methods\\nintroduces its own set of complexities. This study investigates the application\\nof Reinforcement Learning from Human Feedback (RLHF) to emulate human-like\\nlane-changing decisions in AVs. An initial RL policy is pre-trained to ensure\\nsafe lane changes. Subsequently, this policy is employed to gather data, which\\nis then annotated by humans to train a reward model that discerns lane changes\\naligning with human preferences. This human-informed reward model supersedes\\nthe original, guiding the refinement of the policy to reflect human-like\\npreferences. 
The effectiveness of RLHF in producing human-like lane changes is\\ndemonstrated through the development and evaluation of conservative and\\naggressive lane-changing models within obstacle-rich environments and mixed\\nautonomy traffic scenarios. The experimental outcomes underscore the potential\\nof RLHF to diversify lane-changing behaviors in AVs, suggesting its viability\\nfor enhancing the integration of AVs into the fabric of human-driven traffic.\",\"PeriodicalId\":501309,\"journal\":{\"name\":\"arXiv - CS - Computational Engineering, Finance, and Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computational Engineering, Finance, and Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.04447\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computational Engineering, Finance, and Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.04447","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Reinforcement Learning from Human Feedback for Lane Changing of Autonomous Vehicles in Mixed Traffic
The burgeoning field of autonomous driving necessitates the seamless integration of autonomous vehicles (AVs) with human-driven vehicles, calling for more predictable AV behavior and enhanced interaction with human drivers. Human-like driving, particularly during lane-changing maneuvers on highways, is a critical area of research due to its significant impact on safety and traffic flow. Traditional rule-based decision-making approaches often fail to encapsulate the nuanced boundaries of human behavior in diverse driving scenarios, while crafting reward functions for learning-based methods introduces its own set of complexities. This study investigates the application of Reinforcement Learning from Human Feedback (RLHF) to emulate human-like lane-changing decisions in AVs. An initial RL policy is pre-trained to ensure safe lane changes. Subsequently, this policy is employed to gather data, which is then annotated by humans to train a reward model that discerns lane changes aligning with human preferences. This human-informed reward model supersedes the original, guiding the refinement of the policy to reflect human-like preferences. The effectiveness of RLHF in producing human-like lane changes is demonstrated through the development and evaluation of conservative and aggressive lane-changing models within obstacle-rich environments and mixed autonomy traffic scenarios. The experimental outcomes underscore the potential of RLHF to diversify lane-changing behaviors in AVs, suggesting its viability for enhancing the integration of AVs into the fabric of human-driven traffic.
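The abstract describes a two-stage pipeline: rollouts from the pre-trained policy are annotated by humans, a reward model is fit to those preference labels, and the learned reward then replaces the handcrafted one when the lane-changing policy is refined. As a rough illustration of the reward-model stage only, the sketch below fits a reward model to pairwise preferences with a Bradley-Terry loss. Everything here is an assumption for illustration rather than the paper's implementation: the state dimension, the discrete action set, the network sizes, and the names `RewardModel` and `preference_loss` are all hypothetical.

```python
# Hypothetical sketch of preference-based reward learning, in the spirit of the
# RLHF pipeline described in the abstract. Shapes and hyperparameters are
# illustrative assumptions, not values reported by the paper.
import torch
import torch.nn as nn

STATE_DIM = 26   # assumed ego + surrounding-vehicle features
N_ACTIONS = 3    # assumed discrete actions: keep lane, change left, change right


class RewardModel(nn.Module):
    """Scores a single (state, action) pair; a segment's reward is the sum over steps."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, states, actions_onehot):
        return self.net(torch.cat([states, actions_onehot], dim=-1)).squeeze(-1)


def preference_loss(reward_model, seg_a, seg_b, prefer_a):
    """Bradley-Terry loss: the human-preferred lane-change segment should score higher.

    seg_a, seg_b: tuples of (states [T, STATE_DIM], actions_onehot [T, N_ACTIONS])
    prefer_a:     scalar tensor in {0., 1.}, 1. if the annotator preferred segment A.
    """
    r_a = reward_model(*seg_a).sum()   # total predicted reward of segment A
    r_b = reward_model(*seg_b).sum()   # total predicted reward of segment B
    p_a = torch.sigmoid(r_a - r_b)     # probability that A is preferred
    return -(prefer_a * torch.log(p_a + 1e-8)
             + (1.0 - prefer_a) * torch.log(1.0 - p_a + 1e-8))


if __name__ == "__main__":
    torch.manual_seed(0)
    rm = RewardModel()
    opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

    # Toy stand-ins for one human-labelled pair of lane-change segments (T = 10 steps).
    T = 10
    seg_a = (torch.randn(T, STATE_DIM), torch.eye(N_ACTIONS)[torch.randint(N_ACTIONS, (T,))])
    seg_b = (torch.randn(T, STATE_DIM), torch.eye(N_ACTIONS)[torch.randint(N_ACTIONS, (T,))])
    label = torch.tensor(1.0)  # annotator preferred segment A

    opt.zero_grad()
    loss = preference_loss(rm, seg_a, seg_b, label)
    loss.backward()
    opt.step()
    print(f"preference loss: {loss.item():.4f}")
```

In the refinement stage that the abstract goes on to describe, this learned reward would be queried in place of the original handcrafted reward, for example by scoring each (state, action) step of new rollouts with the reward model before computing returns for the policy update; how the paper actually performs that update is not specified in the abstract.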