{"title":"From Words to Wheels: Automated Style-Customized Policy Generation for Autonomous Driving","authors":"Xu Han, Xianda Chen, Zhenghan Cai, Pinlong Cai, Meixin Zhu, Xiaowen Chu","doi":"arxiv-2409.11694","DOIUrl":null,"url":null,"abstract":"Autonomous driving technology has witnessed rapid advancements, with\nfoundation models improving interactivity and user experiences. However,\ncurrent autonomous vehicles (AVs) face significant limitations in delivering\ncommand-based driving styles. Most existing methods either rely on predefined\ndriving styles that require expert input or use data-driven techniques like\nInverse Reinforcement Learning to extract styles from driving data. These\napproaches, though effective in some cases, face challenges: difficulty\nobtaining specific driving data for style matching (e.g., in Robotaxis),\ninability to align driving style metrics with user preferences, and limitations\nto pre-existing styles, restricting customization and generalization to new\ncommands. This paper introduces Words2Wheels, a framework that automatically\ngenerates customized driving policies based on natural language user commands.\nWords2Wheels employs a Style-Customized Reward Function to generate a\nStyle-Customized Driving Policy without relying on prior driving data. By\nleveraging large language models and a Driving Style Database, the framework\nefficiently retrieves, adapts, and generalizes driving styles. A Statistical\nEvaluation module ensures alignment with user preferences. Experimental results\ndemonstrate that Words2Wheels outperforms existing methods in accuracy,\ngeneralization, and adaptability, offering a novel solution for customized AV\ndriving behavior. Code and demo available at\nhttps://yokhon.github.io/Words2Wheels/.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":"52 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11694","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Autonomous driving technology has witnessed rapid advancements, with foundation models improving interactivity and user experiences. However, current autonomous vehicles (AVs) face significant limitations in delivering command-based driving styles. Most existing methods either rely on predefined driving styles that require expert input or use data-driven techniques such as Inverse Reinforcement Learning to extract styles from driving data. These approaches, though effective in some cases, face challenges: difficulty obtaining specific driving data for style matching (e.g., in Robotaxis), inability to align driving style metrics with user preferences, and restriction to pre-existing styles, which limits customization and generalization to new commands. This paper introduces Words2Wheels, a framework that automatically generates customized driving policies from natural language user commands. Words2Wheels employs a Style-Customized Reward Function to generate a Style-Customized Driving Policy without relying on prior driving data. By leveraging large language models and a Driving Style Database, the framework efficiently retrieves, adapts, and generalizes driving styles, while a Statistical Evaluation module ensures alignment with user preferences. Experimental results demonstrate that Words2Wheels outperforms existing methods in accuracy, generalization, and adaptability, offering a novel solution for customized AV driving behavior. Code and demo are available at https://yokhon.github.io/Words2Wheels/.
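To make the pipeline described in the abstract concrete, the sketch below illustrates one plausible reading of it: retrieve the closest entry from a driving style database, adapt it into a style-customized reward function for the new command, and apply a simple statistical check over rollouts. This is a minimal illustration only; the names `StyleEntry`, `retrieve_similar_style`, `adapt_reward`, and `statistical_evaluation` are hypothetical, the word-overlap retrieval and hand-written adaptation rule stand in for the LLM calls, and nothing here reflects the authors' actual implementation.

```python
# Illustrative sketch only -- NOT the Words2Wheels implementation.
# All identifiers and the toy database contents are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class StyleEntry:
    """One entry in a hypothetical Driving Style Database."""
    command: str                      # natural-language style, e.g. "drive aggressively"
    reward_weights: Dict[str, float]  # weights over generic driving-behavior terms

# Toy database of previously generated styles (assumed contents).
STYLE_DB: List[StyleEntry] = [
    StyleEntry("drive aggressively", {"speed": 1.0, "headway": -0.2, "jerk": -0.1}),
    StyleEntry("drive cautiously",   {"speed": 0.3, "headway":  0.8, "jerk": -0.5}),
]

def retrieve_similar_style(command: str) -> StyleEntry:
    """Stand-in for LLM-based retrieval: pick the entry sharing the most words."""
    def overlap(entry: StyleEntry) -> int:
        return len(set(command.lower().split()) & set(entry.command.lower().split()))
    return max(STYLE_DB, key=overlap)

def adapt_reward(command: str, base: StyleEntry) -> Callable[[Dict[str, float]], float]:
    """Stand-in for adapting a retrieved style into a style-customized
    reward function for the new command (here, a hand-written rule)."""
    weights = dict(base.reward_weights)
    if "smooth" in command.lower():  # toy adaptation: penalize jerk more heavily
        weights["jerk"] = weights.get("jerk", 0.0) - 0.5
    def reward(state: Dict[str, float]) -> float:
        return sum(w * state.get(k, 0.0) for k, w in weights.items())
    return reward

def statistical_evaluation(reward: Callable[[Dict[str, float]], float],
                           rollouts: List[Dict[str, float]]) -> float:
    """Toy stand-in for the Statistical Evaluation module: mean reward over
    sampled rollouts, to be checked against the user's stated preference."""
    return sum(reward(s) for s in rollouts) / max(len(rollouts), 1)

if __name__ == "__main__":
    command = "drive aggressively but keep it smooth"
    base = retrieve_similar_style(command)
    reward_fn = adapt_reward(command, base)
    # Fake rollout statistics (speed in m/s, headway in s, jerk in m/s^3).
    rollouts = [{"speed": 30.0, "headway": 1.2, "jerk": 2.0},
                {"speed": 28.0, "headway": 1.5, "jerk": 1.5}]
    print(f"Retrieved base style: {base.command!r}")
    print(f"Mean styled reward over rollouts: {statistical_evaluation(reward_fn, rollouts):.2f}")
```

In the paper's framing, the reward function produced at this step would then be used to train the Style-Customized Driving Policy (e.g., via reinforcement learning); the sketch stops at reward construction and evaluation because those are the parts the abstract describes explicitly.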