{"title":"Game-Theoretic Cooperative Lane Changing Using Data-Driven Models","authors":"Guohui Ding, Sina Aghli, C. Heckman, Lijun Chen","doi":"10.1109/IROS.2018.8593725","DOIUrl":null,"url":null,"abstract":"Self-driving vehicles are being increasingly deployed in the wild. One of the most important next hurdles for autonomous driving is how such vehicles will optimally interact with one another and with their surroundings. In this paper, we consider the lane changing problem that is fundamental to road-bound multi-vehicle systems, and approach it through a combination of deep reinforcement learning (DRL) and game theory. We introduce a proactive-passive lane changing framework and formulate the lane changing problem as a Markov game between the proactive and passive vehicles. Based on different approaches to carry out DRL to solve the Markov game, we propose an asynchronous lane changing scheme as in a single-agent RL setting and a synchronous cooperative lane changing scheme that takes into consideration the adaptive behavior of the other vehicle in a vehicle's decision. Experimental results show that the synchronous scheme can effectively create and find proper merging moment after sufficient training. The framework and solution developed here demonstrate the potential of using reinforcement learning to solve multi-agent autonomous vehicle tasks such as the lane changing as they are formulated as Markov games.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"8 1","pages":"3640-3647"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2018.8593725","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
Self-driving vehicles are being increasingly deployed in the wild. One of the most important next hurdles for autonomous driving is how such vehicles will optimally interact with one another and with their surroundings. In this paper, we consider the lane changing problem, which is fundamental to road-bound multi-vehicle systems, and approach it through a combination of deep reinforcement learning (DRL) and game theory. We introduce a proactive-passive lane changing framework and formulate the lane changing problem as a Markov game between the proactive and passive vehicles. Based on different ways of applying DRL to solve the Markov game, we propose an asynchronous lane changing scheme, as in a single-agent RL setting, and a synchronous cooperative lane changing scheme in which each vehicle's decision takes into account the adaptive behavior of the other vehicle. Experimental results show that the synchronous scheme can effectively create and identify proper merging moments after sufficient training. The framework and solution developed here demonstrate the potential of using reinforcement learning to solve multi-agent autonomous vehicle tasks such as lane changing when they are formulated as Markov games.
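The abstract does not include implementation details, so the following is only a minimal illustrative sketch of the kind of two-player Markov game it describes: a proactive vehicle seeking to merge and a passive vehicle that may adapt, both acting synchronously on a shared state and receiving individual rewards. The state variables, action sets, dynamics, and reward values here are hypothetical placeholders, not the authors' model.

```python
# Illustrative Markov-game sketch (assumed values, not from the paper):
# both vehicles observe the same state, act simultaneously, and each
# receives its own reward; a learned DRL policy would replace the
# random action choices used below.
import random
from dataclasses import dataclass


@dataclass
class State:
    gap: float        # longitudinal gap between the two vehicles (m)
    rel_speed: float  # passive speed minus proactive speed (m/s)
    merged: bool      # whether the proactive vehicle has completed the change


PROACTIVE_ACTIONS = ["accelerate", "keep", "merge"]
PASSIVE_ACTIONS = ["yield", "keep", "close_gap"]


def step(state: State, a_pro: str, a_pas: str):
    """One synchronous transition; returns (next_state, r_pro, r_pas, done)."""
    gap, rel = state.gap, state.rel_speed

    # Hypothetical longitudinal dynamics with a unit time step.
    if a_pro == "accelerate":
        rel -= 1.0
    if a_pas == "yield":
        rel += 1.0
    elif a_pas == "close_gap":
        rel -= 1.0
    gap = max(0.0, gap + rel)

    merged = state.merged
    r_pro, r_pas, done = -0.1, -0.05, False  # small per-step costs

    if a_pro == "merge":
        if gap > 8.0:              # assumed "safe gap": merge succeeds
            merged, done = True, True
            r_pro += 10.0
        else:                      # unsafe merge attempt penalizes both agents
            r_pro -= 5.0
            r_pas -= 2.0

    return State(gap, rel, merged), r_pro, r_pas, done


if __name__ == "__main__":
    state = State(gap=5.0, rel_speed=0.0, merged=False)
    for t in range(20):
        a_pro = random.choice(PROACTIVE_ACTIONS)  # stand-ins for learned policies
        a_pas = random.choice(PASSIVE_ACTIONS)
        state, r_pro, r_pas, done = step(state, a_pro, a_pas)
        print(t, a_pro, a_pas, round(state.gap, 1), r_pro, r_pas)
        if done:
            break
```

In an asynchronous scheme, only one vehicle's policy would be trained at a time against a fixed opponent, whereas the synchronous cooperative scheme described in the abstract would update both policies while each accounts for the other's adaptive behavior.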