Title: Mixed Motivation Driven Social Multi-Agent Reinforcement Learning for Autonomous Driving
Authors: Long Chen; Peng Deng; Lingxi Li; Xuemin Hu
DOI: 10.1109/JAS.2025.125201
Journal: IEEE/CAA Journal of Automatica Sinica, vol. 12, no. 6, pp. 1272-1282
Publication date: 2025-06-13 (Journal Article)
Impact factor: 15.3; JCR: Q1 (Automation & Control Systems); Region: 1 (Computer Science)
URL: https://ieeexplore.ieee.org/document/11036678/
Citations: 0
Abstract
Although great achievements have been made in autonomous driving technologies, autonomous vehicles (AVs) still exhibit limited intelligence and lack social coordination, which is primarily attributed to their reliance on single-agent technologies that neglect inter-AV interactions. Current research on multi-agent autonomous driving (MAAD) predominantly focuses on either distributed individual learning or centralized cooperative learning, ignoring the mixed-motive nature of MAAD systems, in which each agent is not only self-interested in reaching its own destination but also needs to coordinate with other traffic participants to enhance efficiency and safety. Inspired by the mixed motivation underlying human driving behavior and the human learning process, we propose a novel mixed-motivation-driven social multi-agent reinforcement learning method for autonomous driving. In our method, a multi-agent reinforcement learning (MARL) algorithm, called Social Learning Policy Optimization (SoLPO), which takes advantage of both the individual and social learning paradigms, is proposed to empower agents to rapidly acquire self-interested policies and effectively learn socially coordinated behavior. Building on SoLPO, we further develop a mixed-motive MARL method for autonomous driving with a social reward integration module that models the mixed-motive nature of MAAD systems by integrating individual and neighbor rewards into a social learning objective, improving learning speed and effectiveness. Experiments conducted on the MetaDrive simulator show that our proposed method outperforms existing state-of-the-art MARL approaches in metrics including success rate, safety, and efficiency. Moreover, the AVs trained by our method form coordinated social norms and exhibit human-like driving behavior, demonstrating a high degree of social coordination.
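The abstract describes integrating each agent's individual reward with its neighbors' rewards to form a social learning objective. The paper's exact integration rule is not given here; the sketch below illustrates the general idea with an assumed weighted-sum form, where `alpha` (a hypothetical mixing coefficient) trades off self-interest against neighbor welfare and `adjacency` encodes which vehicles count as neighbors.

```python
import numpy as np

def social_reward(individual_rewards, adjacency, alpha=0.5):
    """Blend each agent's own reward with the mean reward of its neighbors.

    A generic mixed-motive reward-shaping sketch, NOT the SoLPO rule from
    the paper: the weighted-sum form and the `alpha` coefficient are
    assumptions made for illustration only.
    """
    r = np.asarray(individual_rewards, dtype=float)
    social = np.zeros_like(r)
    for i in range(len(r)):
        # Indices of agent i's neighbors in the (0/1) adjacency matrix.
        neighbors = np.flatnonzero(adjacency[i])
        neighbor_mean = r[neighbors].mean() if len(neighbors) else 0.0
        # Self-interested term plus a socially coordinated term.
        social[i] = alpha * r[i] + (1.0 - alpha) * neighbor_mean
    return social

# Three agents: agents 0 and 1 are mutual neighbors; agent 2 is isolated.
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])
print(social_reward([1.0, -0.5, 2.0], adj, alpha=0.5))  # → [0.25 0.25 2.]
```

An isolated agent's shaped reward falls back to the self-interested term only, which matches the intuition that coordination pressure should come solely from nearby traffic participants.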
Journal Introduction:
The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical/experimental research and development in the field of automation. The journal covers a wide range of topics including automatic control, artificial intelligence and intelligent control, systems theory and engineering, pattern recognition and intelligent systems, automation engineering and applications, information processing and information systems, network-based automation, robotics, sensing and measurement, and navigation, guidance, and control.
Additionally, the journal is abstracted/indexed in several prominent databases including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.