{"title":"Instigating Cooperation among LLM Agents Using Adaptive Information Modulation","authors":"Qiliang ChenSepehr, AlirezaSepehr, Ilami, Nunzio Lore, Babak Heydari","doi":"arxiv-2409.10372","DOIUrl":null,"url":null,"abstract":"This paper introduces a novel framework combining LLM agents as proxies for\nhuman strategic behavior with reinforcement learning (RL) to engage these\nagents in evolving strategic interactions within team environments. Our\napproach extends traditional agent-based simulations by using strategic LLM\nagents (SLA) and introducing dynamic and adaptive governance through a\npro-social promoting RL agent (PPA) that modulates information access across\nagents in a network, optimizing social welfare and promoting pro-social\nbehavior. Through validation in iterative games, including the prisoner\ndilemma, we demonstrate that SLA agents exhibit nuanced strategic adaptations.\nThe PPA agent effectively learns to adjust information transparency, resulting\nin enhanced cooperation rates. This framework offers significant insights into\nAI-mediated social dynamics, contributing to the deployment of AI in real-world\nteam settings.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"194 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computers and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10372","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper introduces a novel framework that combines LLM agents, acting as proxies for human strategic behavior, with reinforcement learning (RL) to engage these agents in evolving strategic interactions within team environments. Our approach extends traditional agent-based simulations by using strategic LLM agents (SLA) and introducing dynamic, adaptive governance through a pro-social promoting RL agent (PPA) that modulates information access across agents in a network, optimizing social welfare and promoting pro-social behavior. Through validation in iterated games, including the prisoner's dilemma, we demonstrate that SLA agents exhibit nuanced strategic adaptations. The PPA agent effectively learns to adjust information transparency, resulting in enhanced cooperation rates. This framework offers significant insights into AI-mediated social dynamics and contributes to the deployment of AI in real-world team settings.
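
To make the described architecture concrete, the sketch below illustrates the core loop the abstract implies: two stand-ins for strategic LLM agents play an iterated prisoner's dilemma while a governance agent learns whether revealing each player's last move improves social welfare. This is not the authors' implementation; the payoff values, the tit-for-tat-style SLA stub, the epsilon-greedy update, and all names (sla_move, PPA, run) are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): an RL governance agent (PPA)
# modulates information access between two "strategic LLM agent" (SLA) stand-ins
# playing an iterated prisoner's dilemma, and is rewarded with social welfare.

import random

# Standard prisoner's dilemma payoffs: (my_payoff, opp_payoff) keyed by (my_move, opp_move).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}


def sla_move(opp_last_move):
    """Stand-in for an LLM agent's strategic choice (assumed behavior).

    With the opponent's last move revealed it reciprocates (tit-for-tat);
    without that information it defects more often, a crude proxy for the
    cautious play expected under low transparency."""
    if opp_last_move is None:                      # no information disclosed
        return "D" if random.random() < 0.6 else "C"
    return opp_last_move                           # reciprocate


class PPA:
    """Governance agent choosing an information policy: 'reveal' or 'hide'."""

    def __init__(self, epsilon=0.1, lr=0.1):
        self.q = {"reveal": 0.0, "hide": 0.0}      # estimated social welfare per action
        self.epsilon, self.lr = epsilon, lr

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])


def run(rounds=2000, seed=0):
    random.seed(seed)
    ppa = PPA()
    last = {"a": None, "b": None}                  # each agent's previous move
    cooperation = 0
    for _ in range(rounds):
        action = ppa.act()
        # Information modulation: the PPA decides whether each agent sees
        # its opponent's last move before choosing.
        obs_a = last["b"] if action == "reveal" else None
        obs_b = last["a"] if action == "reveal" else None
        move_a, move_b = sla_move(obs_a), sla_move(obs_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        ppa.update(action, pay_a + pay_b)          # reward = social welfare this round
        last = {"a": move_a, "b": move_b}
        cooperation += (move_a == "C") + (move_b == "C")
    return ppa.q, cooperation / (2 * rounds)


if __name__ == "__main__":
    q_values, coop_rate = run()
    print("learned welfare estimates:", q_values)
    print("overall cooperation rate:", round(coop_rate, 3))
```

In this toy setting the learned welfare estimate for "reveal" exceeds that for "hide", mirroring the abstract's claim that adaptively increasing information transparency raises cooperation rates; the real framework replaces the stub policy with actual LLM agents and a richer RL state.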