Environment Complexity and Nash Equilibria in a Sequential Social Dilemma
Mustafa Yasir, Andrew Howes, Vasilios Mavroudis, Chris Hicks
arXiv - CS - Multiagent Systems, 2024-08-04 (arXiv:2408.02148)
Abstract
Multi-agent reinforcement learning (MARL) methods, while effective in zero-sum or positive-sum games, often yield suboptimal outcomes in general-sum games, where cooperation is essential for reaching globally optimal results. Matrix game social dilemmas, which abstract key aspects of general-sum interactions such as cooperation, risk, and trust, fail to model the temporal and spatial dynamics characteristic of real-world scenarios. In response, our study extends matrix game social dilemmas into more complex, higher-dimensional MARL environments. We adapt a gridworld implementation of the Stag Hunt dilemma to match the decision space of a one-shot matrix game more closely, while also introducing variable environment complexity.
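The abstract does not include the authors' environment code, so the following is only a minimal sketch of what a gridworld Stag Hunt with a tunable complexity knob might look like. The class name, grid layout, and payoff values (4 for a joint stag capture, 3 for a hare, 0 for hunting the stag alone) are illustrative assumptions, with grid size standing in for the paper's notion of environment complexity.

```python
import numpy as np

class GridStagHunt:
    """Two agents on an N x N grid; each can reach a shared stag (cooperate)
    or a nearby hare (defect). Payoffs mirror a one-shot Stag Hunt matrix."""

    STAG_BOTH = 4.0   # both agents on the stag cell
    STAG_ALONE = 0.0  # reaching the stag without a partner
    HARE = 3.0        # safe, lower payoff available unilaterally

    def __init__(self, grid_size=5, seed=None):
        self.grid_size = grid_size  # larger grids = harder coordination
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Scatter two agents, one stag, and two hares on distinct cells.
        cells = self.rng.choice(self.grid_size ** 2, size=5, replace=False)
        self.a1, self.a2, self.stag, self.h1, self.h2 = (
            np.array(divmod(int(c), self.grid_size)) for c in cells
        )
        return self._obs()

    def _obs(self):
        return np.concatenate([self.a1, self.a2, self.stag, self.h1, self.h2])

    def step(self, move1, move2):
        # Moves are (row, col) offsets such as (0, 1); clip to stay on grid.
        self.a1 = np.clip(self.a1 + np.asarray(move1), 0, self.grid_size - 1)
        self.a2 = np.clip(self.a2 + np.asarray(move2), 0, self.grid_size - 1)
        on_stag = [np.array_equal(a, self.stag) for a in (self.a1, self.a2)]
        rewards = []
        for caught, agent in zip(on_stag, (self.a1, self.a2)):
            if caught:
                rewards.append(self.STAG_BOTH if all(on_stag) else self.STAG_ALONE)
            elif np.array_equal(agent, self.h1) or np.array_equal(agent, self.h2):
                rewards.append(self.HARE)
            else:
                rewards.append(0.0)
        done = all(on_stag) or any(r > 0 for r in rewards)
        return self._obs(), rewards, done
```

Scaling `grid_size` lengthens the coordination horizon before any payoff arrives, which is one plausible way a single "complexity" parameter could be varied.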
Our findings indicate that as complexity increases, MARL agents trained in these environments converge to suboptimal strategies, consistent with the risk-dominant Nash equilibrium strategies found in matrix games. Our work highlights the impact of environment complexity on achieving optimal outcomes in higher-dimensional game-theoretic MARL environments.
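The risk-dominance result the abstract refers to can be checked directly on the underlying matrix game. The sketch below applies the Harsanyi-Selten deviation-loss criterion to an assumed symmetric Stag Hunt payoff matrix (the same illustrative 4/3/0 values as above); it is not taken from the paper.

```python
import numpy as np

# Row player's payoffs; the game is symmetric, so the column player gets A.T.
# Action 0 = Stag (cooperate), action 1 = Hare (defect). Values are assumed.
A = np.array([[4.0, 0.0],
              [3.0, 3.0]])

def pure_symmetric_nash(A):
    # (i, i) is a pure Nash equilibrium if unilateral deviation does not pay.
    return [(i, i) for i in range(2) if A[i, i] >= A[1 - i, i]]

def risk_dominant(A):
    # Harsanyi-Selten: compare the products of deviation losses at each
    # equilibrium; for a symmetric game these reduce to squared differences.
    loss_stag = (A[0, 0] - A[1, 0]) ** 2  # cost of leaving (Stag, Stag)
    loss_hare = (A[1, 1] - A[0, 1]) ** 2  # cost of leaving (Hare, Hare)
    return (0, 0) if loss_stag >= loss_hare else (1, 1)

print(pure_symmetric_nash(A))  # [(0, 0), (1, 1)]: two coordination equilibria
print(risk_dominant(A))        # (1, 1): Hare-Hare is risk-dominant
print(A[0, 0] > A[1, 1])       # True: yet (Stag, Stag) is payoff-dominant
```

This is the tension the paper probes: both coordination outcomes are Nash equilibria, but as the gridworld grows more complex, trained policies reportedly drift toward the risk-dominant hare outcome rather than the payoff-dominant stag one.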