Learning in Multi-Objective Public Goods Games with Non-Linear Utilities
Nicole Orzan, Erman Acar, Davide Grossi, Patrick Mannion, Roxana Rădulescu
arXiv:2408.00682 (arXiv - CS - Multiagent Systems, 2024-08-01)
Abstract
Addressing the question of how to achieve optimal decision-making under risk and uncertainty is crucial for enhancing the capabilities of artificial agents that collaborate with or support humans. In this work, we address this question in the context of Public Goods Games. Using multi-objective reinforcement learning, we study learning in a novel multi-objective version of the Public Goods Game in which agents have different risk preferences. We introduce a parametric non-linear utility function that models risk preferences at the level of individual agents, over the collective and individual reward components of the game. We study how such preference modelling interacts with environmental uncertainty to shape incentive alignment in the game. We demonstrate how different combinations of individual preferences and environmental uncertainties sustain the emergence of cooperative patterns in non-cooperative environments (i.e., where competitive strategies are dominant), while others sustain competitive patterns in cooperative environments (i.e., where cooperative strategies are dominant).
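To make the setup concrete, the sketch below shows a minimal multi-objective Public Goods Game round and one possible parametric non-linear utility over its two reward components. The abstract does not specify the paper's functional form or parameter names, so `multiplier`, `alpha`, `rho`, and `nonlinear_utility` are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def public_goods_rewards(contributions, endowment=1.0, multiplier=1.5):
    """Per-agent (individual, collective) reward components of one
    standard Public Goods Game round (illustrative setup)."""
    contributions = np.asarray(contributions, dtype=float)
    n = len(contributions)
    collective = multiplier * contributions.sum() / n   # equal share of the scaled pot
    individual = endowment - contributions              # what each agent keeps
    return individual, np.full(n, collective)

def nonlinear_utility(individual, collective, alpha=0.5, rho=1.0):
    """Hypothetical parametric non-linear utility over the two components:
    `alpha` trades off individual vs. collective reward, `rho` is a
    risk-sensitivity exponent (rho < 1 risk-averse, rho > 1 risk-seeking).
    The paper's exact utility function may differ."""
    blended = alpha * individual + (1.0 - alpha) * collective
    return np.sign(blended) * np.abs(blended) ** rho

# Example: three agents contributing different fractions of their endowment
ind, col = public_goods_rewards([0.0, 0.5, 1.0])
print(nonlinear_utility(ind, col, alpha=0.3, rho=0.8))
```

In such a sketch, each agent would optimise its own scalarised utility of the reward vector rather than a shared scalar payoff, which is what makes the game multi-objective and lets heterogeneous risk parameters shift the incentive alignment between agents.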