A comprehensive analysis of agent factorization and learning algorithms in multiagent systems
Andreas Kallinteris, Stavros Orfanoudakis, Georgios Chalkiadakis
Autonomous Agents and Multi-Agent Systems, vol. 38, no. 2
DOI: 10.1007/s10458-024-09662-9
Published: 2024-06-26
Citations: 0
Abstract
In multiagent systems, agent factorization denotes the process of segmenting the state-action space of the environment into distinct components, each corresponding to an individual agent, and subsequently determining the interactions among these agents. Effective agent factorization significantly influences the system performance of real-world industrial applications. In this work, we assess the performance impact of agent factorization when using different learning algorithms in multiagent coordination settings, and thus trace the performance of a multiagent solution to the combination of a particular factorization with a particular learning algorithm. To this end, we evaluate twelve different agent factorization instances—or agent definitions—in the warehouse traffic management domain, comparing the training performance of (primarily) three learning algorithms suitable for learning coordinated multiagent policies: Evolutionary Strategies (ES), Canonical Evolutionary Strategies (CES), and a genetic algorithm (CCEA) previously used in a similar setting. Our results demonstrate that the performance of different learning algorithms is affected in different ways by alternative agent definitions. Given this, we conclude that many important multiagent coordination problems can be solved more efficiently by a suitable agent factorization combined with an appropriate choice of learning algorithm. Moreover, our work shows that ES and CES are effective learning algorithms for the warehouse traffic management domain, while, interestingly, celebrated policy gradient methods do not fare well in this complex real-world problem setting. As such, our work offers insights into the intrinsic properties of the learning algorithms that make them well-suited for this problem domain.
More broadly, our work demonstrates the need to identify appropriate pairings of agent definitions with multiagent learning algorithms in order to solve specific complex problems effectively, and provides insights into the general characteristics that such pairings must possess to address broad classes of multiagent learning and coordination problems.
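The canonical evolutionary-strategies family referenced above can be illustrated with a minimal sketch of rank-based CES for a single agent's policy parameters. All names and hyperparameters here are illustrative assumptions, not the authors' implementation: each generation samples Gaussian perturbations of the current parameter vector, ranks them by fitness, and recombines the best with log-rank weights.

```python
import numpy as np

def canonical_es(objective, theta0, pop_size=20, parents=10,
                 sigma=0.1, iters=200, seed=0):
    """Canonical ES sketch: sample perturbations, rank by fitness,
    recombine the top `parents` with log-rank weights (maximization)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    # Log-rank recombination weights, largest weight for the best sample.
    w = np.log(parents + 0.5) - np.log(np.arange(1, parents + 1))
    w /= w.sum()
    for _ in range(iters):
        # One generation: pop_size Gaussian perturbations of theta.
        eps = rng.standard_normal((pop_size, theta.size))
        scores = np.array([objective(theta + sigma * e) for e in eps])
        top = np.argsort(-scores)[:parents]  # indices of the best samples
        # Move theta toward the weighted mean of the elite perturbations.
        theta = theta + sigma * (w @ eps[top])
    return theta
```

In a multiagent setting such as warehouse traffic management, one such optimizer would be run per agent (or per factorized component), with the objective being the coordinated system reward.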
About the journal:
This is the official journal of the International Foundation for Autonomous Agents and Multi-Agent Systems. It provides a leading forum for disseminating significant original research results in the foundations, theory, development, analysis, and applications of autonomous agents and multi-agent systems. Coverage in Autonomous Agents and Multi-Agent Systems includes, but is not limited to:
Agent decision-making architectures and their evaluation, including: cognitive models; knowledge representation; logics for agency; ontological reasoning; planning (single and multi-agent); reasoning (single and multi-agent)
Cooperation and teamwork, including: distributed problem solving; human-robot/agent interaction; multi-user/multi-virtual-agent interaction; coalition formation; coordination
Agent communication languages, including: their semantics, pragmatics, and implementation; agent communication protocols and conversations; agent commitments; speech act theory
Ontologies for agent systems, agents and the semantic web, agents and semantic web services, Grid-based systems, and service-oriented computing
Agent societies and societal issues, including: artificial social systems; environments, organizations and institutions; ethical and legal issues; privacy, safety and security; trust, reliability and reputation
Agent-based system development, including: agent development techniques, tools and environments; agent programming languages; agent specification or validation languages
Agent-based simulation, including: emergent behavior; participatory simulation; simulation techniques, tools and environments; social simulation
Agreement technologies, including: argumentation; collective decision making; judgment aggregation and belief merging; negotiation; norms
Economic paradigms, including: auction and mechanism design; bargaining and negotiation; economically-motivated agents; game theory (cooperative and non-cooperative); social choice and voting
Learning agents, including: computational architectures for learning agents; evolution, adaptation; multi-agent learning.
Robotic agents, including: integrated perception, cognition, and action; cognitive robotics; robot planning (including action and motion planning); multi-robot systems.
Virtual agents, including: agents in games and virtual environments; companion and coaching agents; modeling personality, emotions; multimodal interaction; verbal and non-verbal expressiveness
Significant, novel applications of agent technology
Comprehensive reviews and authoritative tutorials of research and practice in agent systems
Comprehensive and authoritative reviews of books dealing with agents and multi-agent systems.