Cooperative Multi-agent Inverse Reinforcement Learning Based on Selfish Expert and its Behavior Archives

Yukiko Fukumoto, Masakazu Tadokoro, K. Takadama
{"title":"Cooperative Multi-agent Inverse Reinforcement Learning Based on Selfish Expert and its Behavior Archives","authors":"Yukiko Fukumoto, Masakazu Tadokoro, K. Takadama","doi":"10.1109/SSCI47803.2020.9308491","DOIUrl":null,"url":null,"abstract":"This paper explores the multi-agent inverse reinforcement learning (MAIRL) method which enables the agents to acquire their cooperative behaviors based on selfish expert behaviors (i.e., it is generated from the viewpoint of a single agent). Since such selfish expert behaviors may not derive cooperative behaviors among agents, this paper tackles this problem by archiving the cooperative behaviors found in the learning process and by replacing the original expert behaviors with the archived one at a certain interval. For this issue, this paper proposes AMAIRL (Archive Multi-Agent Inverse Reinforcement Learning). Through the intensive simulations of the maze problem for our method, the following implications have been revealed: (1) AMAIRL is superior to MaxEntIRL in terms of finding cooperative behavior; (2) AMAIRL requires a long interval period to acquire the cooperative behaviors. In particular, AMAIRL with the long interval can find the cooperative behaviors that are hard to be found in AMAIRL with the short interval.","PeriodicalId":413489,"journal":{"name":"2020 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI47803.2020.9308491","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

This paper explores a multi-agent inverse reinforcement learning (MAIRL) method that enables agents to acquire cooperative behaviors from selfish expert behaviors (i.e., behaviors generated from the viewpoint of a single agent). Since such selfish expert behaviors may not give rise to cooperative behaviors among agents, this paper tackles the problem by archiving the cooperative behaviors found during the learning process and replacing the original expert behaviors with the archived ones at a fixed interval. To this end, the paper proposes AMAIRL (Archive Multi-Agent Inverse Reinforcement Learning). Intensive simulations on a maze problem reveal the following: (1) AMAIRL is superior to MaxEntIRL at finding cooperative behavior; (2) AMAIRL requires a long interval to acquire cooperative behaviors. In particular, AMAIRL with a long interval can find cooperative behaviors that are hard to find with a short interval.
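The abstract only sketches the archive-and-replace mechanism, so a minimal Python sketch of that loop is given below, assuming a generic IRL inner step. The names `irl_update`, `is_cooperative`, and `amairl_sketch`, and the parameters `n_iterations` and `interval`, are hypothetical stand-ins; the paper's actual learner, cooperation criterion, and interval semantics are not specified in this abstract.

```python
from typing import List

def irl_update(expert_trajs: List[list]) -> List[list]:
    """Placeholder for one MaxEnt-IRL-style learning step.

    In the real method this would estimate a reward from the current
    expert behaviors and return trajectories sampled from the learned
    policy; here it simply echoes its input so the sketch runs.
    """
    return expert_trajs  # stub

def is_cooperative(traj: list) -> bool:
    """Placeholder cooperation test (e.g., all agents reach their goals
    in the maze without blocking one another). The paper's criterion is
    not given in the abstract."""
    return False  # stub: no trajectory qualifies

def amairl_sketch(expert_trajs: List[list],
                  n_iterations: int = 1000,
                  interval: int = 100) -> List[list]:
    """Archive cooperative behaviors found during learning and swap them
    in as the expert at a fixed interval, per the abstract."""
    archive: List[list] = []         # cooperative behaviors found so far
    current_expert = expert_trajs    # learning starts from the selfish expert

    for t in range(n_iterations):
        # One IRL update against the current expert behaviors.
        trajs = irl_update(current_expert)

        # Archive any trajectories meeting the cooperation criterion.
        archive.extend(tr for tr in trajs if is_cooperative(tr))

        # Every `interval` iterations, replace the expert behaviors with
        # the archived cooperative ones (the abstract's key mechanism).
        if archive and (t + 1) % interval == 0:
            current_expert = list(archive)

    return current_expert
```

Under this reading, the interval controls a trade-off the abstract highlights: a longer interval lets more cooperative behaviors accumulate in the archive before the expert is replaced, which would explain why the long-interval variant finds cooperative behaviors that the short-interval variant misses.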