Evaluating the learning and performance characteristics of self-organizing systems with different task features

IF 1.7 | CAS Tier 3 (Engineering & Technology) | JCR Q3 (Computer Science, Artificial Intelligence)
Hao Ji, Yan Jin
{"title":"Evaluating the learning and performance characteristics of self-organizing systems with different task features","authors":"Hao Ji, Yan Jin","doi":"10.1017/S089006042100024X","DOIUrl":null,"url":null,"abstract":"Abstract Self-organizing systems (SOS) are developed to perform complex tasks in unforeseen situations with adaptability. Predefining rules for self-organizing agents can be challenging, especially in tasks with high complexity and changing environments. Our previous work has introduced a multiagent reinforcement learning (RL) model as a design approach to solving the rule generation problem of SOS. A deep multiagent RL algorithm was devised to train agents to acquire the task and self-organizing knowledge. However, the simulation was based on one specific task environment. Sensitivity of SOS to reward functions and systematic evaluation of SOS designed with multiagent RL remain an issue. In this paper, we introduced a rotation reward function to regulate agent behaviors during training and tested different weights of such reward on SOS performance in two case studies: box-pushing and T-shape assembly. Additionally, we proposed three metrics to evaluate the SOS: learning stability, quality of learned knowledge, and scalability. Results show that depending on the type of tasks; designers may choose appropriate weights of rotation reward to obtain the full potential of agents’ learning capability. Good learning stability and quality of knowledge can be achieved with an optimal range of team sizes. Scaling up to larger team sizes has better performance than scaling downwards.","PeriodicalId":50951,"journal":{"name":"Ai Edam-Artificial Intelligence for Engineering Design Analysis and Manufacturing","volume":"35 1","pages":"404 - 422"},"PeriodicalIF":1.7000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ai Edam-Artificial Intelligence for Engineering Design Analysis and Manufacturing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1017/S089006042100024X","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 3

Abstract

Self-organizing systems (SOS) are developed to perform complex tasks in unforeseen situations with adaptability. Predefining rules for self-organizing agents can be challenging, especially in tasks with high complexity and changing environments. Our previous work introduced a multiagent reinforcement learning (RL) model as a design approach to solving the rule generation problem of SOS. A deep multiagent RL algorithm was devised to train agents to acquire the task and self-organizing knowledge. However, the simulation was based on one specific task environment. The sensitivity of SOS to reward functions and the systematic evaluation of SOS designed with multiagent RL remain open issues. In this paper, we introduced a rotation reward function to regulate agent behaviors during training and tested different weights of this reward on SOS performance in two case studies: box-pushing and T-shape assembly. Additionally, we proposed three metrics to evaluate the SOS: learning stability, quality of learned knowledge, and scalability. Results show that, depending on the type of task, designers may choose appropriate weights of the rotation reward to realize the full potential of the agents' learning capability. Good learning stability and quality of knowledge can be achieved within an optimal range of team sizes. Scaling up to larger team sizes yields better performance than scaling down.
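The abstract describes a rotation reward that is weighted against the primary task reward to regulate agent behavior during training, but it does not give the reward's exact functional form. The sketch below is a minimal illustration under assumed names and conventions (the function name, the sign of the rotation term, and the default weight are hypothetical, not taken from the paper); it only shows how a designer-chosen weight could trade task progress against excessive turning.

```python
import math

def combined_reward(task_reward: float,
                    heading_change_rad: float,
                    rotation_weight: float = 0.1) -> float:
    """Sketch of a weighted reward signal (names and defaults are assumptions):
    the task reward is combined with a rotation term that penalizes large
    per-step heading changes, scaled by a designer-chosen weight."""
    rotation_term = -abs(heading_change_rad)  # discourage unnecessary turning
    return task_reward + rotation_weight * rotation_term

# Example: a box-pushing step earning task reward 1.0 while turning 30 degrees,
# evaluated with a rotation weight of 0.2.
r = combined_reward(task_reward=1.0,
                    heading_change_rad=math.radians(30),
                    rotation_weight=0.2)
```

In this formulation, sweeping the weight (as the paper does across the box-pushing and T-shape assembly cases) changes how strongly rotation is discouraged relative to task progress.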
Source Journal

CiteScore: 4.40
Self-citation rate: 14.30%
Articles published per year: 27
Review time: >12 weeks
Journal description: The journal publishes original articles about significant AI theory and applications based on the most up-to-date research in all branches and phases of engineering. Suitable topics include: analysis and evaluation; selection; configuration and design; manufacturing and assembly; and concurrent engineering. Specifically, the journal is interested in the use of AI in planning, design, analysis, simulation, qualitative reasoning, spatial reasoning and graphics, manufacturing, assembly, process planning, scheduling, numerical analysis, optimization, distributed systems, multi-agent applications, cooperation, cognitive modeling, learning and creativity. AI EDAM is also interested in original, major applications of state-of-the-art knowledge-based techniques to important engineering problems.