Multi-agent Learning by Trial and Error for Resource Leveling during Multi-Project (Re)scheduling

Laura Tosselli, Verónica Bogado, E. Martínez
{"title":"Multi-agent Learning by Trial and Error for Resource Leveling during Multi-Project (Re)scheduling","authors":"Laura Tosselli, Verónica Bogado, E. Martínez","doi":"10.24215/16666038.18.E14","DOIUrl":null,"url":null,"abstract":"In a multi-project context within enterprise networks, reaching feasible solutions to the (re)scheduling problem represents a major challenge, mainly when scarce resources are shared among projects. The multi-project (re)scheduling must achieve the most efficient possible resource usage without increasing the prescribed project constraints, considering the Resource Leveling Problem (RLP), whose objective is to level the consumption of resources shared in order to minimize their idle times and to avoid overallocation conflicts. In this work, a multi-agent solution that allows solving the Resource Constrained Multi-project Scheduling Problem (RCMPSP) and the Resource Investment Problem is extended to incorporate indicators on agents’ payoff functions to address the Resource Leveling Problem in a decentralized and autonomous way, through decoupled rules based on Trial-and-Error approach. The proposed agent-based simulation model is tested through a set of project instances that vary in their structure, parameters, number of resources shared, etc. Results obtained are assessed through different scheduling goals, such as project total duration, project total cost and leveling resource usage. Our results are far better compared to the ones obtained with alternative approaches. This proposal shows that the interacting agents that implement decoupled learning rules find a solution which can be understood as a Nash equilibrium.","PeriodicalId":188846,"journal":{"name":"J. Comput. Sci. Technol.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Comput. Sci. Technol.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24215/16666038.18.E14","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In a multi-project context within enterprise networks, reaching feasible solutions to the (re)scheduling problem is a major challenge, particularly when scarce resources are shared among projects. Multi-project (re)scheduling must achieve the most efficient resource usage possible without relaxing the prescribed project constraints, taking into account the Resource Leveling Problem (RLP), whose objective is to level the consumption of shared resources in order to minimize their idle times and to avoid overallocation conflicts. In this work, a multi-agent solution for the Resource-Constrained Multi-Project Scheduling Problem (RCMPSP) and the Resource Investment Problem is extended by incorporating leveling indicators into the agents' payoff functions, so that the Resource Leveling Problem is addressed in a decentralized and autonomous way through decoupled rules based on a trial-and-error approach. The proposed agent-based simulation model is tested on a set of project instances that vary in their structure, parameters, number of shared resources, etc. The results obtained are assessed against different scheduling goals, such as total project duration, total project cost and leveled resource usage, and are far better than those obtained with alternative approaches. This proposal shows that the interacting agents implementing decoupled learning rules find a solution that can be understood as a Nash equilibrium.
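
The abstract describes decoupled trial-and-error learning rules whose payoffs incorporate resource-leveling indicators. The sketch below illustrates that general idea only; it is not the authors' model. The agent class, the single shared renewable resource, the capacity and experimentation parameters, and the payoff (an overallocation penalty plus a usage-variability term) are all assumptions introduced here for illustration.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of a decoupled
# trial-and-error learning rule applied to resource leveling: each agent
# manages one task, proposes a start period, and updates using only its own
# observed payoff.

import random

HORIZON = 20    # scheduling horizon in periods (assumed)
CAPACITY = 4    # per-period capacity of the shared resource (assumed)
EPSILON = 0.1   # experimentation probability (assumed)


class TaskAgent:
    """One agent per task; it observes only its own action and payoff."""

    def __init__(self, duration, demand):
        self.duration = duration   # periods the task occupies
        self.demand = demand       # resource units used per period
        self.start = random.randrange(HORIZON - duration + 1)  # benchmark action
        self.benchmark_payoff = float("-inf")

    def propose(self):
        # Trial and error: with probability EPSILON try a new start time,
        # otherwise repeat the current benchmark action.
        if random.random() < EPSILON:
            return random.randrange(HORIZON - self.duration + 1)
        return self.start

    def update(self, proposed_start, payoff):
        # Decoupled update: keep the proposal only if the agent's own payoff
        # improves; no information about other agents is used directly.
        if payoff > self.benchmark_payoff:
            self.start, self.benchmark_payoff = proposed_start, payoff


def resource_profile(agents, starts):
    """Aggregate per-period usage of the shared resource."""
    usage = [0] * HORIZON
    for agent, start in zip(agents, starts):
        for t in range(start, start + agent.duration):
            usage[t] += agent.demand
    return usage


def payoff(agent, start, usage):
    # Illustrative leveling-oriented payoff: penalize overallocation in the
    # periods the task occupies and, more weakly, overall usage variability.
    over = sum(max(0, usage[t] - CAPACITY)
               for t in range(start, start + agent.duration))
    mean = sum(usage) / HORIZON
    variability = sum((u - mean) ** 2 for u in usage) / HORIZON
    return -(10.0 * over + variability)


if __name__ == "__main__":
    random.seed(1)
    agents = [TaskAgent(duration=random.randint(2, 5), demand=random.randint(1, 3))
              for _ in range(8)]
    for _ in range(500):  # repeated (re)scheduling rounds
        starts = [a.propose() for a in agents]
        usage = resource_profile(agents, starts)
        for a, s in zip(agents, starts):
            a.update(s, payoff(a, s, usage))
    print("start times:", [a.start for a in agents])
    print("usage per period:", resource_profile(agents, [a.start for a in agents]))
```

Because each agent keeps a proposal only when its own observed payoff improves, the update rule is decoupled in the sense used in the abstract; each payoff still depends on the others' actions through the shared resource profile. Whether such dynamics settle into a Nash equilibrium for a given instance depends on the payoff design, which the paper evaluates against total duration, total cost and leveled resource usage.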