Decentralized Self-adaptation in the Presence of Partial Knowledge with Reduced Coordination Overhead

Kishan Kumar Ganguly, Moumita Asad, K. Sakib
{"title":"减少协调开销的局部知识下的分散自适应","authors":"Kishan Kumar Ganguly, Moumita Asad, K. Sakib","doi":"10.5815/ijitcs.2022.01.02","DOIUrl":null,"url":null,"abstract":"Decentralized self-adaptive systems consist of multiple control loops that adapt some local and system-level global goals of each locally managed system or component in a decentralized setting. As each component works together in a decentralized environment, a control loop cannot take adaptation decisions independently. Therefore, all the control loops need to exchange their adaptation decisions to infer a global knowledge about the system. Decentralized self-adaptation approaches in the literature uses the global knowledge to take decisions that optimize both local and global goals. However, coordinating in such an unbounded manner impairs scalability. This paper proposes a decentralized self-adaptation technique using reinforcement learning that incorporates partial knowledge in order to reduce coordination overhead. The Q-learning algorithm based on Interaction Driven Markov Games is utilized to take adaptation decisions as it enables coordination only when it is beneficial. Rather than using unbounded number of peers, the adaptation control loop coordinates with a single peer control loop. The proposed approach was evaluated on a service-based Tele Assistance System. It was compared to random, independent and multiagent learners that assume global knowledge. It was observed that, in all cases, the proposed approach conformed to both local and global goals while maintaining comparatively lower coordination overhead.","PeriodicalId":130361,"journal":{"name":"International Journal of Information Technology and Computer Science","volume":"277 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Decentralized Self-adaptation in the Presence of Partial Knowledge with Reduced Coordination Overhead\",\"authors\":\"Kishan Kumar Ganguly, Moumita Asad, K. Sakib\",\"doi\":\"10.5815/ijitcs.2022.01.02\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Decentralized self-adaptive systems consist of multiple control loops that adapt some local and system-level global goals of each locally managed system or component in a decentralized setting. As each component works together in a decentralized environment, a control loop cannot take adaptation decisions independently. Therefore, all the control loops need to exchange their adaptation decisions to infer a global knowledge about the system. Decentralized self-adaptation approaches in the literature uses the global knowledge to take decisions that optimize both local and global goals. However, coordinating in such an unbounded manner impairs scalability. This paper proposes a decentralized self-adaptation technique using reinforcement learning that incorporates partial knowledge in order to reduce coordination overhead. The Q-learning algorithm based on Interaction Driven Markov Games is utilized to take adaptation decisions as it enables coordination only when it is beneficial. Rather than using unbounded number of peers, the adaptation control loop coordinates with a single peer control loop. The proposed approach was evaluated on a service-based Tele Assistance System. It was compared to random, independent and multiagent learners that assume global knowledge. 
It was observed that, in all cases, the proposed approach conformed to both local and global goals while maintaining comparatively lower coordination overhead.\",\"PeriodicalId\":130361,\"journal\":{\"name\":\"International Journal of Information Technology and Computer Science\",\"volume\":\"277 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-02-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Information Technology and Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5815/ijitcs.2022.01.02\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Technology and Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5815/ijitcs.2022.01.02","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Decentralized self-adaptive systems consist of multiple control loops that adapt the local and system-level global goals of each locally managed system or component in a decentralized setting. Because the components work together in a decentralized environment, a control loop cannot make adaptation decisions independently. Therefore, all control loops need to exchange their adaptation decisions to infer global knowledge about the system. Decentralized self-adaptation approaches in the literature use this global knowledge to make decisions that optimize both local and global goals. However, coordinating in such an unbounded manner impairs scalability. This paper proposes a decentralized self-adaptation technique using reinforcement learning that incorporates partial knowledge in order to reduce coordination overhead. A Q-learning algorithm based on Interaction-Driven Markov Games is used to make adaptation decisions, as it enables coordination only when coordination is beneficial. Rather than coordinating with an unbounded number of peers, each adaptation control loop coordinates with a single peer control loop. The proposed approach was evaluated on a service-based Tele Assistance System and compared against random, independent, and multiagent learners that assume global knowledge. In all cases, the proposed approach conformed to both local and global goals while maintaining comparatively lower coordination overhead.
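
The abstract's key mechanism is a Q-learner that restricts coordination to a single peer and uses the peer's input only when coordination is expected to pay off. Below is a minimal Python sketch of that idea, not the paper's implementation: the two Q-tables, the gain test, and the act/update interface (including the peer_action argument) are illustrative assumptions standing in for the Interaction-Driven Markov Game machinery the paper actually uses.

```python
import random
from collections import defaultdict

class SelectiveCoordinationQLearner:
    """Q-learner for one control loop that coordinates with at most a
    single peer, and only when the joint Q-value promises a gain over
    acting alone. Illustrative sketch, not the authors' implementation."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q_solo = defaultdict(float)    # Q[(state, action)]
        self.q_joint = defaultdict(float)   # Q[(state, action, peer_action)]

    def act(self, state, peer_action):
        """peer_action is the single peer's announced choice; obtaining it
        is the one coordination message this scheme allows per step.
        Returns (action, coordinated_flag)."""
        if random.random() < self.epsilon:
            return random.choice(self.actions), False   # explore
        best_solo = max(self.actions, key=lambda a: self.q_solo[(state, a)])
        best_joint = max(self.actions,
                         key=lambda a: self.q_joint[(state, a, peer_action)])
        gain = (self.q_joint[(state, best_joint, peer_action)]
                - self.q_solo[(state, best_solo)])
        # Coordinate only when it is expected to pay off; an IDMG-based
        # learner would additionally learn in which states to ask at all.
        return (best_joint, True) if gain > 0 else (best_solo, False)

    def update(self, state, action, peer_action, reward, next_state):
        # Simplification: both tables are trained toward the same TD target.
        future = max(self.q_solo[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * future
        for table, key in ((self.q_solo, (state, action)),
                           (self.q_joint, (state, action, peer_action))):
            table[key] += self.alpha * (target - table[key])
```

In a service-based setting such as the Tele Assistance System, the state could encode observed service reliability and the actions could select among alternative service providers; the gain test then suppresses peer messages whenever local knowledge already suffices, which is how coordination overhead stays bounded at one peer per decision.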