Muhammad Naveed Abbas, Paul Liston, Brian Lee, Yuansong Qiao
Title: CESDQL: Communicative experience-sharing deep Q-learning for scalability in multi-robot collaboration with sparse reward
Journal: Knowledge-Based Systems, Volume 306, Article 112714 (Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Impact Factor 7.2)
DOI: 10.1016/j.knosys.2024.112714
Publication date: 2024-11-12
URL: https://www.sciencedirect.com/science/article/pii/S0950705124013480
Citations: 0
Abstract
Owing to the massive transformation of industrial processes and logistics, warehouses are undergoing advanced automation. The deployment of Autonomous Mobile Robots (multi-robots) is an important element of overall warehouse automation. The autonomous collaborative behaviour of the multi-robots can be framed as a control task and thus optimised using multi-agent reinforcement learning (MARL). Consequently, an autonomous warehouse can be represented as an MARL environment. An MARL environment replicating an autonomous warehouse poses an exploration challenge: rewards are sparse, which leads to inefficient collaboration. This challenge worsens as the number of robots and the grid size increase, i.e., with scalability. This research proposes Communicative Experience-Sharing Deep Q-Learning (CESDQL), a novel hybrid multi-robot communicative framework based on Q-learning, for scalable MARL collaboration under sparse rewards, where exploration is challenging and makes collaboration difficult. CESDQL combines experience-sharing, via collective sampling from a shared Experience (Replay) buffer, with communication through the Communicative Deep Recurrent Q-Network (CommDRQN), a Q-function approximator. Empirical evaluation of CESDQL in a variety of collaborative scenarios establishes that it outperforms the baselines in terms of convergence and learning stability. Overall, CESDQL achieves 5%, 69%, 60%, 211%, 171%, 3.8% and 10% higher final accumulative training returns than the closest-performing baseline across the scenarios, and 27%, 10.33% and 573% higher final average training returns than the closest-performing baseline in the big-scale scenarios.
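The experience-sharing component described in the abstract (collective sampling from a shared replay buffer) can be illustrated with a minimal sketch. This is a hypothetical illustration of the general idea, not the paper's implementation: the class name `SharedReplayBuffer` and all parameters are assumptions, and CommDRQN itself (the recurrent Q-function approximator) is omitted.

```python
import random
from collections import deque

class SharedReplayBuffer:
    """A single replay buffer pooled across all robots: each robot pushes
    its own transitions, and any robot may sample from the collective pool.
    Hypothetical sketch of the experience-sharing idea only."""

    def __init__(self, capacity=10_000):
        # deque with maxlen discards the oldest transitions once full
        self.buffer = deque(maxlen=capacity)

    def push(self, agent_id, state, action, reward, next_state, done):
        self.buffer.append((agent_id, state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Collective sampling: a robot may draw transitions gathered by any
        # other robot, so a rare sparse-reward event found by one robot can
        # inform the updates of all of them.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Usage: two robots contribute transitions; sampling draws from the pool.
buf = SharedReplayBuffer()
buf.push(0, (0, 0), 1, 0.0, (0, 1), False)
buf.push(1, (3, 3), 2, 1.0, (3, 4), True)  # robot 1 hit the sparse reward
batch = buf.sample(2)
print(len(batch))  # 2
```

Under sparse rewards this pooling matters because most transitions carry zero reward; sharing the buffer raises every robot's chance of training on the few informative ones.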
Journal introduction:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems based on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.