An offline reinforcement learning-based framework for proactive robot assistance in assembly task

Authors: Yingchao You, Boliang Cai, Ze Ji
DOI: 10.1016/j.cie.2025.111313
Journal: Computers & Industrial Engineering, Vol. 208, Article 111313
Impact Factor: 6.7 (JCR Q1, Computer Science, Interdisciplinary Applications)
Published: 2025-07-26 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0360835225004590
Proactive robot assistance plays a critical role in human–robot collaborative assembly (HRCA), enhancing operational efficiency, product quality and worker ergonomics. The industrial shift toward mass personalisation poses significant challenges for collaborative robots, which must adapt quickly to product changes in order to provide proactive assistance. State-of-the-art knowledge-based task planners in HRCA struggle to update their knowledge quickly enough to accommodate new products. Unlike conventional methods, this work studies learning proactive assistance by leveraging reinforcement learning (RL) to train a policy that can then be used for robot proactive assistance planning in HRCA. To address the limitations of online RL in this setting, we propose an offline RL framework in which a policy for proactive assistance is trained on a dataset visually extracted from human demonstrations. In particular, an RL algorithm with a conservative Q-value is used to train a planning policy in an actor–critic framework with a carefully designed state space and reward function. The experimental results show that, with only a few worker demonstrations as input, the algorithm can train a policy for proactive assistance in HRCA. The assistance provided by the robot fully meets the task requirements and improves human assembly preference satisfaction by 47.06% compared to a static strategy.
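The abstract describes training with "a conservative Q-value" to avoid the overestimation of actions unseen in the offline demonstration dataset. The paper's actual algorithm, state space, and reward function are not given here, so the following is only a minimal, hypothetical tabular sketch of the general conservative-Q idea (in the style of CQL): a standard TD update combined with a penalty that pushes down Q-values for all actions while pushing up the Q-value of the action actually observed in the data. The function name and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conservative_q_update(Q, s, a, r, s_next, alpha=1.0, gamma=0.99, lr=0.1):
    """One tabular Q update with a CQL-style conservative penalty.

    Q       : (num_states, num_actions) table of Q-values
    (s, a, r, s_next) : one transition taken from the offline dataset
    alpha   : weight of the conservative regulariser
    """
    # Softmax over actions in state s, computed before any update;
    # this is the gradient of log-sum-exp(Q[s]) used by the penalty.
    soft = np.exp(Q[s] - np.max(Q[s]))
    soft /= np.sum(soft)

    # Penalty gradient: push all actions down (softmax weights),
    # push the in-dataset action back up (the -1 term).
    grad_penalty = soft.copy()
    grad_penalty[a] -= 1.0

    # Standard TD step toward the bootstrapped target.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += lr * (td_target - Q[s, a])

    # Conservative step: lowers Q for out-of-distribution actions.
    Q[s] -= lr * alpha * grad_penalty
    return Q
```

After one update on a rewarded transition, the demonstrated action's value rises while the unseen actions' values are pushed below it, which is the qualitative behaviour the conservative term exists to produce in the offline setting.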
About the journal:
Computers & Industrial Engineering (CAIE) is dedicated to researchers, educators, and practitioners in industrial engineering and related fields. Having pioneered the integration of computers in research, education, and practice, industrial engineering has evolved to make computers and electronic communication integral to its domain. CAIE publishes original contributions on the development of novel computerised methodologies for industrial engineering problems, and highlights applications of these methodologies to issues within the broader industrial engineering and associated communities. The journal actively encourages submissions that push the boundaries of fundamental theories and concepts in industrial engineering techniques.