Improvements to Disassembly Lot Sizing With Task Control Through Reinforcement Learning

Sachini Weerasekara, Wei Li, Jacqueline Isaacs, Sagar Kamarthi
{"title":"Improvements to Disassembly Lot Sizing With Task Control Through Reinforcement Learning","authors":"Sachini Weerasekara,&nbsp;Wei Li,&nbsp;Jacqueline Isaacs,&nbsp;Sagar Kamarthi","doi":"10.1002/amp2.70032","DOIUrl":null,"url":null,"abstract":"<p>This research presents a novel methodology to control disassembly tasks for cost-efficient component recovery from end-of-life products, fostering remanufacturing. Inventory management is an integral part of systems that assemble or disassemble products. Unlike assembly systems, disassembly operations pose a unique challenge, as they can lead to inventory accumulation and risk uncontrolled growth without careful management. Disassembly system inventory management is complex due to various factors, including non-uniform demand for disassembled components, uncertainty in demands for salvage components, the arrival of different end-of-life product variants, end-of-life product condition variation, and processing time variation. These complexities often lead to unexpected inventory fluctuations, resulting in high inventory costs, inventory shortages, and customer dissatisfaction due to uncertainty in component availability. These inventory fluctuations can be mitigated if a real-time decision-making system supports disassembly processes. This study explores an innovative approach to addressing these complexities and controlling disassembly tasks using Deep Reinforcement Learning (DRL). This approach offers a more effective alternative to traditional methods. Experiments on Quantum-dot LED (QLED), Organic LED (OLED), and Quantum Dot OLED (QD-OLED) TV disassembly systems demonstrate the effectiveness of the DRL approach. Compared to the Multiple Elman Neural Networks (MENN) method, the DRL model offers a 21% reduction in inventory accumulation and a 12% improvement in demand satisfaction for the disassembly setup in the study.</p>","PeriodicalId":87290,"journal":{"name":"Journal of advanced manufacturing and processing","volume":"7 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://aiche.onlinelibrary.wiley.com/doi/epdf/10.1002/amp2.70032","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of advanced manufacturing and processing","FirstCategoryId":"1085","ListUrlMain":"https://aiche.onlinelibrary.wiley.com/doi/10.1002/amp2.70032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This research presents a novel methodology for controlling disassembly tasks to enable cost-efficient component recovery from end-of-life products, fostering remanufacturing. Inventory management is an integral part of systems that assemble or disassemble products. Unlike assembly systems, disassembly operations pose a unique challenge: they can lead to inventory accumulation and, without careful management, risk uncontrolled growth. Inventory management in disassembly systems is complex due to several factors, including non-uniform demand for disassembled components, uncertainty in the demand for salvaged components, the arrival of different end-of-life product variants, variation in end-of-life product condition, and variation in processing times. These complexities often lead to unexpected inventory fluctuations, resulting in high inventory costs, inventory shortages, and customer dissatisfaction due to uncertain component availability. Such fluctuations can be mitigated if a real-time decision-making system supports the disassembly process. This study explores an innovative approach that addresses these complexities and controls disassembly tasks using Deep Reinforcement Learning (DRL), offering a more effective alternative to traditional methods. Experiments on Quantum-dot LED (QLED), Organic LED (OLED), and Quantum-Dot OLED (QD-OLED) TV disassembly systems demonstrate the effectiveness of the DRL approach. Compared to the Multiple Elman Neural Networks (MENN) method, the DRL model achieves a 21% reduction in inventory accumulation and a 12% improvement in demand satisfaction for the disassembly setup in this study.
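To make the setting concrete, the sketch below illustrates the kind of formulation the abstract describes: a toy disassembly-inventory environment in which an agent chooses a disassembly lot size each period and is rewarded for satisfying component demand while being penalized for holding and shortage costs, trained with a minimal DQN-style update. Everything here is an illustrative assumption rather than the authors' implementation: the environment dynamics, cost coefficients, component counts, and the `ToyDisassemblyEnv` and `QNet` classes are hypothetical, and the paper's actual DRL model is the one compared against the MENN baseline on QLED, OLED, and QD-OLED TV disassembly data.

```python
# Minimal sketch (not the paper's implementation): a toy disassembly-inventory
# environment and a small DQN-style agent. All names, dynamics, and cost
# coefficients are illustrative assumptions.
import random
import numpy as np
import torch
import torch.nn as nn

class ToyDisassemblyEnv:
    """Each disassembled core yields up to one of each component.
    State: current component inventories plus this period's demand.
    Action: number of end-of-life cores to disassemble (0..MAX_LOTS).
    Reward: revenue for satisfied demand minus holding and shortage costs."""
    N_COMPONENTS = 3
    MAX_LOTS = 4

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.inventory = np.zeros(self.N_COMPONENTS)
        self.demand = self.rng.poisson(2.0, self.N_COMPONENTS)
        return self._obs()

    def _obs(self):
        return np.concatenate([self.inventory, self.demand]).astype(np.float32)

    def step(self, action):
        # Each core yields each component with probability 0.9 (imperfect recovery).
        recovered = self.rng.binomial(action, 0.9, self.N_COMPONENTS)
        self.inventory += recovered
        shipped = np.minimum(self.inventory, self.demand)
        shortage = self.demand - shipped
        self.inventory -= shipped
        reward = 5.0 * shipped.sum() - 1.0 * self.inventory.sum() - 3.0 * shortage.sum()
        # Stochastic, non-uniform demand for the next period.
        self.demand = self.rng.poisson(2.0, self.N_COMPONENTS)
        return self._obs(), float(reward)

class QNet(nn.Module):
    """Small Q-network mapping the state to a value per lot-size action."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

env = ToyDisassemblyEnv()
n_actions = env.MAX_LOTS + 1
qnet = QNet(obs_dim=2 * env.N_COMPONENTS, n_actions=n_actions)
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1

obs = env.reset()
for step in range(5000):
    # Epsilon-greedy selection over disassembly lot sizes.
    if random.random() < eps:
        action = random.randrange(n_actions)
    else:
        with torch.no_grad():
            action = int(qnet(torch.from_numpy(obs)).argmax())
    next_obs, reward = env.step(action)

    # One-step TD update (no replay buffer or target network in this sketch).
    q = qnet(torch.from_numpy(obs))[action]
    with torch.no_grad():
        target = reward + gamma * qnet(torch.from_numpy(next_obs)).max()
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    obs = next_obs
```

A full DRL controller for this problem would additionally use an experience replay buffer and a target network, and would encode the incoming end-of-life product mix, product condition, and processing-time variation in the state; those elements are omitted here for brevity.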
