Practical Reinforcement Learning for Adaptive Photolithography Scheduler in Mass Production

IF 2.3 · Region 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC
Eungjin Kim;Taehyung Kim;Dongcheol Lee;Hyeongook Kim;Sehwan Kim;Jaewon Kim;Woosub Kim;Eunzi Kim;Younggil Jin;Tae-Eog Lee
DOI: 10.1109/TSM.2023.3336909
Journal: IEEE Transactions on Semiconductor Manufacturing, vol. 37, no. 1, pp. 16-26
Published: 2023-11-28
URL: https://ieeexplore.ieee.org/document/10330650/
Citations: 0

Abstract

This work introduces a practical reinforcement learning (RL) technique to address the complex scheduling challenges in producing Active Matrix Organic Light Emitting Diode (AMOLED) displays. Specifically, we focus on autonomous optimization of the photolithography process, a critical bottleneck in fabrication. The proposed method outperforms the existing rule-based approach, which requires diverse rules and engineering experience to adapt to dynamic environments. Our proposed RL network is designed to produce effective schedules that align with the layered structure of the planning and scheduling modules used in mass production. In the training phase, historical production data is used to create a representative discrete event simulation environment. The RL agent, based on a Deep Q-Network, undergoes episodic training to learn optimal scheduling policies. To ensure safe and reliable scheduling decisions, we further introduce action filters and parallel competing schedulers. The performance of the RL-based Scheduler (RLS) is compared with that of the Rule-Based Scheduler (RBS) over a year of actual fabrication. Based on key performance indicators, we validate that the RLS outperforms the RBS, with remarkable improvements in step target matching, reduced setup times, and enhanced lot assignments. This work also paves the way for the gradual integration of AI-based algorithms into smart manufacturing practices.
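The abstract describes a DQN agent whose raw action choices pass through an "action filter" before execution, so that only safe and feasible scheduling decisions reach the fab. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: a stand-in for the trained Q-network scores candidate lot-to-machine assignments, and a boolean feasibility mask removes disallowed actions before greedy selection. The problem size, the network stand-in, and the rule inside `action_filter` are all invented for illustration.

```python
# Minimal sketch of action-filtered greedy selection over Q-values.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 6  # hypothetical: 6 candidate lot assignments for a photo tool


def q_values(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained Deep Q-Network forward pass."""
    w = rng.normal(size=(state.size, N_ACTIONS))
    return state @ w


def action_filter(state: np.ndarray) -> np.ndarray:
    """Boolean mask of actions allowed by feasibility/safety rules
    (e.g., reticle availability or queue-time limits; rules here are
    invented placeholders)."""
    mask = np.ones(N_ACTIONS, dtype=bool)
    mask[0] = False  # pretend action 0 violates a setup constraint
    return mask


def select_action(state: np.ndarray) -> int:
    q = q_values(state)
    mask = action_filter(state)
    # Filtered-out actions get -inf, so they can never be selected.
    q_masked = np.where(mask, q, -np.inf)
    return int(np.argmax(q_masked))


state = rng.normal(size=8)
a = select_action(state)
assert action_filter(state)[a]  # the chosen action always passes the filter
```

Masking Q-values rather than penalizing them during training is a common way to guarantee hard constraints at decision time, which matches the paper's emphasis on safe and reliable scheduling in production.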
Source journal: IEEE Transactions on Semiconductor Manufacturing (Engineering: Electrical & Electronic)
CiteScore: 5.20
Self-citation rate: 11.10%
Annual articles: 101
Review time: 3.3 months
Journal description: The IEEE Transactions on Semiconductor Manufacturing addresses the challenging problems of manufacturing complex microelectronic components, especially very large scale integrated circuits (VLSI). Manufacturing these products requires precision micropatterning, precise control of materials properties, ultraclean work environments, and complex interactions of chemical, physical, electrical and mechanical processes.