Deep Reinforcement Learning-Based Multi-Task Scheduling in Cloud Manufacturing under Different Task Arrival Modes

IF 2.4 · Region 3 (Engineering & Technology) · Q3 ENGINEERING, MANUFACTURING
Yaoyao Ping, Yongkui Liu, Lin Zhang, Lihui Wang, Xun Xu
{"title":"不同任务到达模式下基于深度强化学习的云制造多任务调度","authors":"Yaoyao Ping, Yongkui Liu, Lin Zhang, Lihui Wang, Xun Xu","doi":"10.1115/1.4062217","DOIUrl":null,"url":null,"abstract":"\n Cloud manufacturing is a manufacturing model that aims to provide on-demand resources and services to consumers over the Internet. Scheduling is one of the core techniques for cloud manufacturing to achieve the aim. Multi-task scheduling with dynamical task arrivals is an important research issue in the area of cloud manufacturing scheduling. Many traditional algorithms such as the genetic algorithm (GA) and ant colony optimization algorithm (ACO) have been used to solve the issue, which, however, are either incapable of or perform poorly in tackling the problem. Deep reinforcement learning (DRL) that combines artificial neural networks with reinforcement learning provides an effective technique in this regard. In view of this, we employ a typical deep reinforcement learning algorithm – Deep Q-network (DQN) – and proposed a DQN-based multi-task scheduling approach for cloud manufacturing. Three different task arrival modes – arriving at the same time, arriving in random batches, and arriving one by one sequentially – are considered. Four baseline approaches including random scheduling, round-robin scheduling, earliest scheduling, and minimum execution time scheduling are investigated. A comparison of results indicates that the DQN-based scheduling approach is able to effectively address the multi-task scheduling problem in cloud manufacturing and performs best among all approaches.","PeriodicalId":16299,"journal":{"name":"Journal of Manufacturing Science and Engineering-transactions of The Asme","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2023-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Reinforcement Learning-Based Multi-Task Scheduling in Cloud Manufacturing under Different Task Arrival Modes\",\"authors\":\"Yaoyao Ping, Yongkui Liu, Lin Zhang, Lihui Wang, Xun Xu\",\"doi\":\"10.1115/1.4062217\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Cloud manufacturing is a manufacturing model that aims to provide on-demand resources and services to consumers over the Internet. Scheduling is one of the core techniques for cloud manufacturing to achieve the aim. Multi-task scheduling with dynamical task arrivals is an important research issue in the area of cloud manufacturing scheduling. Many traditional algorithms such as the genetic algorithm (GA) and ant colony optimization algorithm (ACO) have been used to solve the issue, which, however, are either incapable of or perform poorly in tackling the problem. Deep reinforcement learning (DRL) that combines artificial neural networks with reinforcement learning provides an effective technique in this regard. In view of this, we employ a typical deep reinforcement learning algorithm – Deep Q-network (DQN) – and proposed a DQN-based multi-task scheduling approach for cloud manufacturing. Three different task arrival modes – arriving at the same time, arriving in random batches, and arriving one by one sequentially – are considered. Four baseline approaches including random scheduling, round-robin scheduling, earliest scheduling, and minimum execution time scheduling are investigated. 
A comparison of results indicates that the DQN-based scheduling approach is able to effectively address the multi-task scheduling problem in cloud manufacturing and performs best among all approaches.\",\"PeriodicalId\":16299,\"journal\":{\"name\":\"Journal of Manufacturing Science and Engineering-transactions of The Asme\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2023-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Manufacturing Science and Engineering-transactions of The Asme\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1115/1.4062217\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, MANUFACTURING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Manufacturing Science and Engineering-transactions of The Asme","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1115/1.4062217","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

Cloud manufacturing is a manufacturing model that aims to provide on-demand resources and services to consumers over the Internet. Scheduling is one of the core techniques by which cloud manufacturing achieves this aim. Multi-task scheduling with dynamic task arrivals is an important research issue in cloud manufacturing scheduling. Many traditional algorithms, such as the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm, have been applied to this problem, but they either fail to solve it or perform poorly. Deep reinforcement learning (DRL), which combines artificial neural networks with reinforcement learning, provides an effective technique in this regard. In view of this, we employ a typical deep reinforcement learning algorithm, the Deep Q-network (DQN), and propose a DQN-based multi-task scheduling approach for cloud manufacturing. Three different task arrival modes are considered: all tasks arriving at the same time, tasks arriving in random batches, and tasks arriving one by one sequentially. Four baseline approaches are investigated: random scheduling, round-robin scheduling, earliest scheduling, and minimum execution time scheduling. A comparison of results indicates that the DQN-based scheduling approach effectively addresses the multi-task scheduling problem in cloud manufacturing and performs best among all approaches.
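To make the DQN formulation concrete, the following is a minimal sketch, not the authors' implementation, of how a Deep Q-network could choose a cloud service for each arriving task. The state encoding, network size, reward convention, and helper names (QNet, select_service, train_step) are illustrative assumptions; the paper's actual state, action, and reward design may differ. PyTorch is assumed to be available as the neural-network library.

```python
# Minimal DQN sketch for service selection (illustrative assumptions throughout).
import random
import torch
import torch.nn as nn

N_SERVICES = 5                    # assumed number of candidate services (= actions)
STATE_DIM = 2 * N_SERVICES + 1    # assumed encoding: per-service load and speed, plus task size

class QNet(nn.Module):
    """Maps a scheduling state to one Q-value per candidate service."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_SERVICES),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_service(state):
    """Epsilon-greedy choice of a service index for the current task."""
    if random.random() < epsilon:
        return random.randrange(N_SERVICES)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def train_step(states, actions, rewards, next_states, dones):
    """One DQN update: fit Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the reward could, for example, be the negative completion time of the dispatched task, so that maximizing return corresponds to minimizing makespan; that choice is an assumption, not taken from the paper.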
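For comparison, the four baseline rules named in the abstract can be written as simple dispatch functions. This is a minimal sketch under assumed Service fields (speed, available_at) and an assumed execution-time estimate (task size divided by speed); it illustrates only the decision rules, not the paper's experimental setup.

```python
# Minimal sketch of the four baseline dispatch rules, using assumed Service fields.
import random
from dataclasses import dataclass

@dataclass
class Service:
    speed: float         # assumed processing speed (work units per time unit)
    available_at: float  # assumed time at which the service next becomes idle

def random_rule(services, task_size, state):
    # Random scheduling: pick any candidate service uniformly at random.
    return random.randrange(len(services))

def round_robin_rule(services, task_size, state):
    # Round-robin scheduling: cycle through the services in a fixed order.
    state["next"] = (state.get("next", -1) + 1) % len(services)
    return state["next"]

def earliest_rule(services, task_size, state):
    # Earliest scheduling: pick the service that becomes available soonest.
    return min(range(len(services)), key=lambda i: services[i].available_at)

def min_execution_time_rule(services, task_size, state):
    # Minimum execution time scheduling: pick the service with the shortest
    # estimated execution time (approximated here as task size / speed).
    return min(range(len(services)), key=lambda i: task_size / services[i].speed)

# Usage: dispatch one arriving task of size 8.0 with each rule.
services = [Service(speed=2.0, available_at=3.0), Service(speed=4.0, available_at=6.0)]
state = {}
for rule in (random_rule, round_robin_rule, earliest_rule, min_execution_time_rule):
    print(rule.__name__, "->", rule(services, 8.0, state))
```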
Source Journal
CiteScore: 6.80
Self-citation rate: 20.00%
Number of articles: 126
Review time: 12 months
Journal description: Areas of interest include, but are not limited to: Additive manufacturing; Advanced materials and processing; Assembly; Biomedical manufacturing; Bulk deformation processes (e.g., extrusion, forging, wire drawing, etc.); CAD/CAM/CAE; Computer-integrated manufacturing; Control and automation; Cyber-physical systems in manufacturing; Data science-enhanced manufacturing; Design for manufacturing; Electrical and electrochemical machining; Grinding and abrasive processes; Injection molding and other polymer fabrication processes; Inspection and quality control; Laser processes; Machine tool dynamics; Machining processes; Materials handling; Metrology; Micro- and nano-machining and processing; Modeling and simulation; Nontraditional manufacturing processes; Plant engineering and maintenance; Powder processing; Precision and ultra-precision machining; Process engineering; Process planning; Production systems optimization; Rapid prototyping and solid freeform fabrication; Robotics and flexible tooling; Sensing, monitoring, and diagnostics; Sheet and tube metal forming; Sustainable manufacturing; Tribology in manufacturing; Welding and joining