T. Ai, L. Huang, R.J. Song, H.F. Huang, F. Jiao, W.G. Ma
{"title":"改进的深度强化学习方法:散货码头泊位和堆场调度优化案例研究","authors":"T. Ai, L. Huang, R.J. Song, H.F. Huang, F. Jiao, W.G. Ma","doi":"10.14743/apem2023.3.474","DOIUrl":null,"url":null,"abstract":"The cornerstone of port production operations is ship handling, necessitating judicious allocation of diverse production resources to enhance the efficiency of loading and unloading operations. This paper introduces an optimisation method based on deep reinforcement learning to schedule berths and yards at a bulk cargo terminal. A Markov Decision Process model is formulated by analysing scheduling processes and unloading operations in bulk port imports business. The study presents an enhanced reinforcement learning algorithm called PS-D3QN (Prioritised Experience Replay and Softmax strategy-based Dueling Double Deep Q-Network), amalgamating the strengths of the Double DQN and Dueling DQN algorithms. The proposed solution is evaluated using actual port data and benchmarked against the other two algorithms mentioned in this paper. The numerical experiments and comparative analysis substantiate that the PS-D3QN algorithm significantly enhances the efficiency of berth and yard scheduling in bulk terminals, reduces the cost of port operation, and eliminates errors associated with manual scheduling. The algorithm presented in this paper can be tailored to address scheduling issues in the fields of production and manufacturing with suitable adjustments, including problems like the job shop scheduling problem and its extensions.","PeriodicalId":445710,"journal":{"name":"Advances in Production Engineering & Management","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An improved deep reinforcement learning approach: A case study for optimisation of berth and yard scheduling for bulk cargo terminal\",\"authors\":\"T. Ai, L. 
Huang, R.J. Song, H.F. Huang, F. Jiao, W.G. Ma\",\"doi\":\"10.14743/apem2023.3.474\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The cornerstone of port production operations is ship handling, necessitating judicious allocation of diverse production resources to enhance the efficiency of loading and unloading operations. This paper introduces an optimisation method based on deep reinforcement learning to schedule berths and yards at a bulk cargo terminal. A Markov Decision Process model is formulated by analysing scheduling processes and unloading operations in bulk port imports business. The study presents an enhanced reinforcement learning algorithm called PS-D3QN (Prioritised Experience Replay and Softmax strategy-based Dueling Double Deep Q-Network), amalgamating the strengths of the Double DQN and Dueling DQN algorithms. The proposed solution is evaluated using actual port data and benchmarked against the other two algorithms mentioned in this paper. The numerical experiments and comparative analysis substantiate that the PS-D3QN algorithm significantly enhances the efficiency of berth and yard scheduling in bulk terminals, reduces the cost of port operation, and eliminates errors associated with manual scheduling. 
The algorithm presented in this paper can be tailored to address scheduling issues in the fields of production and manufacturing with suitable adjustments, including problems like the job shop scheduling problem and its extensions.\",\"PeriodicalId\":445710,\"journal\":{\"name\":\"Advances in Production Engineering & Management\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Production Engineering & Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14743/apem2023.3.474\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Production Engineering & Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14743/apem2023.3.474","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An improved deep reinforcement learning approach: A case study for optimisation of berth and yard scheduling for bulk cargo terminal
The cornerstone of port production operations is ship handling, which requires judicious allocation of diverse production resources to enhance the efficiency of loading and unloading operations. This paper introduces an optimisation method based on deep reinforcement learning for scheduling berths and yards at a bulk cargo terminal. A Markov Decision Process model is formulated by analysing the scheduling processes and unloading operations in the import business of bulk cargo ports. The study presents an enhanced reinforcement learning algorithm called PS-D3QN (Prioritised Experience Replay and Softmax strategy-based Dueling Double Deep Q-Network), which combines the strengths of the Double DQN and Dueling DQN algorithms. The proposed solution is evaluated on actual port data and benchmarked against the other two algorithms discussed in this paper. The numerical experiments and comparative analysis substantiate that the PS-D3QN algorithm significantly enhances the efficiency of berth and yard scheduling in bulk terminals, reduces the cost of port operation, and eliminates errors associated with manual scheduling. With suitable adjustments, the algorithm presented in this paper can be tailored to scheduling problems in production and manufacturing, such as the job shop scheduling problem and its extensions.
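For readers unfamiliar with the components named in the abstract, the sketch below illustrates (in isolation, not as the paper's actual implementation) the three ideas PS-D3QN combines: the dueling aggregation of state value and action advantages, the Double DQN target in which the online network selects the next action and the target network evaluates it, and softmax (Boltzmann) action selection plus prioritised-replay sampling probabilities. All function names and numeric values here are illustrative assumptions.

```python
import math

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a' A(s,a'))
    mean_a = sum(advantages) / len(advantages)
    return [value + a - mean_a for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    # Double DQN: the online network picks the greedy next action,
    # the target network supplies its value estimate
    best = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[best]

def softmax_policy(q_values, tau=1.0):
    # Softmax (Boltzmann) exploration: action probabilities from Q-values
    # with temperature tau (lower tau -> greedier)
    exps = [math.exp(q / tau) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

def per_probabilities(td_errors, alpha=0.6):
    # Prioritised experience replay: sample transitions with probability
    # proportional to |TD error|^alpha
    prios = [abs(e) ** alpha for e in td_errors]
    z = sum(prios)
    return [p / z for p in prios]
```

In a full agent these pieces interact: the dueling head produces Q-values, the softmax policy selects scheduling actions, and replay priorities are refreshed from the TD errors of each sampled minibatch; the paper's berth-and-yard MDP would supply the states, actions, and rewards.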