Title: Deep Reinforcement Learning Assisted Genetic Programming Ensemble Hyper-Heuristics for Dynamic Scheduling of Container Port Trucks
Authors: Xinan Chen; Ruibin Bai; Rong Qu; Jing Dong; Yaochu Jin
DOI: 10.1109/TEVC.2024.3381042
Journal: IEEE Transactions on Evolutionary Computation, vol. 29, no. 4, pp. 1371-1385 (Q1, Computer Science, Artificial Intelligence)
Publication date: 2024-03-25
URL: https://ieeexplore.ieee.org/document/10478109/
Citations: 0
Abstract
Efficient truck dispatching is crucial for optimizing container terminal operations in dynamic and complex scenarios. Despite recent progress with more advanced uncertainty-handling techniques, existing approaches still generalize poorly and require considerable expertise and manual intervention in algorithm design. In this work, we present deep reinforcement learning-assisted genetic programming hyper-heuristics (DRL-GPHHs) and their ensemble variant (DRL-GPEHH). These frameworks use a reinforcement learning (RL) agent to orchestrate a set of auto-generated genetic programming (GP) low-level heuristics, leveraging their collective intelligence to improve robustness and raise the level of automation in algorithm development. DRL-GPEHH, notably, excels through its concurrent integration of a GP heuristic ensemble, achieving enhanced adaptability and performance in complex, dynamic optimization tasks. This method effectively navigates the traditional convergence issues of deep RL (DRL) in sparse-reward and vast action spaces, while avoiding reliance on expert-designed heuristics. It also addresses the inadequate performance of a single GP individual in varying and complex environments, and preserves the inherent interpretability of the GP approach. Evaluations across various real port operational instances highlight the adaptability and efficacy of our frameworks. Essentially, the innovations in DRL-GPHH and DRL-GPEHH reveal the synergistic potential of RL and GP in dynamic truck dispatching, yielding transformative impacts on algorithm design and significantly advancing solutions to complex real-world optimization problems.
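The core architecture the abstract describes can be sketched as follows: a learning agent repeatedly chooses which low-level heuristic should make the next dispatch decision. The sketch below is purely illustrative and assumes much simpler components than the paper's: two hand-written priority rules (`nearest_first`, `urgent_first`) stand in for GP-evolved heuristic trees, and a basic epsilon-greedy bandit stands in for the deep RL policy. All names and the toy job format are hypothetical, not from the paper.

```python
import random

# Low-level heuristics: each scores a (truck position, job) pair; higher is
# better. In DRL-GPHH/DRL-GPEHH these would be trees evolved by genetic
# programming rather than the fixed rules shown here.
def nearest_first(truck_pos, job):
    return -abs(truck_pos - job["pos"])   # prefer short empty travel

def urgent_first(truck_pos, job):
    return -job["due"]                    # prefer the tightest deadline

HEURISTICS = [nearest_first, urgent_first]

def dispatch(jobs, truck_pos, heuristic_idx):
    """Assign the job ranked best by the heuristic the agent selected."""
    h = HEURISTICS[heuristic_idx]
    return max(jobs, key=lambda j: h(truck_pos, j))

class EpsilonGreedyAgent:
    """Stand-in for the DRL policy: a simple bandit over heuristics,
    estimating each heuristic's average reward by incremental updates."""
    def __init__(self, n_heuristics, eps=0.1):
        self.eps = eps
        self.q = [0.0] * n_heuristics     # running value estimates
        self.n = [0] * n_heuristics       # selection counts

    def act(self):
        if random.random() < self.eps:    # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def learn(self, action, reward):
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]
```

In use, each time a truck becomes idle the agent picks a heuristic via `act()`, `dispatch()` applies it to the pending jobs, and an environment-derived reward (e.g. negative delay) is fed back through `learn()`. The ensemble idea in DRL-GPEHH goes further by combining several heuristics' rankings per decision rather than selecting just one.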
Journal Introduction:
The IEEE Transactions on Evolutionary Computation is published by the IEEE Computational Intelligence Society on behalf of 13 societies: Circuits and Systems; Computer; Control Systems; Engineering in Medicine and Biology; Industrial Electronics; Industry Applications; Lasers and Electro-Optics; Oceanic Engineering; Power Engineering; Robotics and Automation; Signal Processing; Social Implications of Technology; and Systems, Man, and Cybernetics. The journal publishes original papers in evolutionary computation and related areas such as nature-inspired algorithms, population-based methods, optimization, and hybrid systems. It welcomes both purely theoretical papers and application papers that provide general insights into these areas of computation.