{"title":"Mixed-production flexible assembly job shop scheduling considering parallel assembly sequence variations under dual-resource constraints using multi-objective hybrid memetic algorithm","authors":"Xin Lu, Cong Lu","doi":"10.1016/j.cor.2024.106932","DOIUrl":"10.1016/j.cor.2024.106932","url":null,"abstract":"<div><div>In this study, a mixed-production flexible assembly job shop scheduling problem considering parallel assembly sequence variations under dual-resource constraints (MFAJSS-PASV-DRC) is proposed to achieve simultaneous optimization of the part processing sequence and the assembly sequence in mixed-production. By analyzing the MFAJSS-PASV-DRC problem, an integrated mathematical model that considers the interactive effects between the part processing sequence and the assembly sequence under dual-resource constraints in mixed-production is established, with the optimization objectives of minimizing the total production completion time, the total inventory time, and the total labor cost during the production process. Based on the above, a multi-objective hybrid memetic algorithm (MoHMA) is proposed to solve the MFAJSS-PASV-DRC. In MoHMA, a four-layer segmented hybrid chromosome encoding structure is designed, a mixed initialization strategy (MIX3) is applied to obtain a high-quality initial population, and two evolutionary methods are used to generate offspring chromosomes. Meanwhile, a variable neighborhood search (VNS) incorporating five local search methods is designed to prevent MoHMA from becoming stuck in a local optimum. The effectiveness of MIX3 and the VNS is verified by an ablation experiment. An elite retention strategy is then used to improve the quality of the non-dominated solutions. In the case study, the Taguchi method is applied to obtain the best combination of parameters for the MoHMA algorithm. After that, the superiority of the proposed MFAJSS-PASV-DRC in improving production efficiency is verified, and the effectiveness of MoHMA in solving MFAJSS-PASV-DRC problems of different scales is demonstrated by comparison with other algorithms.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"176 ","pages":"Article 106932"},"PeriodicalIF":4.1,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143165825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven robust flexible personnel scheduling","authors":"Zilu Wang , Zhixing Luo , Huaxiao Shen","doi":"10.1016/j.cor.2024.106935","DOIUrl":"10.1016/j.cor.2024.106935","url":null,"abstract":"<div><div>Personnel scheduling in various industries often faces challenges due to unpredictable workloads. This paper focuses on the general flexible personnel scheduling problem at the operational level, which is characterized by uncertain demand and limited knowledge of the true distribution of this demand. To address this issue, we propose a distributionally robust model that utilizes the Wasserstein ambiguity set. This model is designed to maintain service levels across the worst-case distribution scenarios of random demand. In addition, we introduce a robust satisficing model that is oriented towards specific targets, offering practical applicability in real-world situations. Both models leverage empirical distributions derived from historical data, enabling the generation of robust personnel schedules that are responsive to uncertain demand, even when data availability is limited. We demonstrate that these robust models can be transformed into tractable counterparts. Moreover, we develop an exact depth-first search algorithm for identifying feasible daily schedules. Through a comprehensive case study and experiments using real-world data, we showcase the effectiveness and advantages of our proposed models and algorithms. The robustness of our models is thoroughly evaluated, providing valuable management insights and demonstrating their ability to tackle scheduling challenges in uncertain environments.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"176 ","pages":"Article 106935"},"PeriodicalIF":4.1,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143166462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equity-driven facility location: A two-stage robust optimization approach","authors":"Amin Ahmadi Digehsara , Menglei Ji , Amir Ardestani-Jaafari , Hoda Bidkhori","doi":"10.1016/j.cor.2024.106920","DOIUrl":"10.1016/j.cor.2024.106920","url":null,"abstract":"<div><div>This paper explores the computational challenge of incorporating equity in p-median facility location models under uncertain demand and discusses how two-stage robust programming can be employed to address the challenge. Our research evaluates various equity measures appropriate for facility location modeling and proposes a novel approach to reformulating the problem into a two-stage robust optimization framework, mitigating the computational burden introduced by incorporating equity and uncertainty into these models. We provide two solution algorithms: an exact and an inexact column-and-constraint generation (C&CG) method. Our findings suggest that although the exact C&CG method generally outperforms the inexact approach, both methods perform well when the number of variables is small, with the inexact C&CG demonstrating a slight advantage in computational time. We further conduct a detailed evaluation of the tractability of our reformulated model and the effectiveness of various equity measures through a real-world case study of Metro Vancouver.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"176 ","pages":"Article 106920"},"PeriodicalIF":4.1,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143166420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph neural networks for job shop scheduling problems: A survey","authors":"Igor G. Smit , Jianan Zhou , Robbert Reijnen , Yaoxin Wu , Jian Chen , Cong Zhang , Zaharah Bukhsh , Yingqian Zhang , Wim Nuijten","doi":"10.1016/j.cor.2024.106914","DOIUrl":"10.1016/j.cor.2024.106914","url":null,"abstract":"<div><div>Job shop scheduling problems (JSSPs) represent a critical and challenging class of combinatorial optimization problems. Recent years have witnessed a rapid increase in the application of graph neural networks (GNNs) to solve JSSPs, although a systematic survey of the relevant literature is still lacking. This paper aims to thoroughly review prevailing GNN methods for different types of JSSPs and the closely related flow-shop scheduling problems (FSPs), especially those leveraging deep reinforcement learning (DRL). We begin by presenting the graph representations of various JSSPs, followed by an introduction to the most commonly used GNN architectures. We then review current GNN-based methods for each problem type, highlighting key technical elements such as graph representations, GNN architectures, GNN tasks, and training algorithms. Finally, we summarize and analyze the advantages and limitations of GNNs in solving JSSPs and provide potential future research opportunities. We hope this survey can motivate and inspire the development of more powerful GNN-based approaches for tackling JSSPs and other scheduling problems.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"176 ","pages":"Article 106914"},"PeriodicalIF":4.1,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143166461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A literature review of reinforcement learning methods applied to job-shop scheduling problems","authors":"Xiehui Zhang, Guang-Yu Zhu","doi":"10.1016/j.cor.2024.106929","DOIUrl":"10.1016/j.cor.2024.106929","url":null,"abstract":"<div><div>The job-shop scheduling problem (JSP) is one of the most famous production scheduling problems, and it is an NP-hard problem. Reinforcement learning (RL), a machine learning method capable of feedback-based learning, holds great potential for solving shop scheduling problems. In this paper, we review the literature on applying RL to solve JSPs, analyzing it in terms of RL methods, the number of agents, and agent upgrade strategies. We discuss three major issues faced by RL methods for solving JSPs: the curse of dimensionality, generalizability, and training time. The interconnectedness of these three issues is revealed, and the main factors affecting them are identified. By discussing current solutions to these issues as well as other remaining challenges, we give suggestions for solving these problems and propose future research trends.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106929"},"PeriodicalIF":4.1,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142759219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An accelerated Benders decomposition method for distributionally robust sustainable medical waste location and transportation problem","authors":"Zihan Quan , Yankui Liu , Aixia Chen","doi":"10.1016/j.cor.2024.106895","DOIUrl":"10.1016/j.cor.2024.106895","url":null,"abstract":"<div><div>This study addresses the sustainable medical waste location and transportation (SMWLT) problem from the viewpoint of social risk, environmental impact, and economic performance, where model uncertainty includes risk and transportation costs. In practice, it is usually hard to obtain the exact probability distribution of uncertain parameters. To address this challenge, this study first constructs an ambiguity set to model the partial distribution information of uncertain parameters. Based on the constructed ambiguity set, this study develops a new multi-objective distributionally robust chance-constrained (DRCC) model for the SMWLT problem. Subsequently, this study adopts the robust counterpart (RC) approximation method to reformulate the proposed DRCC model as a computationally tractable mixed-integer linear programming (MILP) model. Furthermore, an accelerated Benders decomposition (BD) enhanced by valid inequalities is designed to solve the resulting MILP model, which significantly improves the solution efficiency compared with the classical BD algorithm and CPLEX solver. Finally, a practical case in Chongqing, China, is presented to illustrate the effectiveness of our DRCC model and the accelerated BD solution method.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106895"},"PeriodicalIF":4.1,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A VNS method for the conditional p-next center problem","authors":"Jelena Tasić, Zorica Dražić, Zorica Stanimirović","doi":"10.1016/j.cor.2024.106916","DOIUrl":"10.1016/j.cor.2024.106916","url":null,"abstract":"<div><div>This paper considers the conditional <span><math><mi>p</mi></math></span>-next center problem (CPNCP) and proposes a metaheuristic method as a solution approach. The <span><math><mi>p</mi></math></span>-next center problem (PNCP) is an extension of the classical <span><math><mi>p</mi></math></span>-center problem that captures real-life situations when centers suddenly fail due to an accident or some other problem. When a center failure happens, the customers allocated to the closed center are redirected to the center closest to the closed one, called the backup center. On the other hand, when a service network expands, some of the existing centers are usually retained and a number of new centers are opened. The conditional <span><math><mi>p</mi></math></span>-next center problem involves both of these aspects that arise in practice and, to the best of our knowledge, has not been considered in the literature so far. Since the CPNCP is NP-hard, a metaheuristic algorithm based on Variable Neighborhood Search (VNS) is developed. The proposed VNS includes an efficient implementation of the Fast Interchange heuristic, which enables the VNS to tackle real-life problem dimensions. Exhaustive computational experiments were performed on the modified PNCP test instances from the literature with up to 900 nodes. The obtained results are compared with the results of the exact solver CPLEX. It is shown that the proposed VNS reaches optimal solutions or improves the feasible ones provided by CPLEX in a significantly shorter CPU time. The VNS also quickly returns its best solutions when CPLEX fails to provide a feasible one. In order to investigate the effects of two different approaches in service network planning, the VNS solutions of the CPNCP are compared with the optimal or best-known solutions of the <span><math><mi>p</mi></math></span>-next center problem. In addition, the conducted computational study includes direct comparisons of the results obtained when the proposed VNS is applied to the PNCP (by setting the number of existing centers to 0) with the results of recent solution methods proposed for the PNCP.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106916"},"PeriodicalIF":4.1,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lexicographic optimization-based approaches to learning a representative model for multi-criteria sorting with non-monotonic criteria","authors":"Zhen Zhang , Zhuolin Li , Wenyu Yu","doi":"10.1016/j.cor.2024.106917","DOIUrl":"10.1016/j.cor.2024.106917","url":null,"abstract":"<div><div>Deriving a representative model using value function-based methods from the perspective of preference disaggregation has emerged as a prominent and growing topic in multi-criteria sorting (MCS) problems. A noteworthy observation is that many existing approaches to learning a representative model for MCS problems traditionally assume the monotonicity of criteria, which may not always align with the complexities found in real-world MCS scenarios. Consequently, this paper proposes some approaches to learning a representative model for MCS problems with non-monotonic criteria through the integration of the threshold-based value-driven sorting procedure. To do so, we first define some transformation functions to map the marginal values and category thresholds into a UTA-like functional space. Subsequently, we construct constraint sets to model non-monotonic criteria in MCS problems and develop optimization models to check and rectify the inconsistency of the decision maker’s assignment example preference information. By simultaneously considering the complexity and discriminative power of the models, two distinct lexicographic optimization-based approaches are developed to derive a representative model for MCS problems with non-monotonic criteria. Finally, we offer an illustrative example and conduct comprehensive simulation experiments to demonstrate the feasibility and validity of the proposed approaches.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106917"},"PeriodicalIF":4.1,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Portfolio optimisation: Bridging the gap between theory and practice","authors":"Cristiano Arbex Valle","doi":"10.1016/j.cor.2024.106918","DOIUrl":"10.1016/j.cor.2024.106918","url":null,"abstract":"<div><div>Portfolio optimisation is essential in quantitative investing, but its implementation faces several practical difficulties. One particular challenge is converting optimal portfolio weights into real-life trades in the presence of realistic features, such as transaction costs and integral lots. This is especially important in automated trading, where the entire process happens without human intervention.</div><div>Several works in the literature have extended portfolio optimisation models to account for these features. In this paper, we highlight and illustrate difficulties faced when employing the existing literature in a practical setting, such as computational intractability, numerical imprecision and modelling trade-offs. We then propose a two-stage framework as an alternative approach to address this issue. Its goal is to optimise portfolio weights in the first stage and to generate realistic trades in the second. Through extensive computational experiments, we show that our approach not only mitigates the difficulties discussed above but also can be successfully employed in a realistic scenario.</div><div>By splitting the problem in two, we are able to incorporate new features without adding too much complexity to any single model. With this in mind we model two novel features that are critical to many investment strategies: first, we integrate two classes of assets, futures contracts and equities, into a single framework, with an example illustrating how this can help portfolio managers in enhancing investment strategies. Second, we account for borrowing costs in short positions, which have so far been neglected in the literature but which significantly impact profits in long/short strategies. Even with these new features, our two-stage approach still effectively converts optimal portfolios into actionable trades.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106918"},"PeriodicalIF":4.1,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Q-learning driven multi-objective evolutionary algorithm for worker fatigue dual-resource-constrained distributed hybrid flow shop","authors":"Haonan Song , Junqing Li , Zhaosheng Du , Xin Yu , Ying Xu , Zhixin Zheng , Jiake Li","doi":"10.1016/j.cor.2024.106919","DOIUrl":"10.1016/j.cor.2024.106919","url":null,"abstract":"<div><div>In practical industrial production, workers are often critical resources in manufacturing systems. However, few studies have considered the level of worker fatigue when assigning resources and arranging tasks, which has a negative impact on productivity. To fill this gap, the distributed hybrid flow shop scheduling problem with dual-resource constraints considering worker fatigue (DHFSPW) is introduced in this study. Due to the complexity and diversity of distributed manufacturing and the multi-objective nature of the problem, a Q-learning driven multi-objective evolutionary algorithm (QMOEA) is proposed to simultaneously optimize the makespan and total energy consumption of the DHFSPW. In QMOEA, solutions are represented by a four-dimensional vector, and a decoding heuristic that accounts for real-time worker productivity is proposed. Additionally, three problem-specific initialization heuristics are developed to enhance convergence and diversity capabilities. Moreover, encoding-based crossover, mirror crossover and balanced mutation methods are presented to improve the algorithm’s exploitation capabilities. Furthermore, a Q-learning based local search is employed to explore promising nondominated solutions across different dimensions. Finally, the QMOEA is assessed using a set of randomly generated instances, and a detailed comparison with state-of-the-art algorithms is performed to demonstrate its efficiency and robustness.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"175 ","pages":"Article 106919"},"PeriodicalIF":4.1,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}