{"title":"Dynamic crowdsourcing problem in urban–rural distribution using the learning-based approach","authors":"Zongcheng Zhang , Maoliang Ran , Yanru Chen , M.I.M. Wahab , Mujin Gao , Yangsheng Jiang","doi":"10.1016/j.cor.2025.107292","DOIUrl":"10.1016/j.cor.2025.107292","url":null,"abstract":"<div><div>Inspired by real-world urban and rural distribution logistics scenarios, this study explores the dynamic crowdsourcing multi-depot pickup and delivery problem (DCMDPDP) through an online platform (OCP), where requests and crowdsourced vehicles arrive dynamically. Vehicles either collect from multiple depots for deliveries or pick up from customers to depots. To maximize the OCP’s daily total gain, defined as the net value of completed task revenue minus vehicle compensation costs, we integrate anticipated future gains into each decision-making process and formulate the DCMDPDP as a Markov decision process. A learning-based hybrid heuristic algorithm is proposed for the DCMDPDP. Specifically, we develop an enhanced adaptive large neighborhood search algorithm leveraging a heat map to batch orders into multiple groups and assign them to depots, where the heat map is learned offline using a graph convolutional residual network with an attention mechanism. A value learning-based algorithm is also developed to obtain optimal matches between order batches and vehicles, and near-optimal travel routes. Experimental results demonstrate that the proposed algorithm improves the OCP total gain by 46.09%, 57.13%, 0.49%, 2.45%, 1.08%, and 2.77% over six benchmarks. Furthermore, the proposed algorithm reduces the number of unserved customers to 7.83 on average, 2.19–167.52 fewer than the six benchmarks. Moreover, extensive experiments validate that the proposed algorithm is strongly generalizable in handling instances with varying customer sizes and different temporal, spatial, and demand distributions.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107292"},"PeriodicalIF":4.3,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145262358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving the strip packing problem with a decomposition framework and a generic solver: Implementation, tuning, and reinforcement-learning-based hybridization","authors":"Fatih Burak Akçay, Maxence Delorme","doi":"10.1016/j.cor.2025.107276","DOIUrl":"10.1016/j.cor.2025.107276","url":null,"abstract":"<div><div>In the strip packing problem, the objective is to pack a set of two-dimensional items into a strip of fixed width such that the total height of the packing is minimized. The current state-of-the-art exact approach for the problem uses a decomposition framework in which the main problem (MP) fixes the item abscissas and the strip height, whereas the subproblem (SP) determines whether a set of item ordinates resulting in a feasible packing exists. Even though this decomposition framework has already been used several times in the literature, implementation details were often obfuscated, limiting the reach of the approach. We address this issue by thoroughly describing and testing various builds for this framework, investigating important features such as the way to forbid infeasible solutions in the MP (e.g., by rejecting them or through no-good cuts) and the techniques used to solve the MP and the SP. One of our findings is that a minor implementation tweak such as changing the random seed between two MP iterations can bring the same level of improvement as a more involved feature such as strengthening the no-good cuts. From our extensive experiments, we identify two versions of the framework that produce complementary results: one where the main problem is solved with integer linear programming and the other where it is solved with constraint programming. We then train a reinforcement learning agent to find the best hybridization of these two algorithms and show that the resulting approach obtains state-of-the-art results on benchmark instances.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107276"},"PeriodicalIF":4.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145262361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimizing the weighted number of tardy jobs: data-driven heuristic for single-machine scheduling","authors":"Nikolai Antonov , Přemysl Šůcha , Mikoláš Janota , Jan Hůla","doi":"10.1016/j.cor.2025.107281","DOIUrl":"10.1016/j.cor.2025.107281","url":null,"abstract":"<div><div>Existing research on single-machine scheduling is largely focused on exact algorithms, which perform well on typical instances but can significantly deteriorate on certain regions of the problem space. In contrast, data-driven approaches provide strong and scalable performance when tailored to the structure of specific datasets. Leveraging this idea, we focus on a single-machine scheduling problem where each job is defined by its weight, duration, due date, and deadline, aiming to minimize the total weight of tardy jobs. We introduce a novel data-driven scheduling heuristic that combines machine learning with problem-specific characteristics, ensuring feasible solutions, which is a common challenge for ML-based algorithms. Experimental results demonstrate that our approach significantly outperforms the state-of-the-art in terms of optimality gap, number of optimal solutions, and adaptability across varied data scenarios, highlighting its flexibility for practical applications. In addition, we conduct a systematic exploration of ML models, addressing a common gap in similar studies by offering a detailed model selection process and demonstrating why the chosen model is the best fit.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107281"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145217197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal placement of electric vehicle slow-charging stations: A continuous facility location problem under uncertainty","authors":"H.W. Ljósheim, S. Jenkins, K.D. Searle, J.K. Wolff","doi":"10.1016/j.cor.2025.107289","DOIUrl":"10.1016/j.cor.2025.107289","url":null,"abstract":"<div><div>Electric vehicles (EVs) are becoming a key mechanism to reduce emissions in the transportation industry, and hence contribute to the green transition. In this paper, we present a mathematical programming model which determines the optimal placement of EV charging stations such that chargers are placed in the most cost-efficient way possible for all stakeholders, assuming additionally that EV charging demand is inherently stochastic in nature. The model is formulated as a two-stage, continuous location–allocation model in the form of a generalised Weber problem in two dimensions. However, this formulation is non-convex and notoriously difficult to solve. We therefore propose a discretisation procedure to find high-quality solutions in reasonable time. The discretisation procedure shows strong performance across a variety of computational experiments using randomly generated scenarios, maintaining robustness in terms of the objective value and overall solution quality.</div><div>A part of this solution procedure was entered into the 15th AIMMS-MOPTA Optimisation Modelling Competition.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107289"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145217201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximum capture location problem with random utilities and overflow penalties","authors":"Gonzalo Méndez-Vogel , Sebastián Dávila-Gálvez , Pedro Jara-Moroni , Jorge Zamorano , Vladimir Marianov","doi":"10.1016/j.cor.2025.107285","DOIUrl":"10.1016/j.cor.2025.107285","url":null,"abstract":"<div><div>This paper extends the maximum capture location problem with random utilities by incorporating the facility capacity and introducing penalties for overflows into the objective function. We propose a method that combines the key features of two state-of-the-art approaches for the uncapacitated case, which are adapted to solve the problem at hand. The first approach is a linear reformulation that extends the best-known linearization in the literature, which is based on variable substitution. The second approach is a reformulation that incorporates outer-approximation cuts and enhanced submodular cuts, solving the problem via a branch-and-cut approach. We tested the performance of the three approaches on several instances and show that the combined method outperforms each of the preceding techniques. The optimal location patterns of the model are also analysed, and it is found that considering the overflow and overflow penalties in the objective function affects the location decisions. The resulting optimal locations align more closely with practical scenarios.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107285"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145217202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling and algorithm for job shop scheduling with batch operations in semiconductor fabs","authors":"Wen Ma , Gedong Jiang , Nuogang Sun , Chaoqing Min , Xuesong Mei","doi":"10.1016/j.cor.2025.107287","DOIUrl":"10.1016/j.cor.2025.107287","url":null,"abstract":"<div><div>Semiconductor manufacturing presents a highly complex Job Shop Scheduling Problem (JSP) due to the diversity and large number of processing machines, as well as the intricate manufacturing processes including batch and non-batch operations. Existing studies often either overlook batching problems or address them in oversimplified ways, failing to provide effective solutions for large-scale scheduling challenges with batch operations. For this problem, a model for the JSP involving both batching and non-batching processes in semiconductor fabs is first developed. Then, the First Come First Served (FCFS) approach, as an effective rule-based method, is employed to generate high-quality initial solutions. A tailored Constrained Genetic Algorithm (CGA), which embeds constraints into the stages of the genetic algorithm, is proposed to further optimize the solution. The CGA incorporates batch grouping, constrained encoding, constrained crossover, and constrained mutation to effectively handle the sequential constraints of batch and non-batch processes, ensuring the generation of valid solutions. The CGA is validated using the SMT2020 and SMAT2022 datasets across various scales and scenarios. Experimental results demonstrate that the CGA outperforms FCFS, backward simulation, and reinforcement learning. These results highlight the CGA’s effectiveness and robustness in solving complex scheduling problems in semiconductor manufacturing.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107287"},"PeriodicalIF":4.3,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145217130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the periodic service scheduling problem with non-uniform demands","authors":"Elena Fernández , Jörg Kalcsics","doi":"10.1016/j.cor.2025.107280","DOIUrl":"10.1016/j.cor.2025.107280","url":null,"abstract":"<div><div>This paper introduces the Periodic Service Scheduling Problem with Non-uniform Demands, in which the best service policy for a set of customers with periodically recurring demand through a given finite planning horizon has to be determined. Service to customers is provided at every time period by a set of potential service providers, each of them with an activation cost and a capacity. The decisions to be made include which servers to activate at each time period, together with a service schedule and server allocation for every customer that respect the periodicity of customer demand and the capacities of the activated servers, while minimizing the total activation cost. We give a first Integer Linear Programming formulation with one set of decision variables associated with each of the decisions of the problem. Afterwards, we develop a logic-based Benders reformulation where one set of variables is projected out and constraints that guarantee the feasibility of the solutions are introduced. The separation problem for the new set of constraints is studied, and an exact Branch & Logic-Benders-Cut algorithm for the reformulation is proposed together with several variations and enhancements. The particular cases in which all servers are identical and in which all parameters are time-invariant are also studied. Extensive computational experiments demonstrate the superiority of the logic-based Benders reformulation over the first formulation.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107280"},"PeriodicalIF":4.3,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145155374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A unified approach to extract interpretable rules from tree ensembles via Integer Programming","authors":"Lorenzo Bonasera , Emilio Carrizosa","doi":"10.1016/j.cor.2025.107283","DOIUrl":"10.1016/j.cor.2025.107283","url":null,"abstract":"<div><div>Tree ensembles are widely used machine learning models, known for their effectiveness in supervised classification and regression tasks. Their performance derives from aggregating predictions of multiple decision trees, which are renowned for their interpretability properties. However, tree ensemble models do not reliably exhibit interpretable output. Our work aims to extract an optimized list of rules from a trained tree ensemble, providing the user with a condensed, interpretable model that retains most of the predictive power of the full model. Our approach consists of solving a set partitioning problem formulated through Integer Programming. The extracted list of rules is unweighted and defines a partition of the training data, assigning each instance to exactly one rule, and thereby simplifying the explanation process. The proposed method works with tabular or time series data, for both classification and regression tasks, and its flexible formulation can include any arbitrary loss or regularization functions. Our computational experiments offer statistically significant evidence that our method performs comparably to several rule extraction methods in terms of predictive performance and fidelity to the tree ensemble. Moreover, we empirically show that the proposed method effectively extracts interpretable rules from tree ensembles that are designed for time series data.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107283"},"PeriodicalIF":4.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel mathematical model for the scheduling of a zero inventory production: an application of process scheduling in fog computing","authors":"Mani Sharifi , Sharareh Taghipour , Abdolreza Abhari , Maciej Rysz","doi":"10.1016/j.cor.2025.107284","DOIUrl":"10.1016/j.cor.2025.107284","url":null,"abstract":"<div><div>One of the main production-related costs in manufacturing is inventory cost since manufacturing firms allocate a vast area to raw material, semi-processed, and final products in production lines and warehouses. Reducing the volume of these inventories leads to lower production-related costs. This paper presents a novel mathematical model for zero-inventory production scheduling. In this model, the jobs arrive at fixed times and are scheduled on a set of unrelated machines. The jobs have different operations that need to be processed one by one. Since the system has zero inventory, the jobs must be processed immediately upon arrival. Also, whenever a job’s operation is complete, the following operation must instantly start (no wait time). That operation is outsourced if no machines are available to process any of the job’s operations. The jobs’ operations are dispatched to the machines from a dispatching center, and there is a latency between the dispatching center, the machines, and the outsourcing center. We present a mixed-integer non-linear programming (MINLP) model to formulate this problem. Then, the MINLP model is turned into a mixed-integer linear programming (MILP) model by linearizing its constraints. Since many production scheduling problems are known to be NP-hard, particularly those involving unrelated parallel machines, precedence constraints, and time-dependent decisions like ours, we adopt two metaheuristics to solve the problem for large-scale cases where exact methods are computationally inefficient. The first is a Genetic Algorithm (GA), and the second is a Teaching-Learning-Based Optimization (TLBO) algorithm. The performance of these algorithms is tested against the optimal solutions obtained from CPLEX for a set of small-scale problems. We consider a real case study, an image processing system, to validate the proposed developments (the MILP model and the GA). The results show that the presented model and algorithm can reduce the system’s total cost by about 12.57% compared to the existing online dispatching rules.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107284"},"PeriodicalIF":4.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing high-tech product take-back schemes in a closed-loop supply chain","authors":"Fatemeh Keshavarz-Ghorbani , Mohamad Y. Jaber , Seyed Hamid Reza Pasandideh","doi":"10.1016/j.cor.2025.107282","DOIUrl":"10.1016/j.cor.2025.107282","url":null,"abstract":"<div><div>Frequent product development is a solution to the shortened product lifecycles in the consumer electronics industry. It enables companies to maintain competitiveness and strengthen their market share. However, environmental concerns bring reverse logistics practices into focus. A take-back policy is a strategic reverse logistics activity known to foster market share; however, it poses various challenges and uncertainties. Considering uncertain demand, we introduce an innovative adoption model with two distinct take-back policies, trade-in and credit, to address challenges in multi-generation production planning. Inspired by real-world practices of companies like Apple and Samsung, our model first examines how trade-in programs drive repeat purchases and enhance market share, while credit-based programs attract new customers. It then captures changes in demand, production planning, recovery decisions, and internal competition among multiple product generations. Distinct from previous conclusions, this study explores how producers can strategically manage demand for new generations to slow diffusion, thereby increasing refurbishment and recycling volumes over time. Our findings highlight the pivotal role of adaptive pricing strategies and production scalability in maximizing profitability and promoting sustainability in competitive high-tech industries.</div></div>","PeriodicalId":10542,"journal":{"name":"Computers & Operations Research","volume":"185 ","pages":"Article 107282"},"PeriodicalIF":4.3,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}