{"title":"Balancing cost and influential ability under structural failures via a multi-objective approach","authors":"Shuai Wang , Junru Tang , Xiuli Bai","doi":"10.1016/j.swevo.2026.102329","DOIUrl":"10.1016/j.swevo.2026.102329","url":null,"abstract":"<div><div>Influence maximization and network robustness enhancement have been studied as optimization and information-mining tasks in complex systems, attracting increasing attention in recent studies. The former seeks to identify a set of seed nodes that maximizes information spread, and the latter aims to preserve network functionality under failures or attacks. Building on these concepts, the Robust Influence Maximization (RIM) problem has been introduced to jointly consider the robustness of the influence spread process. However, existing research primarily adopts single-objective approaches and largely overlooks the cost of node selection. In this work, an extended formulation of the RIM problem is developed that integrates node costs, framing it as a multi-objective optimization problem. The objective is to simultaneously maximize robust influential ability and minimize cost under cascading failure scenarios, a setting not addressed in previous studies. To solve this problem, a novel multi-objective algorithm, MOALO-CRIM, is introduced; it uses an effective random-walk and elite mechanism to balance the trade-off between influence and cost. Experiments on both real-world and synthetic networks demonstrate the effectiveness of the proposed approach: the algorithm achieves high-quality Pareto fronts with good convergence and diversity. Additionally, TOPSIS is applied to select representative solutions from the Pareto set, supporting informed decision-making by stakeholders.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"102 ","pages":"Article 102329"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147423705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved QPSO with Darwinian evolution theory for three-dimensional path planning of robots","authors":"Mengfan Wang , Xinglin Chen , Siyu Liu , Sisindisiwe Nomalanga Ncube , Ruotong Ming , Fuqiang Liu , Jun Luo , Huayan Pu","doi":"10.1016/j.swevo.2026.102334","DOIUrl":"10.1016/j.swevo.2026.102334","url":null,"abstract":"<div><div>Effective three-dimensional (3D) path planning is crucial for robots to accomplish missions in complex environments. Swarm optimization algorithms, important methods for path planning, face challenges such as local optima and high sensitivity to initial particle positions. Therefore, this paper proposes a competitive-assimilation strategy based on Darwinian evolution theory and combines it with the Quantum-behaved Particle Swarm Optimization (QPSO) algorithm to address these challenges in 3D path planning. In addition, a novel adaptive law and the Coyote Optimization Algorithm (COA) are introduced to further strengthen the local search capability. In simulation experiments and statistical validation tests against recent related algorithms, the two proposed methods demonstrate superior path quality (mean-value reductions of 1.51% and 3.44%, respectively) and enhanced algorithmic stability (standard-deviation reductions of 55.41% and 86.88%, respectively). These results substantiate the effectiveness and superiority of the proposed methodology in 3D path planning applications.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"102 ","pages":"Article 102334"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147423709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolutionary reinforcement learning with weight-freezing and Markov Blanket-based dimensionality reduction","authors":"Oladayo S. Ajani, Jong Taek Lee, Byungchul Tak","doi":"10.1016/j.swevo.2026.102347","DOIUrl":"10.1016/j.swevo.2026.102347","url":null,"abstract":"<div><div>Deep Neuroevolution facilitated through Evolutionary Algorithms (EAs) has demonstrated considerable potential as a scalable alternative to gradient-based optimization in Deep Reinforcement Learning (DRL), particularly because EAs are highly parallelizable. Despite their benefits, EAs struggle with the high-dimensional search space of Deep Neural Networks (DNNs), and dedicated mechanisms for handling large-scale optimization in EAs often fail due to the dynamic nature of DRL environments. This paper introduces a novel representation-level approach that integrates weight freezing and Markov Blankets (MB). Specifically, the Simultaneous Markov Blanket (STMB) algorithm is used to identify a minimal set of conditioning weights (intrinsic weights) that makes the objective function independent of all other weights. Consequently, during the evolutionary optimization process, only the intrinsic weights are optimized while the non-intrinsic weights are frozen. By freezing the non-intrinsic weights, we significantly reduce the optimization space and focus the evolutionary search on the MB-learned intrinsic weights only. We incorporate this mechanism into a standard Genetic Algorithm and evaluate it across seven DRL benchmarks. Results show that our method reduces parameter dimensionality by an average of over 97%, while improving or maintaining policy performance relative to other baseline EAs. The performance edge of the proposed method is also demonstrated through comparative experiments with other dimensionality-reduced training methods as well as traditional and hybrid DRL algorithms in the literature. 
This work highlights the viability of combining probabilistic graphical methods and evolutionary computation to overcome scalability challenges in Evolutionary DRL.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"103 ","pages":"Article 102347"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147426634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional diffusion with gradient guidance for high-dimensional expensive multi-objective optimization","authors":"Shikun Chen","doi":"10.1016/j.swevo.2026.102340","DOIUrl":"10.1016/j.swevo.2026.102340","url":null,"abstract":"<div><div>Multi-objective optimization with expensive function evaluations demands efficient use of limited computational budgets. Existing surrogate-assisted evolutionary algorithms rely on Gaussian processes or radial basis functions; however, these methods suffer from cubic computational complexity and degrade in high-dimensional decision spaces. Here, we propose a conditional diffusion framework that models the Pareto set as a learnable probability distribution rather than a discrete point collection. Our approach consists of three components: a transformer-based diffusion model that generates candidate solutions based on preference vectors, a gradient-guided sampling mechanism that incorporates surrogate-derived descent directions during reverse diffusion, and an entropy-weighted acquisition ensemble for batch selection. The diffusion model learns to map noise samples directly to Pareto-optimal regions. In contrast to isotropic mutation operators, the gradient guidance steers generation toward improved objective values while the repulsion mechanism preserves solution diversity. We evaluate the proposed method on multiple benchmark suites with decision dimensions up to 100 and objective counts up to 10. 
Results demonstrate superior performance compared to state-of-the-art methods under identical evaluation budgets.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"102 ","pages":"Article 102340"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147423676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A reinforcement learning-based artificial bee colony algorithm for resolving the UAV logistics path planning problem in a 3D space","authors":"Runkai Liu , Wei Hu , Witold Pedrycz , Yanjie Song , Lining Xing , Yongguang Yu","doi":"10.1016/j.swevo.2026.102328","DOIUrl":"10.1016/j.swevo.2026.102328","url":null,"abstract":"<div><div>Unmanned Aerial Vehicles (UAVs), with their distinctive aerial mobility and autonomy, play a pivotal role in logistics and distribution, post-disaster search and rescue, and power inspection. Particularly in the logistics and distribution industry, this novel form of delivery not only reduces labor costs but also enhances the flexibility and responsiveness of the logistics network. However, UAV flights can be impeded by obstacles such as buildings and mountains, so ensuring flight safety while planning an economically efficient distribution path has become a crucial challenge in UAV logistics. To tackle the three-dimensional UAV logistics path planning problem (3D-ULPP), a comprehensive model is developed that incorporates a mountainous environment and considers collision-avoidance constraints and the maximum climb angle of the UAV. A multi-strategy dual-population artificial bee colony algorithm based on reinforcement learning (RL-MDABC) is then proposed. RL-MDABC generates an initial population using heuristic rules derived from the departure and destination locations, significantly enhancing the quality of the initial food sources. During the employed-bee search process, Q-learning is used to determine which search strategy each employed bee applies; these strategies comprise global-optimal-guidance search, local circular-region search, and spiral-approximation search. Additionally, the algorithm employs a dual-population approach in the onlooker-bee stage, combining the population obtained from the employed-bee stage with randomly generated supplementary populations to create candidate populations for further local food-source searches. To evaluate the performance of RL-MDABC, the CEC 2017 benchmark functions are adopted to compare it with several improved artificial bee colony algorithms and meta-heuristic algorithms. The results show that RL-MDABC is superior to the comparison algorithms on nearly three-fifths of the functions in the test set. Moreover, RL-MDABC demonstrates significant advantages over the comparison algorithms in final optimization results and solution efficiency when addressing the 3D-ULPP, as evidenced by extensive experiments. The proposed algorithm thus enables more efficient identification of optimal logistics and distribution paths.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"102 ","pages":"Article 102328"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147423689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving Dynamic Job Shop Scheduling Problems via an enhanced Proximal Policy Optimization algorithm with Double Prioritized Experience Replay","authors":"Wenquan Zhang , Fei Zhao , Cameron Walker , Thomas Adams , Xuesong Mei","doi":"10.1016/j.swevo.2026.102344","DOIUrl":"10.1016/j.swevo.2026.102344","url":null,"abstract":"<div><div>The Dynamic Job Shop Scheduling Problem (DJSSP) is a typical scheduling task that requires rescheduling in the presence of unexpected events such as random job arrivals and urgent orders. However, because scheduling problems vary in scale, existing rescheduling methods struggle to reuse trained scheduling strategies effectively or to benefit from transfer learning from previous models. To address this challenge, we propose a Double Prioritized Experience Replay (DPER) mechanism integrated within the Proximal Policy Optimization (PPO) framework (DPER-PPO). First, we introduce a generalized disjunctive graph to model random job arrivals and combine it with an extensible state representation consisting of 10 distinct features to optimize completion time, thereby meeting the dynamic and adaptive requirements of the DJSSP. Next, we develop a comprehensive multidimensional action space with adaptive weighting rules, enhancing action coverage and improving the global optimization capability of the algorithm. Finally, the proposed DPER mechanism enhances elite-sample utilization and accelerates agent learning in dynamic environments. Static experiments on classic benchmark instances demonstrate that our scheduling model outperforms existing deep reinforcement learning (DRL) methods in terms of average performance. Furthermore, dynamic scheduling experiments show that, when encountering unexpected events such as random job arrivals and urgent orders, our model achieves better results than priority dispatching rule (PDR) scheduling methods and other DRL approaches within a reasonable time frame. In addition, analysis of variance (ANOVA) tests further confirm the statistical significance and effectiveness of the proposed method.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"102 ","pages":"Article 102344"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147423693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An adaptive auxiliary problem-based evolutionary algorithm for multi-objective optimization problems with complex constraints","authors":"Jun Ma , Yong Zhang , Dun-wei Gong , Yu Xue , Ali Wagdy Mohamed , Xiangjuan Yao","doi":"10.1016/j.swevo.2026.102324","DOIUrl":"10.1016/j.swevo.2026.102324","url":null,"abstract":"<div><div>Multi-objective optimization problems with complex constraints (CMOPs) are widespread in engineering. The multi-stage constraint-handling mechanism has proven effective for such problems; however, existing research has not provided an effective mechanism for determining the order in which constraints are processed. To address this issue, this paper first proposes a constraint-handling sequence determination mechanism based on correlation analysis. Building upon this mechanism and the auxiliary problem-based optimization framework, a constrained multi-objective evolutionary algorithm with an adaptive auxiliary problem (AA-CMOEA) is developed. In AA-CMOEA, two populations are employed to solve, respectively, the original CMOP and an auxiliary multi-objective optimization problem with variable constraint subsets. Specifically, an adaptive updating strategy for the auxiliary problem, linked to the constraint-handling sequence, is introduced to autonomously update the components of the auxiliary problem, ensuring that it effectively guides the search direction of the main problem during evolution. An adaptive cooperation mechanism guided by population similarity is proposed to dynamically adjust the cooperation intensity between the two populations. Additionally, an individual generation method assisted by dynamic reference points is introduced to enhance the population’s exploration capability. Experimental results on 23 benchmark functions and a coal mine integrated energy system day-ahead scheduling problem demonstrate that AA-CMOEA outperforms 11 state-of-the-art CMOEAs, making it a highly competitive method for solving complex CMOPs.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"102 ","pages":"Article 102324"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147423746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An adaptive coevolutionary resource assignment algorithm for constrained multi-objective optimization","authors":"Yutao Lai, Hairu Fan, Hai-Lin Liu, Huangxu Sheng","doi":"10.1016/j.swevo.2026.102343","DOIUrl":"10.1016/j.swevo.2026.102343","url":null,"abstract":"<div><div>This paper introduces ARACMO, a two-stage multi-population coevolutionary algorithm designed to solve constrained multi-objective optimization problems. In the first stage, an auxiliary population evolves by ignoring all constraints to approximate the Unconstrained Pareto Front, sharing its offspring to help other populations cross infeasible regions and reach promising search areas. In the second stage, specialized auxiliary populations are assigned to individual sub-constraints. To manage these populations effectively, we introduce a Constraints’ Weight Assignment (CWA) mechanism, which adaptively allocates coevolutionary resources based on the difficulty and importance of each constraint. This ensures that the main population receives high-intensity cooperation from the most critical auxiliary populations. Furthermore, a Constraints Combined Mechanism (CCM) is proposed to merge similar constraints, thereby saving computational budget and providing progressively deeper auxiliary information. Experimental results across four benchmark suites demonstrate that ARACMO exhibits superior competitiveness and robustness compared to five state-of-the-art algorithms. 
The source code is publicly available at: <span><span>https://github.com/tg980515/ARACMO</span></span>.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"103 ","pages":"Article 102343"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147426632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constrained multimodal dominance embedded dual-population coevolutionary algorithm for multimodal multi-objective optimization","authors":"Wenhua Li, Xingyi Yao, Rui Wang, Tao Zhang","doi":"10.1016/j.swevo.2026.102352","DOIUrl":"10.1016/j.swevo.2026.102352","url":null,"abstract":"<div><div>Constrained multimodal multi-objective optimization (CMMO) is a challenging yet critical problem in real-world applications, as it requires simultaneously approximating the Pareto-optimal front, maintaining diversity in the decision space, and satisfying complex constraints. Existing studies on multimodal multi-objective optimization have largely overlooked the impact of constraints, while CMMO methods often fail to address the multimodal nature of the problem. The coupling of constraints and multimodality creates particularly challenging search scenarios: constraints fragment the multimodal solution space into disconnected feasible regions, making it extremely difficult to discover and preserve equivalent solutions across different niches while ensuring constraint satisfaction. To tackle these challenges, this paper proposes a Constrained Multi-modal Dominance based Cooperative Evolutionary Algorithm (CMD-CoEA) that employs two collaborating populations with specialized roles. The convergence population efficiently approximates the true Pareto front while maintaining diversity, whereas the diversity population employs a novel constrained multimodal dominance (CMD) relation to preserve equivalent solutions across different niches in the decision space. A two-stage environmental selection strategy progressively shifts from broad exploration to focused exploitation, while a dynamic modality identification mechanism adapts to the evolving population distribution. Comprehensive experiments on benchmark CMMOPs demonstrate that CMD-CoEA significantly outperforms state-of-the-art algorithms in terms of convergence, diversity, and constraint satisfaction.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"103 ","pages":"Article 102352"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147426635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative path planning for multi-robot systems integrating multi-strategy NACO and fuzzy adaptive DWA","authors":"Xinghong Kuang, Yingting Li","doi":"10.1016/j.swevo.2026.102350","DOIUrl":"10.1016/j.swevo.2026.102350","url":null,"abstract":"<div><div>To address the growing collaborative navigation demands of multi-robot systems in complex industrial environments, this paper proposes NFDWA, a distributed hybrid path planning algorithm that integrates an improved Ant Colony Optimization (NACO) with an improved Dynamic Window Approach (DWA). Adopting a hierarchical architecture of global guidance and local obstacle avoidance, the algorithm aims to enhance operational efficiency and safety in complex scenarios. First, to address the redundant nodes and excessive turns in paths generated by traditional ACO, which reduce execution efficiency, the improved NACO initializes a non-uniform pheromone field using obstacle influence factors, constructs a dual-guided heuristic function based on the triangle inequality, and applies a secondary key-point optimization strategy to eliminate geometric redundancy. Second, since traditional DWA relies on fixed evaluation weights and therefore adapts poorly in obstacle-dense areas, an improved DWA based on fuzzy logic control (FLC) is developed. It uses the global path points generated by NACO as real-time navigation benchmarks, dynamically maps the evaluation weight coefficients through an FLC driven by environmental feedback, and incorporates an adaptive path-tracking function to ensure motion stability. Building on this, an active conflict coordination mechanism combining path fitness values and spatial detection is proposed, effectively resolving deadlock and collision issues among multiple robots. Experimental results demonstrate that, compared with traditional ACO, NACO reduces the average global path length by 6.5% and the number of turns by 64.2%. In complex, high-density collaborative scenarios, the NFDWA framework achieves a task success rate significantly higher than traditional DWA and reduces total system movement steps by 20%. The method thus balances operational efficiency and path smoothness while ensuring navigation safety.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"103 ","pages":"Article 102350"},"PeriodicalIF":8.5,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147451525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}