{"title":"Optimization control of spacecraft proximation based on r-domination adaptive bare-bones particle swarm optimization algorithm","authors":"Zhihao Zhu , Yu Guo , Zhi Gao","doi":"10.1016/j.eswa.2025.127269","DOIUrl":"10.1016/j.eswa.2025.127269","url":null,"abstract":"<div><div>This paper proposes a novel finite-time (FT) optimization control approach of spacecraft proximation based on a new r-domination adaptive bare-bones multi-objective particle swarm optimization scheme (r-ABBMOPSO). Specifically, a new adaptive particle update strategy is developed for bare-bones multi-objective particle swarm optimization algorithm (BBMOPSO) to enhance the robustness of the search. To make the search toward the desired point, r-ABBMOPSO applies r-domination to replace Pareto-domination. In addition, a new adaptive mutation algorithm is designed to strong the population search diversity. By virtue of r-ABBMOPSO to obtain the optimal control parameters, a FT six degrees of freedom (6-DOF) proximation controller with the adaptive update laws of the unknown parameters is proposed to regulate chaser spacecraft approach to target spacecraft. Finally, numerical comparison examples illustrate the performance of the proposed optimization controller.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127269"},"PeriodicalIF":7.5,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic heterogeneous graph representation based on adaptive negative sample mining via high-fidelity instances and context-aware uncertainty learning","authors":"Wenhao Bai, Liqing Qiu, Weidong Zhao","doi":"10.1016/j.eswa.2025.127291","DOIUrl":"10.1016/j.eswa.2025.127291","url":null,"abstract":"<div><div>Graph contrastive learning is a self-supervised learning method widely used in dynamic heterogeneous graph representation in recent years, demonstrating great potential and achieving excellent results. However, most graph contrastive learning methods randomly select negative samples and treat all negative samples as equally important to the model. This ignores that some negative samples can provide more information due to their closer proximity to positive samples in the feature space or higher semantic similarity. Therefore, this paper proposes a <strong>HCUAN</strong> model that aims to utilize <u><strong>h</strong></u>igh-fidelity anchor instances and corresponding positive and negative samples for <u><strong>c</strong></u>ontext-aware <u><strong>u</strong></u>ncertainty learning to <u><strong>a</strong></u>daptively mine prioritized <u><strong>n</strong></u>egative samples, which in turn improves the performance of graph contrastive learning. Specifically, the HCUAN first designs a new GNN encoder (LGE) for generating high-fidelity anchor instances and corresponding positive and negative samples, which efficiently fuses between local and global information to prevents the introduction of easy negative samples and enhance the model’s discriminative ability. Then, the HCUAN utilizes an uncertainty discriminator to perform an adaptive assessment of the correlation between each negative sample and the anchor instance, which provides more accurate references for graph contrastive learning, thus helping the model to distinguish the really prioritized negative samples more clearly. 
Next, the HCUAN designs an unified graph contrastive learning, which incorporates the modeling method of dynamic heterogeneous graphs in graph contrastive learning, the method of generating high-fidelity anchor instances and corresponding positive and negative samples, and the method of prioritized negative samples mining in the form of modules into the traditional processes of graph contrastive learning. Each module in the unified graph contrastive learning can be disassembled and updated according to the needs of the task, providing powerful flexibility and scalability for practical applications. Finally, numerous experiments on twelve datasets show that HCUAN can significantly improve the performance of graph contrastive learning.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127291"},"PeriodicalIF":7.5,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A parallel approach to accelerate neural network hyperparameter selection for energy forecasting","authors":"D. Criado-Ramón , L.G.B. Ruiz , M.C. Pegalajar","doi":"10.1016/j.eswa.2025.127386","DOIUrl":"10.1016/j.eswa.2025.127386","url":null,"abstract":"<div><div>Finding the optimal hyperparameters of a neural network is a challenging task, usually done through a trial-and-error approach. Given the complexity of just training one neural network, particularly those with complex architectures and large input sizes, many implementations accelerated with GPU (Graphics Processing Unit) and distributed and parallel technologies have come to light over the past decade. However, whenever the complexity of the neural network used is simple and the number of features per sample is small, these implementations become lackluster and provide almost no benefit from just using the CPU (Central Processing Unit). As such, in this paper, we propose a novel parallelized approach that leverages GPU resources to simultaneously train multiple neural networks with different hyperparameters, maximizing resource utilization for smaller networks. The proposed method is evaluated on energy demand datasets from Spain and Uruguay, demonstrating consistent speedups of up to 1164x over TensorFlow and 410x over PyTorch.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"279 ","pages":"Article 127386"},"PeriodicalIF":7.5,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143747544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An iterated adaptive large neighborhood search algorithm for the large-scale communication satellite range scheduling problem","authors":"Zhehan Liu , Jinming Liu , Xiaolu Liu, Jungang Yan, Yuqing Cheng, Yingwu Chen","doi":"10.1016/j.eswa.2025.127377","DOIUrl":"10.1016/j.eswa.2025.127377","url":null,"abstract":"<div><div>The communication satellite range scheduling problem (CSRSP) is indispensable for the regular operation of the low earth orbit internet constellation, which involves scheduling tracking telemetry and command (TT&C) tasks within their executable arcs to maximize the profit from these scheduled tasks. Different from traditional SRSP, the inter-satellite links are taken into account in CSRSP to facilitate the rapid completion of TT&C tasks. Moreover, the increasing number of satellites and the emergence of associated diverse types of TT&C tasks further escalate the complexity of this problem. Thus, we propose an iterated adaptive large neighborhood search algorithm (IALNS) to solve the CSRSP quickly and straightforwardly. In this algorithm, ALNS is employed to refine heuristic initial solutions. Frequent pattern mining, a popular data mining method, is used to guide the algorithmic search process as iterative mechanisms: on the one hand, the inferior structures in low-quality solutions are mined to significantly assist the ALNS removal process. On the other hand, the superior structures in high-quality solutions are identified to guide the construction of new solutions. 
Experimental tests with different task scales demonstrate that IALNS effectively deals with the CSRSP, outperforming three state-of-the-art algorithms.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127377"},"PeriodicalIF":7.5,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep neighbor-coherence hashing with discriminative sample mining for supervised cross-modal retrieval","authors":"Congcong Zhu , Qibing Qin , Wenfeng Zhang , Lei Huang","doi":"10.1016/j.eswa.2025.127365","DOIUrl":"10.1016/j.eswa.2025.127365","url":null,"abstract":"<div><div>Deep supervised cross-modal hashing has attracted extensive attention because of its low cost and high retrieval efficiency. Although the existing deep supervised cross-modal hashing methods have made great progress, they still suffer from two factors in the preservation of semantic relations between heterogeneous modalities. (1) Most of the available deep supervised cross-modal hashing learn hash functions by employing either pair-wise/multi-wise loss to explore the point-to-point relation or class center loss to explore the point-to-class relation, ignoring collaborative semantic relations. (2) Compared with the large proportion of simple samples, the hard pairs with a small proportion could provide more valuable information for the model training, nevertheless, most deep hash treats all samples equally in the learning process, and overlooks the positive contribution of hard samples in the learning process, impeding the hash function learning. To address these challenges, by considering both point-to-point and point-to-class relations, the novel Deep Neighbor-coherence Hashing (DNcH) framework is proposed to preserve the consistency of neighbor relations and generate high-quality binary codes with intra-class compactness and inter-class separability. Specifically, by jointly exploring the point-to-point and point-to-class relations between heterogeneous data, the neighbor-aware constraint is proposed to project the heterogeneous data into a unified Hamming space, where each anchor is close to all similar samples and corresponding class center, and far away from dissimilar samples and their class centers. 
The hard pairs containing valuable information are effectively mined by introducing the multi-similarity measurement strategy between heterogeneous modalities to construct the informative and representative training batches. Besides, to further gradually capture discriminant information from multi-modal hard pairs, a self-paced learning mechanism is introduced to assign dynamic weights to multi-modal pairs, which enables the deep cross-modal hashing to gradually concentrate on hard pairs while jointly learning universal patterns from the entire set of multi-modal pairs. Extensive experiments on three benchmark datasets show that our DNcH framework has better performance than the most advanced cross-modal hashing methods. The source code for the DNcH framework is available at <span><span>https://github.com/QinLab-WFU/DNcH</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"279 ","pages":"Article 127365"},"PeriodicalIF":7.5,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143747011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exact and heuristic algorithms for team orienteering problem with fuzzy travel times","authors":"Xinrui Liu , Xiaojuan Jiang , Xinggang Luo , Zhongliang Zhang , Pengli Ji","doi":"10.1016/j.eswa.2025.127369","DOIUrl":"10.1016/j.eswa.2025.127369","url":null,"abstract":"<div><div>The Team Orienteering Problem (TOP) is a combinatorial optimization challenge that aims to determine a set of routes to maximize the total collected profit. In real-world scenarios, uncertainties in customer travel times frequently arise due to various factors such as weather conditions, traffic congestion, and peak hours. This study addresses the uncertainty by modeling travel time with trapezoidal fuzzy variables, and subsequently proposes a new variant of TOP, which is referred to as the Team Orienteering Problem with Fuzzy Travel Time. To solve this problem, a chance-constrained programming model is developed, and two solution approaches are proposed: a Branch-and-Price (B&P) exact algorithm and a Hybrid Adaptive Large Neighborhood Search (HALNS) heuristic algorithm. Numerical experiments are conducted to evaluate the performance of both algorithms. The results demonstrate the effectiveness of the B&P algorithm based on its capability in optimally solving most instances with a maximum computational time of 120 min. 
Moreover, the HALNS algorithm shows to be highly efficient, solving all instances within a short running time while maintaining only minimal profit gaps compared to the B&P algorithm.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127369"},"PeriodicalIF":7.5,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143734943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does user interest matter? Exploring the impact of ignoring user interests in recommendations","authors":"Lijia Chen , Chang Sun , Yan-Li Lee , Qingsong Pu , Xinru Chen , Jia Liu , Yajun Du , Wen-Bo Xie","doi":"10.1016/j.eswa.2025.127373","DOIUrl":"10.1016/j.eswa.2025.127373","url":null,"abstract":"<div><div>User interests have long been a critical factor in recommender systems, serving as a key criterion for recommendations. Existing interest-based recommendation prioritize matching items to users’ interests, often overlooking the importance of the intrinsic characteristics of items. This can lead to recommendations that, while aligned with user interests, fail to address the user’s specific preferences for item intrinsic characteristics, reducing satisfaction and trust in the system. In this paper, we propose an interest disentangling recommendation algorithm (IDG). During the model-training phase, user interactions with items are disentangled into user preference for items’ intrinsic characteristics and interest groups associated with those items. During the prediction phase, downplay the user’s preferences for items’ interest groups and focus more on their preferences for the items’ intrinsic characteristics. Extensive experiments show that, on average, IDG outperforms the ten baselines by 15.9%, 34.2%, 25% and 32.8% in terms of HR@20, NDCG@20, PRE@20, and ILS@20, respectively, across three real datasets. Further experiments show that items recommended by the IDG algorithm are more concentrated within the same interest groups. 
However, IDG effectively enhances the diversity of items within the recommendation list, quantified by item similarity.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"279 ","pages":"Article 127373"},"PeriodicalIF":7.5,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143747546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EHW-Font: A handwriting enhancement approach mimicking human writing processes","authors":"Lei Wang , Cunrui Wang , Yu Liu","doi":"10.1016/j.eswa.2025.127278","DOIUrl":"10.1016/j.eswa.2025.127278","url":null,"abstract":"<div><div>Balancing personalized style mimicry and legibility in handwritten font generation is particularly challenging for complex, multi-stroke characters like Chinese. Most existing approaches rely on a single modality – either pixel-based or sequence-based modeling – and employ random style reference selection during training, which often undermines both readability and stylistic consistency. In this paper, we introduce EHW-Font, a novel dual-modal framework that refines handwritten font generation by replicating the user’s writing style and process. Our approach fully exploits component-level, fine-grained style information from content and style characters. It employs a dual-modal fusion strategy to adaptively integrate the global visual features from handwritten stroke images with the dynamic process captured by stroke sequences. To mitigate style redundancy, we propose a quantization strategy that represents the style feature vector as the Cartesian product of one-dimensional variable sets, compressing redundant features while preserving essential stylistic details. Experiments show that our approach exhibits the best performance in qualitative, quantitative, and user studies. 
Moreover, our method is an equally effective means of data augmentation.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127278"},"PeriodicalIF":7.5,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ME-WARD: A multimodal ergonomic analysis tool for musculoskeletal risk assessment from inertial and video data in working places","authors":"Javier González-Alonso , Paula Martín-Tapia, David González-Ortega , Míriam Antón-Rodríguez , Francisco Javier Díaz-Pernas , Mario Martínez-Zarzuela","doi":"10.1016/j.eswa.2025.127212","DOIUrl":"10.1016/j.eswa.2025.127212","url":null,"abstract":"<div><div>This study presents ME-WARD (<em>Multimodal Ergonomic Workplace Assessment and Risk from Data</em>), a novel system for ergonomic assessment and musculoskeletal risk evaluation that implements the Rapid Upper Limb Assessment (RULA) method. ME-WARD is designed to process joint angle data from motion capture systems, including inertial measurement unit (IMU)-based setups, and deep learning human body pose tracking models. The tool’s flexibility enables ergonomic risk assessment using any system capable of reliably measuring joint angles, extending the applicability of RULA beyond proprietary setups. To validate its performance, the tool was tested in an industrial setting during the assembly of conveyor belts, which involved high-risk tasks such as inserting rods and pushing conveyor belt components. The experiments leveraged gold standard IMU systems alongside a state-of-the-art monocular 3D pose estimation system. The results confirmed that ME-WARD produces reliable RULA scores that closely align with IMU-derived metrics for flexion-dominated movements and comparable performance with the monocular system, despite limitations in tracking lateral and rotational motions. This work highlights the potential of integrating multiple motion capture technologies into a unified and accessible ergonomic assessment pipeline. 
By supporting diverse input sources, including low-cost video-based systems, the proposed multimodal approach offers a scalable, cost-effective solution for ergonomic assessments, paving the way for broader adoption in resource-constrained industrial environments.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127212"},"PeriodicalIF":7.5,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143724874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal depression recognition based on gait and rating scale","authors":"Xiaotong Liu, Min Ren, Xuecai Hu, Qiong Li, Yongzhen Huang","doi":"10.1016/j.eswa.2025.127285","DOIUrl":"10.1016/j.eswa.2025.127285","url":null,"abstract":"<div><div>Recently, depression recognition has garnered significant attention. Given its ease of acquisition from a distance, gait-based depression analysis emerges as a valuable tool for assisting in the diagnosis and assessment of depression. However, current research on gait-based depression recognition often uses scale results as labels but neglects the rich semantic information within the scales, which reflects the emotional, lifestyle, and physical states of participants and provides more personalized depression characteristics. To enhance the reliability and accuracy of depression analysis, we propose a text-guided depression recognition method based on gait. Firstly, we utilize silhouette-based modeling for depression recognition to capture relevant gait features. Secondly, we design the GT-CLIP module to leverage text information from scales as an auxiliary branch to guide feature learning within the gait recognition framework, enabling the model to effectively extract corresponding gait features based on these depression-related text information. Then, we devise a text-guided attention mechanism to capture variations across different body parts. 
In the D-Gait dataset, which includes 92 depressed subjects and 200 normal controls, our proposed text-guided depression recognition model achieves an F1-score of 59.85, outperforming existing state-of-the-art methods.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"278 ","pages":"Article 127285"},"PeriodicalIF":7.5,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143748446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}