{"title":"A traffic prediction method for missing data scenarios: graph convolutional recurrent ordinary differential equation network","authors":"Ming Jiang, Zhiwei Liu, Yan Xu","doi":"10.1007/s40747-024-01768-7","DOIUrl":"https://doi.org/10.1007/s40747-024-01768-7","url":null,"abstract":"<p>Traffic prediction plays an increasingly important role in intelligent transportation systems and smart cities. Both travelers and urban managers rely on accurate traffic information to make decisions about route selection and traffic management. Due to various factors, both human and natural, traffic data often contains missing values. Addressing the impact of missing data on traffic flow prediction has become a widely discussed topic in the academic community and holds significant practical importance. Existing spatiotemporal graph models typically rely on complete data, and the presence of missing values can significantly degrade prediction performance and disrupt the construction of dynamic graph structures. To address this challenge, this paper proposes a neural network architecture designed specifically for missing data scenarios—graph convolutional recurrent ordinary differential equation network (GCRNODE). GCRNODE combines recurrent networks based on ordinary differential equation (ODE) with spatiotemporal memory graph convolutional networks, enabling accurate traffic prediction and effective modeling of dynamic graph structures even in the presence of missing data. GCRNODE uses ODE to model the evolution of traffic flow and updates the hidden states of the ODE through observed data. Additionally, GCRNODE employs a data-independent spatiotemporal memory graph convolutional network to capture the dynamic spatial dependencies in missing data scenarios. The experimental results on three real-world traffic datasets demonstrate that GCRNODE outperforms baseline models in prediction performance under various missing data rates and scenarios. This indicates that the proposed method has stronger adaptability and robustness in handling missing data and modeling spatiotemporal dependencies.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"52 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A generalized diffusion model for remaining useful life prediction with uncertainty","authors":"Bincheng Wen, Xin Zhao, Xilang Tang, Mingqing Xiao, Haizhen Zhu, Jianfeng Li","doi":"10.1007/s40747-024-01773-w","DOIUrl":"https://doi.org/10.1007/s40747-024-01773-w","url":null,"abstract":"<p>Forecasting the remaining useful life (RUL) is a crucial aspect of prognostics and health management (PHM), which has garnered significant attention in academic and industrial domains in recent decades. The accurate prediction of RUL relies on the creation of an appropriate degradation model for the system. In this paper, a general representation of diffusion process models with three sources of uncertainty for RUL estimation is constructed. According to time-space transformation, the analytic equations that approximate the RUL probability distribution function (PDF) are inferred. The results demonstrate that the proposed model is more general, covering several existing simplified cases. The parameters of the model are then calculated utilizing an adaptive technique based on the Kalman filter and expectation maximization with Rauch-Tung-Striebel (KF-EM-RTS). KF-EM-RTS can adaptively estimate and update unknown parameters, overcoming the limits of strong Markovian nature of diffusion model. Linear and nonlinear degradation datasets from real working environments are used to validate the proposed model. The experiments indicate that the proposed model can achieve accurate RUL estimation results.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"7 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Microscale search-based algorithm based on time-space transfer for automated test case generation","authors":"Yinghan Hong, Fangqing Liu, Han Huang, Yi Xiang, Xueming Yan, Guizhen Mai","doi":"10.1007/s40747-024-01706-7","DOIUrl":"https://doi.org/10.1007/s40747-024-01706-7","url":null,"abstract":"<p>Automated test case generation for path coverage (ATCG-PC) is a major challenge in search-based software engineering due to its complexity as a large-scale black-box optimization problem. However, existing search-based approaches often fail to achieve high path coverage in large-scale unit programs. This is due to their expansive decision space and the presence of hundreds of feasible paths. In this paper, we present a microscale (small-size subsets of the decomposed decision set) search-based algorithm with time-space transfer (MISA-TST). This algorithm aims to identify more accurate subspaces consisting of optimal solutions based on two strategies. The dimension partition strategy employs a relationship matrix to track subspaces corresponding to the target paths. Additionally, the specific value strategy allows MISA-TST to focus the search on the neighborhood of specific dimension values rather than the entire dimension space. Experiments conducted on nine normal-scale and six large-scale benchmarks demonstrate the effectiveness of MISA-TST. The large-scale unit programs encompass hundreds of feasible paths or more than 1.00E+50 test cases. The results show that MISA-TST achieves significantly higher path coverage than other state-of-the-art algorithms in most benchmarks. Furthermore, the combination of the two time-space transfer strategies significantly enhances the performance of search-based algorithms like MISA, especially in large-scale unit programs.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"12 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"View adaptive multi-object tracking method based on depth relationship cues","authors":"Haoran Sun, Yang Li, Guanci Yang, Zhidong Su, Kexin Luo","doi":"10.1007/s40747-024-01776-7","DOIUrl":"https://doi.org/10.1007/s40747-024-01776-7","url":null,"abstract":"<p>Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data distribution characteristics, but the current MOT methods do not consider these differences and only adopt a unified association strategy to deal with various occlusion situations. This paper proposed View Adaptive Multi-Object Tracking Method Based on Depth Relationship Cues (ViewTrack) to enable MOT to adapt to the scene's dynamic changes. Firstly, based on exploiting the depth relationships between objects by using the position information of the bounding box, a view-type recognition method based on depth relationship cues (VTRM) is proposed to perceive the changes of depth and view within the dynamic scene. Secondly, by adjusting the interval partitioning strategy to adapt to the changes in view characteristics, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed to achieve sparse decomposition in occluded scenes. Then, combining pedestrian displacement with Intersection over Union (IoU), a displacement modulated Intersection over Union method (DMIoU) is proposed to improve the association accuracy between detection and tracklet boxes. Finally, the comparison results with 12 representative methods demonstrate that ViewTrack outperforms multiple metrics on the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"7 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel group-based framework for nature-inspired optimization algorithms with adaptive movement behavior","authors":"Adam Robson, Kamlesh Mistry, Wai-Lok Woo","doi":"10.1007/s40747-024-01763-y","DOIUrl":"https://doi.org/10.1007/s40747-024-01763-y","url":null,"abstract":"<p>This paper proposes two novel group-based frameworks that can be implemented into almost any nature-inspired optimization algorithm. The proposed Group-Based (GB) and Cross Group-Based (XGB) framework implements a strategy which modifies the attraction and movement behaviors of base nature-inspired optimization algorithms and a mechanism that creates a continuing variance within population groupings, while attempting to maintain levels of computational simplicity that have helped nature-inspired optimization algorithms gain notoriety within the field of feature selection. Through this functionality, the proposed framework seeks to increase search diversity within the population swarm to address issues such as premature convergence, and oscillations within the swarm. The proposed frameworks have shown promising results when implemented into the Bat algorithm (BA), Firefly algorithm (FA), and Particle Swarm Optimization algorithm (PSO), all of which are popular when applied to the field of feature selection, and have been shown to perform well in a variety of domains, gaining notoriety due to their powerful search capabilities.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"67 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preference learning based deep reinforcement learning for flexible job shop scheduling problem","authors":"Xinning Liu, Li Han, Ling Kang, Jiannan Liu, Huadong Miao","doi":"10.1007/s40747-024-01772-x","DOIUrl":"https://doi.org/10.1007/s40747-024-01772-x","url":null,"abstract":"<p>The flexible job shop scheduling problem (FJSP) holds significant importance in both theoretical research and practical applications. Given the complexity and diversity of FJSP, improving the generalization and quality of scheduling methods has become a hot topic of interest in both industry and academia. To address this, this paper proposes a Preference-Based Mask-PPO (PBMP) algorithm, which leverages the strengths of preference learning and invalid action masking to optimize FJSP solutions. First, a reward predictor based on preference learning is designed to model reward prediction by comparing random fragments, eliminating the need for complex reward function design. Second, a novel intelligent switching mechanism is introduced, where proximal policy optimization (PPO) is employed to enhance exploration during sampling, and masked proximal policy optimization (Mask-PPO) refines the action space during training, significantly improving efficiency and solution quality. Furthermore, the Pearson correlation coefficient (PCC) is used to evaluate the performance of the preference model. Finally, comparative experiments on FJSP benchmark instances of varying sizes demonstrate that PBMP outperforms traditional scheduling strategies such as dispatching rules, OR-Tools, and other deep reinforcement learning (DRL) algorithms, achieving superior scheduling policies and faster convergence. Even with increasing instance sizes, preference learning proves to be an effective reward mechanism in reinforcement learning for FJSP. The ablation study further highlights the advantages of each key component in the PBMP algorithm across performance metrics.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"74 2 Pt 2 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing zero-shot stance detection via multi-task fine-tuning with debate data and knowledge augmentation","authors":"Qinlong Fan, Jicang Lu, Yepeng Sun, Qiankun Pi, Shouxin Shang","doi":"10.1007/s40747-024-01767-8","DOIUrl":"https://doi.org/10.1007/s40747-024-01767-8","url":null,"abstract":"<p>In the real world, stance detection tasks often involve assessing the stance or attitude of a given text toward new, unseen targets, a task known as zero-shot stance detection. However, zero-shot stance detection often suffers from issues such as sparse data annotation and inherent task complexity, which can lead to lower performance. To address these challenges, we propose combining fine-tuning of Large Language Models (LLMs) with knowledge augmentation for zero-shot stance detection. Specifically, we leverage stance detection and related tasks from debate corpora to perform multi-task fine-tuning of LLMs. This approach aims to learn and transfer the capability of zero-shot stance detection and reasoning analysis from relevant data. Additionally, we enhance the model’s semantic understanding of the given text and targets by retrieving relevant knowledge from external knowledge bases as context, alleviating the lack of relevant contextual knowledge. Compared to ChatGPT, our model achieves a significant improvement in the average F1 score, with an increase of 15.74% on the SemEval 2016 Task 6 A and 3.55% on the P-Stance dataset. Our model outperforms current state-of-the-art models on these two datasets, demonstrating the superiority of multi-task fine-tuning with debate data and knowledge augmentation.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142981772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MKER: multi-modal knowledge extraction and reasoning for future event prediction","authors":"Chenghang Lai, Shoumeng Qiu","doi":"10.1007/s40747-024-01741-4","DOIUrl":"https://doi.org/10.1007/s40747-024-01741-4","url":null,"abstract":"<p>Humans can predict what will happen shortly, which is essential for survival, but machines cannot. To equip machines with the ability, we introduce the innovative multi-modal knowledge extraction and reasoning (MKER) framework. This framework combines external commonsense knowledge, internal visual relation knowledge, and basic information to make inference. This framework is built on an encoder-decoder structure with three essential components: a visual language reasoning module, an adaptive cross-modality feature fusion module, and a future event description generation module. The visual language reasoning module extracts the object relationships among the most informative objects and the dynamic evolution of the relationship, which comes from the sequence scene graphs and commonsense graphs. The long short-term memory model is employed to explore changes in the object relationships at different times to form a dynamic object relationship. Furthermore, the adaptive cross-modality feature fusion module aligns video and language information by using object relationship knowledge as guidance to learn vision-language representation. Finally, the future event description generation module decodes the fused information and generates the language description of the next event. Experimental results demonstrate that MKER outperforms existing methods. Ablation studies further illustrate the effectiveness of the designed module. This work advances the field by providing a way to predict future events, enhance machine understanding, and interact with dynamic environments.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"28 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RL4CEP: reinforcement learning for updating CEP rules","authors":"Afef Mdhaffar, Ghassen Baklouti, Yassine Rebai, Mohamed Jmaiel, Bernd Freisleben","doi":"10.1007/s40747-024-01742-3","DOIUrl":"https://doi.org/10.1007/s40747-024-01742-3","url":null,"abstract":"<p>This paper presents RL4CEP, a reinforcement learning (RL) approach to dynamically update complex event processing (CEP) rules. RL4CEP uses Double Deep Q-Networks to update the threshold values used by CEP rules. It is implemented using Apache Flink as a CEP engine and Apache Kafka for message distribution. RL4CEP is a generic approach for scenarios in which CEP rules need to be updated dynamically. In this paper, we use RL4CEP in a financial trading use case. Our experimental results based on three financial trading rules and eight financial datasets demonstrate the merits of RL4CEP in improving the overall profit, when compared to baseline and state-of-the-art approaches, with a reasonable consumption of resources, i.e., RAM and CPU. Finally, our experiments indicate that RL4CEP is executed quite fast compared to traditional CEP engines processing static rules.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"204 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142936799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A joint learning method for low-light facial expression recognition","authors":"Yuanlun Xie, Jie Ou, Bihan Wen, Zitong Yu, Wenhong Tian","doi":"10.1007/s40747-024-01762-z","DOIUrl":"https://doi.org/10.1007/s40747-024-01762-z","url":null,"abstract":"<p>Existing facial expression recognition (FER) methods are mainly devoted to learning discriminative features from normal-light images. However, their performance drops sharply when they are used for low-light images. In this paper, we propose a novel low-light FER framework (termed LL-FER) that can simultaneously enhance the images and recognition tasks of low-light facial expression images. Specifically, we first meticulously design a low-light enhancement network (LLENet) to recover expressions images’ rich detail information. Then, we design a joint loss to train the LLENet with FER network in a cascade manner, so that the FER network can guide the LLENet to gradually perceive and restore discriminative features which are useful for FER during the training process. Extensive experiments show that the LLENet not only achieves competitive results both quantitatively and qualitatively, but also in the LL-FER framework, which can produce results more suitable for FER tasks, further improving the performance of the FER methods.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"67 1","pages":""},"PeriodicalIF":5.8,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142936800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}