Neurocomputing Pub Date: 2025-06-13 DOI: 10.1016/j.neucom.2025.130710 (vol. 648, Article 130710, IF 5.5)
Title: Quantum-inspired multimodal fusion with Lindblad master equation for sentiment analysis
Authors: Kehuan Yan, Peichao Lai, Yang Yang, Yi Ren, Tuyatsetseg Badarch, Yiwei Chen, Xianghan Zheng
Abstract: In multimodal sentiment analysis, the primary challenge lies in effectively modeling the complicated interactions among different data modalities. A promising approach is to leverage quantum concepts such as superposition and entanglement to enhance feature representation. However, existing quantum-inspired models neglect the intricate nonlinear dynamics inside their multimodal components. Drawing inspiration from the Lindbladian concept in quantum mechanics, we propose a quantum-inspired neural network built on the Lindblad Master Equation (LME) and a complex-valued LSTM. The proposed model treats each modality as an individual quantum system and superposes them into a mixed quantum system. A trainable LME process models the interaction of this multimodal system with its semantic environment, thereby enhancing the representation of complex inter-modality interactions. The efficacy of the proposed model and its key components is validated through extensive experiments on the MVSA and Memotion datasets, complemented by a comparative analysis that benchmarks the model against state-of-the-art methods, including traditional methods, large language models, and quantum-inspired methods. Furthermore, the interpretability of the model is enhanced by quantifying the entanglement entropy of modality combinations via the von Neumann entanglement entropy.
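To make the Lindblad machinery concrete, here is a minimal NumPy sketch of one Euler-integrated LME evolution plus the von Neumann entropy used for interpretability. The 2×2 Hamiltonian and single jump operator are toy assumptions for illustration, not the paper's trained operators.

```python
import numpy as np

# One Euler step of the Lindblad master equation
# d(rho)/dt = -i[H, rho] + sum_k ( L_k rho L_k^dag - 0.5 {L_k^dag L_k, rho} )
def lindblad_step(rho, H, Ls, dt):
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * drho

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum())

H = np.array([[1.0, 0.2], [0.2, -1.0]], dtype=complex)   # toy Hamiltonian
L = np.array([[0.0, 0.5], [0.0, 0.0]], dtype=complex)    # toy decay (jump) operator
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # pure superposition state

for _ in range(100):
    rho = lindblad_step(rho, H, [L], dt=0.01)

print(round(np.trace(rho).real, 6))      # trace is preserved by the dynamics
print(von_neumann_entropy(rho) > 0.01)   # dissipation mixes the state: entropy grows
```

The trace stays at 1 because both the commutator and the dissipator are traceless, while the entropy rises from zero as the initially pure state decoheres against its "environment".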
Neurocomputing Pub Date: 2025-06-13 DOI: 10.1016/j.neucom.2025.130639 (vol. 648, Article 130639)
Title: Beyond game environments: Evolutionary algorithms with parameter space noise for task-oriented dialogue policy exploration
Authors: Qingxin Xiao, Yangyang Zhao, Lingwei Dang, Yun Hao, Le Che, Qingyao Wu
Abstract: Reinforcement learning (RL) has achieved significant success in task-oriented dialogue (TOD) policy learning. Nevertheless, training dialogue policies with RL faces a critical challenge: insufficient exploration, which traps the policy in local optima. Evolutionary algorithms (EAs) broaden exploration by maintaining and selecting diverse individuals, and they often inject parameter-space noise into different individuals to simulate mutation, thereby deepening exploration. This approach has proven effective at enhancing RL exploration and has shown promising results in game domains, but its effectiveness for TOD dialogue policies has not previously been analyzed. Given the substantial differences between gaming contexts and TOD, this paper explores and validates the efficacy of EAs for TOD dialogue policies, investigating different evolutionary cycles and noise strategies across dialogue tasks to determine which combination is most suitable. Additionally, we propose an adaptive noise evolution method that dynamically adjusts noise scales to improve exploration efficiency. Experiments on the MultiWOZ dataset demonstrate significant performance improvements, achieving state-of-the-art results in both on-policy and off-policy settings.
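As an illustration of parameter-space noise used as an EA mutation operator, the following toy sketch evolves flat parameter vectors against a stand-in fitness function. The dialogue-policy network, MultiWOZ reward, and the paper's adaptive noise scaling are all abstracted away here.

```python
import random

# Each individual is a flat parameter vector; mutation perturbs every
# parameter with Gaussian noise, and selection keeps the fittest half
# (parents are retained, so the best fitness never decreases).
def mutate(params, sigma):
    return [p + random.gauss(0.0, sigma) for p in params]

def evolve(population, fitness, sigma, generations):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: len(scored) // 2]
        # refill the population with noisy copies of the parents
        population = parents + [mutate(p, sigma) for p in parents]
    return max(population, key=fitness)

random.seed(0)
# toy fitness: maximized at params == [1, 1, 1]; stands in for dialogue return
fitness = lambda p: -sum((x - 1.0) ** 2 for x in p)
pop = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(20)]
best = evolve(pop, fitness, sigma=0.1, generations=50)
print(round(fitness(best), 3))
```

The adaptive variant proposed in the paper would additionally shrink or grow `sigma` during evolution; a common generic heuristic is to increase it while mutations keep succeeding and decrease it otherwise.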
Neurocomputing Pub Date: 2025-06-13 DOI: 10.1016/j.neucom.2025.130642 (vol. 648, Article 130642)
Title: S2R-CMI: A Robust Deep Reinforcement Learning method based on counterfactual estimation and state importance evaluation under additive noise disturbance
Authors: Zhenyuan Chen, Zhi Zheng, Wenjun Huang, Xiaomin Lin
Abstract: The development of Deep Reinforcement Learning (DRL) overcomes the limitations of traditional reinforcement learning in discrete spaces, extending its applications to continuous spaces. However, disturbances commonly encountered in real-world environments pose significant threats to the performance of DRL algorithms, potentially leading to erroneous agent decisions with severe consequences. To address this issue, this paper proposes a Robust Deep Reinforcement Learning (RDRL) method named S2R-CMI, which mitigates the impact of additive noise in the state space on DRL performance without requiring prior knowledge of the disturbance. Specifically, a state- and reward-based conditional mutual information mechanism is designed to dynamically capture state importance and estimate each state's contribution to rewards. To address the lack of counterfactual data during training, a counterfactual label estimation method is proposed to approximate the counterfactual reward distribution while avoiding local optima during network training. State importance is then evaluated to quantify the impact of disturbances on states. Finally, we validate the proposed method under state disturbance in five scenarios: Cartpole, LunarLander-Discrete, LunarLander-Continuous, Build-Marine, and Half-Cheetah. Experimental results demonstrate that S2R-CMI significantly enhances the robustness of DRL algorithms. Furthermore, experiments in scenarios without state disturbance show that the method also achieves strong performance, further verifying its superiority and generalization capability.
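A minimal sketch of the state-importance idea, assuming a toy linear policy and quadratic reward (the paper's conditional-mutual-information machinery is not reproduced): the importance of a state dimension is estimated by how much additive noise on that dimension degrades the return.

```python
import random

# The policy observes a possibly-disturbed copy of the state; the reward
# depends on the true state. Dimensions the policy relies on should lose
# more return when disturbed.
def episode_return(policy_w, states, noise_dim=None, sigma=0.5, rng=None):
    total = 0.0
    for s in states:
        obs = list(s)                                    # observation seen by the policy
        if noise_dim is not None:
            obs[noise_dim] += rng.gauss(0.0, sigma)      # additive state disturbance
        action = sum(w * x for w, x in zip(policy_w, obs))
        total += -(action - sum(s)) ** 2                 # reward peaks when action tracks the true state sum
    return total / len(states)

rng = random.Random(0)
states = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
policy = [1.0, 1.0, 0.1]                                 # the policy barely uses dimension 2

clean = episode_return(policy, states)
importance = [clean - episode_return(policy, states, noise_dim=d, rng=rng)
              for d in range(3)]
print([round(i, 3) for i in importance])
```

Dimensions 0 and 1 (which the policy weights heavily) should score far higher than the nearly ignored dimension 2.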
Neurocomputing Pub Date: 2025-06-13 DOI: 10.1016/j.neucom.2025.130644 (vol. 649, Article 130644)
Title: Low redundancy cell-based Neural Architecture Search for large convolutional neural networks
Authors: Libin Hou, Linyuan Wang, Senbao Hou, Tianyuan Liu, Shuxiao Ma, Jian Chen, Bin Yan
Abstract: The cell-based search space is one of the main paradigms in Neural Architecture Search (NAS). However, current research on this search space tends to optimize small models, and the performance gains from NAS appear to have hit a bottleneck, leading to a growing performance gap between NAS and hand-designed models in recent years. In this paper, we focus on effectively expanding the cell-based search space and propose Low redundancy Cell-based Neural Architecture Search for Large Convolutional neural networks (LC²NAS), a gradient-based NAS method that searches large-scale convolutional models with better performance over a low-redundancy cell search space. Specifically, a cell-based search space with low redundancy and large kernels is designed; a supernetwork is then trained and sampled under computational constraints; finally, the network structure is optimized by gradient-based search. Experimental results show that the searched models are comparable to popular hand-designed models of recent years at different scales. Moreover, LC-NASNet-B achieves 83.7% classification accuracy on ImageNet-1k with 86.2M parameters, surpassing previous NAS methods and matching the most prominent hand-designed models.
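The gradient-based search step can be illustrated with a DARTS-style continuous relaxation on a single edge: architecture logits over candidate operations are relaxed into a softmax mixture and optimized by gradient descent. The scalar "operations" below are toys standing in for convolution kernels; this is an illustration of the general technique, not LC²NAS itself.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

ops = [lambda x: 0.0, lambda x: x, lambda x: 2.0 * x]   # zero / identity / doubling
alpha = [0.0, 0.0, 0.0]                                  # architecture logits for one edge
x, lr = 1.0, 0.5                                         # training input; target behaviour is 2*x

for _ in range(200):
    w = softmax(alpha)
    outs = [op(x) for op in ops]
    y = sum(wi * o for wi, o in zip(w, outs))            # mixed-op output
    # gradient of (y - 2x)^2 w.r.t. alpha_i, using dy/d_alpha_i = w_i * (o_i - y)
    grad = [2.0 * (y - 2.0 * x) * w[i] * (outs[i] - y) for i in range(3)]
    alpha = [a - lr * g for a, g in zip(alpha, grad)]

best = max(range(3), key=lambda i: alpha[i])             # op with the largest logit
print(best)
```

After search, the edge is discretized by keeping the highest-logit operation, here the doubling op (index 2), which matches the target behaviour.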
Neurocomputing Pub Date: 2025-06-12 DOI: 10.1016/j.neucom.2025.130641 (vol. 648, Article 130641)
Title: FS-GNN: Improving Fairness in Graph Neural Networks via Joint Sparsification
Authors: Jiaxu Zhao, Tianjin Huang, Shiwei Liu, Jie Yin, Yulong Pei, Meng Fang, Mykola Pechenizkiy
Abstract: Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing graph-structured data, but their adoption in critical applications is hindered by inherent biases related to sensitive attributes such as gender and race. While existing debiasing approaches typically either modify input graphs or incorporate fairness constraints into model objectives, we propose Fair Sparse GNN (FS-GNN), a novel framework that simultaneously enhances fairness and efficiency through joint sparsification of both the input graph and the model architecture. Our approach iteratively identifies and removes less informative edges from input graphs while pruning redundant weights from the GNN model, guided by carefully designed fairness-aware objective functions. Through extensive experiments on real-world datasets, we demonstrate that FS-GNN achieves superior fairness metrics (reducing Statistical Parity from 7.94 to 0.6) while maintaining competitive prediction accuracy compared to state-of-the-art methods. Additionally, our theoretical analysis reveals distinct fairness implications of graph versus architecture sparsification, offering insights for future fairness-aware GNN designs. The proposed method not only advances fairness in GNNs but also delivers substantial computational savings through reduced model complexity, with FLOPs reductions ranging from 24% to 67%.
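The Statistical Parity figure quoted above is the gap in positive-prediction rates between sensitive groups. A minimal sketch with illustrative data (the predictions and group labels below are made up for the example):

```python
# Statistical Parity (demographic parity) gap: absolute difference in the
# positive-prediction rate between two sensitive groups.
def statistical_parity_gap(preds, groups):
    rate = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(members) / len(members)
    a, b = rate.values()          # assumes exactly two groups
    return abs(a - b)

preds  = [1, 1, 0, 1, 0, 0, 1, 0]   # binary predictions
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # sensitive attribute (e.g. gender)
gap = statistical_parity_gap(preds, groups)
print(round(100 * gap, 1))  # → 50.0 (gap in percentage points)
```

A perfectly "parity-fair" classifier would score 0; the paper's reported improvement corresponds to driving this gap from 7.94 down to 0.6 on its datasets.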
Neurocomputing Pub Date: 2025-06-12 DOI: 10.1016/j.neucom.2025.130638 (vol. 648, Article 130638)
Title: Dynamic task balancing for joint information extraction
Authors: Anran Hao, Shuo Sun, Jian Su, Siu Cheung Hui, Anh Tuan Luu
Abstract: Joint Information Extraction (IE) aims to jointly extract various semantic structures such as entities and relations. Most recent joint IE works use static weighting, combining task losses with predefined, fixed weights. In this paper, we identify the limitations of static weighting through empirical analysis. We then study the feasibility of applying several dynamic weighting methods to the joint IE problem and evaluate their performance on three benchmark IE datasets. We find that existing dynamic weighting methods can achieve reasonably good results in a single run, demonstrating their effectiveness and advantages over static weighting. Further, we propose a hybrid dynamic weighting method, Adaptive Weighting for Joint IE (AWIE), based on gradient-driven dynamic task weighting. Experimental results show that the proposed method achieves competitive performance across datasets cost-effectively while accommodating task preferences.
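One concrete dynamic weighting scheme of the kind studied here is Dynamic Weight Averaging, where tasks whose losses fall slowly receive larger weights. This sketch is a generic illustration of that idea, not the proposed AWIE method.

```python
import math

# Dynamic Weight Averaging style scheme: the per-task descent ratio
# curr/prev is ~1 for a stalled task and <1 for a fast-improving one;
# a softmax over ratios (temperature T) turns this into task weights
# normalized to sum to the number of tasks.
def dwa_weights(prev_losses, curr_losses, T=2.0):
    ratios = [c / p for c, p in zip(curr_losses, prev_losses)]
    exps = [math.exp(r / T) for r in ratios]
    n = len(ratios)
    return [n * e / sum(exps) for e in exps]

# entity task improving fast, relation task stalled
prev = [1.00, 1.00]
curr = [0.50, 0.98]
w_entity, w_relation = dwa_weights(prev, curr)
print(w_relation > w_entity)  # the stalled task gets the larger weight
```

The weighted total loss for the next step would then be `w_entity * L_entity + w_relation * L_relation`, steering optimization toward the task that is currently lagging.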
Neurocomputing Pub Date: 2025-06-11 DOI: 10.1016/j.neucom.2025.130746 (vol. 648, Article 130746)
Title: Emerging trends and strategic opportunities in tiny machine learning: A comprehensive thematic analysis
Authors: Juan D. Velasquez, Lorena Cadavid, Carlos J. Franco
Abstract: This study comprehensively reviews 779 Scopus-indexed documents, critically analyzing the trends, challenges, and opportunities in Tiny Machine Learning (TinyML). Through thematic analysis, 21 key themes are identified, covering areas such as edge computing, deep learning, IoT integration, microcontroller efficiency, and energy utilization. Unlike previous reviews that focus on specific domains or applications, this study adopts a text-mining-based thematic approach to identify cross-sector research patterns and uncover underexplored areas, positioning the review as a broad yet deep mapping of the TinyML landscape. By synthesizing insights from the literature, this research highlights strategic opportunities, future directions, and the technological advances necessary to expand the application of TinyML in resource-constrained environments, neural networks, and real-time systems. The review also identifies key challenges, such as balancing accuracy and energy efficiency on low-power devices, optimizing on-device learning, and ensuring data privacy without cloud dependency. In doing so, it outlines actionable directions for future research, including scalable deployment in large-scale IoT systems and expansion into areas such as UAV security and smart cities. The findings are crucial for advancing AI applications in low-power embedded systems and contribute to the growing body of knowledge on TinyML.
Neurocomputing Pub Date: 2025-06-11 DOI: 10.1016/j.neucom.2025.130604 (vol. 648, Article 130604)
Title: Event-triggered control for exponential synchronization of reaction–diffusion fractional-order Clifford-valued delayed neural networks and its application to image encryption
Authors: R. Sriraman, N. Manoj, P. Agarwal, J. Vigo-Aguiar, Shilpi Jain
Abstract: This paper investigates the α-exponential synchronization problem for reaction–diffusion fractional-order Clifford-valued delayed neural networks (RDFOCLVDNNs) under an event-triggered control (ETC) strategy. First, a general form of neural networks (NNs), RDFOCLVDNNs, is considered, which provides deeper insight into the dynamics of Clifford-valued neural networks (CLVNNs). To address the challenges posed by the non-commutative nature of Clifford algebra, the RDFOCLVDNNs are decomposed into multi-dimensional real-valued neural networks (RVNNs), which avoids the complexity of Clifford-number multiplication and simplifies the analysis. Then, by constructing a suitable Lyapunov–Krasovskii functional (LKF) and applying appropriate inequalities, new robust conditions are derived that guarantee α-exponential synchronization of RDFOCLVDNNs under the proposed ETC strategy. To validate the synchronization criteria, a numerical example is presented along with graphical analysis. Furthermore, the proposed theoretical framework is used to develop an effective color-image encryption algorithm for secure image transmission. Finally, the effectiveness and security of the proposed encryption scheme are verified through simulations and various performance analyses.
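A typical event-triggering rule in synchronization schemes of this kind takes the following form (a generic textbook condition; the paper's exact triggering rule may differ):

```latex
e_i(t) = x_i(t_k) - x_i(t), \qquad
t_{k+1} = \inf\left\{ t > t_k \;:\; \lVert e_i(t) \rVert \ge \sigma \lVert x_i(t) \rVert + \varepsilon \right\}
```

Here the controller input is refreshed only at the event instants $t_k$, when the measurement error $e_i(t)$ grows past a state-dependent threshold with parameters $\sigma > 0$ and $\varepsilon \ge 0$; between events the control signal is held constant, which reduces communication and actuation while the Lyapunov analysis ensures the synchronization error still decays exponentially.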
Neurocomputing Pub Date: 2025-06-11 DOI: 10.1016/j.neucom.2025.130612 (vol. 647, Article 130612)
Title: Dynamic heterogeneous graph contrastive learning based on multi-prior tasks
Authors: Wenhao Bai, Liqing Qiu, Weidong Zhao
Abstract: Dynamic heterogeneous graph embedding aims to support a variety of graph-related tasks by efficiently capturing structures and attributes as they evolve over time in complex graph data. In recent years, self-supervised contrastive learning has shown great promise as an approach to understanding dynamic heterogeneous graphs. However, most existing self-supervised contrastive approaches to dynamic heterogeneous graph embedding rely on a single prior task and therefore fail to capture multiscale knowledge. This study presents MTDG, a self-supervised graph contrastive learning framework based on multi-prior tasks for multiscale knowledge in dynamic heterogeneous graphs. The proposed model first obtains four embedding vectors as contrastive samples using local, global, long-term, and short-term encoders. It then applies single-contrastive learning, comprising four prior tasks, to optimize the model's grasp of the local, global, long-term, and short-term knowledge in the graph, and cross-contrastive learning, comprising two prior tasks, to extract complementary knowledge across the four embedding vectors. In addition, slight random noise and a shuffling strategy are introduced to prevent the generation of overly similar contrastive samples. MTDG is assessed on twelve real-world dynamic graph datasets, and the experimental results show that it outperforms eleven baselines on all datasets.
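Contrastive objectives in such frameworks are commonly InfoNCE-style losses that pull an anchor embedding toward its positive view and away from negatives. A self-contained sketch with toy 2-d embeddings (the actual MTDG encoders and sampling strategy are not reproduced):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.5):
    # temperature-scaled similarities; the positive pair sits at index 0
    logits = [cosine(anchor, positive) / tau] + [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))  # stable log-sum-exp
    return -(logits[0] - log_z)   # -log softmax probability of the positive pair

anchor    = [1.0, 0.0]            # e.g. a node's long-term view
positive  = [0.9, 0.1]            # the same node's short-term view
negatives = [[-1.0, 0.2], [0.0, 1.0]]

loss_aligned = info_nce(anchor, positive, negatives)
loss_swapped = info_nce(anchor, negatives[0], [positive, negatives[1]])
print(loss_aligned < loss_swapped)  # aligned positives yield the lower loss
```

In a multi-prior setup, one such loss term would be computed per prior task (local vs. global, long-term vs. short-term, and the cross pairings) and summed.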
Neurocomputing Pub Date: 2025-06-11 DOI: 10.1016/j.neucom.2025.130618 (vol. 648, Article 130618)
Title: CoST: Comprehensive structural and temporal learning of social propagation for fake news detection
Authors: Zechen Guo, Peng Wu, Xiaoliang Liu, Li Pan
Abstract: The widespread dissemination of fake news on social media platforms poses significant threats to individual privacy and societal stability. Traditional content-based fake news detection methods are vulnerable to sophisticated adversarial manipulation, while existing propagation-based approaches often fail to fully capture the complex structural and temporal dynamics of news diffusion. To address these limitations, this paper proposes CoST, a fake news detection framework that performs Comprehensive Structural and Temporal learning of social propagation, jointly modeling propagation structure and multi-grained temporal dynamics. For structural patterns, since existing Graph Convolutional Network (GCN) based methods are inadequate for embedding news propagation graphs, which typically have hub structures and deep propagation paths, we propose a bi-directional Graph Attention LSTM module to capture the social-hub and deep-propagation patterns of these graphs. Beyond structure, news propagation also exhibits complicated and diverse temporal patterns; to model its multi-grained temporal dynamics, we adopt a temporal-aware attention mechanism and a Transformer-encoder self-attention mechanism to learn local temporal-interval and global propagation-sequence features, respectively. Experimental results on several real-world datasets demonstrate the superiority of CoST over various state-of-the-art methods, especially for early detection of fake news.
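The temporal-aware attention idea can be sketched as interval-decayed attention pooling over a propagation sequence: posts that follow the source quickly receive higher weight. The decay rate here is a fixed stand-in for what would in practice be a learned parameter, and the feature vectors are toy examples.

```python
import math

# Attention pooling over a propagation sequence, weighted by an
# exponential decay of the time interval since the source post.
def temporal_attention(features, timestamps, decay=0.1):
    scores = [math.exp(-decay * t) for t in timestamps]   # earlier post => higher score
    z = sum(scores)
    weights = [s / z for s in scores]                     # softmax-normalized weights
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]

feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]   # toy post embeddings
times = [0.0, 5.0, 50.0]                        # minutes since the source post
pooled = temporal_attention(feats, times)
print([round(v, 3) for v in pooled])
```

With this weighting, the early source post dominates the pooled representation, which is one reason interval-aware pooling helps early detection: it does not need the long propagation tail to form a useful summary.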