Neural Networks, Volume 194, Article 108129 | Pub Date: 2025-09-18 | DOI: 10.1016/j.neunet.2025.108129
Title: CSTSINR: improving temporal continuity via convolutional structured implicit neural representations for time series anomaly detection
Authors: Ke Liu, Mengxuan Li, Jiajun Bu, Hongwei Wang, Haishuai Wang
Abstract: Time series anomaly detection plays a crucial role in identifying significant deviations from expected behavior. Implicit Neural Representation (INR) has been explored for time series modeling due to its ability to learn continuous functions. The inherent spectral bias of INRs, which prioritizes low-frequency signal fitting, further enables the detection of high-frequency anomalies. However, current INR-based approaches demonstrate limited capability in representing complex temporal patterns, particularly when the normal data itself contains significant high-frequency components. To address these challenges, we propose CSTSINR, a novel anomaly detection model that integrates a structured feature map and convolutional mechanisms with the INR continuous function. By leveraging the structured feature map and convolutional layers, CSTSINR overcomes the limitations of direct prediction of all parameters and point-wise query processing, providing improved modeling of temporal continuity and enhanced anomaly detection. Our extensive experiments demonstrate that CSTSINR outperforms existing state-of-the-art methods across ten benchmark datasets, highlighting its superior ability to detect anomalies, particularly in high-frequency or complex time series data.
Neural Networks, Volume 194, Article 108125 | Pub Date: 2025-09-18 | DOI: 10.1016/j.neunet.2025.108125
Title: TGSL: Trade-off graph structure learning via multifaceted graph information bottleneck
Authors: Shuangjie Li, Baoming Zhang, Jianqing Song, Gaoli Ruan, Chongjun Wang, Junyuan Xie
Abstract: Graph neural networks (GNNs) are prominent for their effectiveness in processing graph-structured data for semi-supervised node classification tasks. Most existing GNNs perform message passing directly on the observed graph structure. However, in real-world scenarios, the observed structure is often suboptimal due to multiple factors, significantly degrading the performance of GNNs. To address this challenge, we first conduct an empirical analysis showing that different graph structures significantly impact empirical risk and classification performance. Motivated by these observations, we propose a novel method named Trade-off Graph Structure Learning (TGSL), guided by the multifaceted Graph Information Bottleneck (GIB) principle based on Mutual Information (MI). The key idea behind TGSL is to learn a minimal sufficient graph structure that minimizes empirical risk while maintaining classification performance. Specifically, we introduce global feature augmentation to capture the structural roles of nodes, and global structure augmentation to uncover global relationships between nodes. The augmented graphs are then processed by structure estimators with different parameters for refinement and redefinition, respectively. Additionally, we leverage the multifaceted GIB as the optimization objective by maximizing the MI between the labels and the representation derived from the final structure, while constraining the MI between this representation and those based on the redefined structures. This trade-off helps avoid capturing irrelevant information from the redefined structures and enhances the final representation for node classification. We conduct extensive experiments across a range of datasets under clean and attacked conditions. The results demonstrate the outstanding performance and robustness of TGSL over state-of-the-art baselines.
Neural Networks, Volume 194, Article 108127 | Pub Date: 2025-09-18 | DOI: 10.1016/j.neunet.2025.108127
Title: Spiking neural networks for EEG signal analysis: From theory to practice
Authors: Siqi Cai, Zheyuan Lin, Xiaoli Liu, Wenjie Wei, Shuai Wang, Malu Zhang, Tanja Schultz, Haizhou Li
Abstract: The intricate and efficient information processing of the human brain, driven by spiking neural interactions, has led to the development of spiking neural networks (SNNs) as a cutting-edge neural network paradigm. Unlike traditional artificial neural networks (ANNs) that use continuous values, SNNs emulate the brain’s spiking mechanisms, offering enhanced temporal information processing and computational efficiency. This review addresses the critical gap between theoretical advancements and practical applications of SNNs in EEG signal analysis. We provide a comprehensive examination of recent SNN methodologies and their application to EEG signals, highlighting their potential benefits over conventional deep learning approaches. The review encompasses foundational knowledge of SNNs, detailed implementation strategies for EEG analysis, and challenges inherent to SNN-based methods. Practical guidance is provided through step-by-step instructions and accessible code available on GitHub, aimed at facilitating researchers’ adoption of these techniques. Additionally, we explore emerging trends and future research directions, emphasizing the potential of SNNs to advance brain-computer interfaces and neurofeedback systems. This paper serves as a valuable resource for bridging the gap between theoretical developments in SNNs and their practical implementation in EEG signal analysis.
Neural Networks, Volume 194, Article 108130 | Pub Date: 2025-09-18 | DOI: 10.1016/j.neunet.2025.108130
Title: Fixed-time learning-based optimal tracking control for robotic systems with prescribed performance constraints
Authors: Zhinan Peng, Xingyu Zhang, Zhuo Xia, Lin Hao, Linpu He, Hong Cheng
Abstract: This paper presents a fixed-time learning-based dynamic event-triggered control framework to address the optimal tracking control problem in robotic systems with prescribed performance constraints. In many practical scenarios, the states of robotic systems are subject to performance constraints imposed by structural characteristics and task requirements. To address this issue, prescribed performance control (PPC) theory is employed to enforce the state performance constraints and to construct an unconstrained tracking-error system. Subsequently, a critic-only adaptive dynamic programming (ADP) control framework is designed to approximate the optimal control law for the transformed unconstrained system. Furthermore, in the design of the critic neural network (NN), a novel fixed-time convergence (FTC) weight update law based on concurrent learning (CL) techniques is proposed, which guarantees fixed-time convergence of the weight estimation error under a relaxed persistent excitation (PE) condition. Throughout the controller design, a dynamic event-triggered mechanism is adopted to reduce the number of sampling instances and the computational resources required. Meanwhile, the stability of the closed-loop system under this mechanism is rigorously proven. Finally, the effectiveness of the proposed method is demonstrated through simulation results and comparative analysis.
Neural Networks, Volume 194, Article 108122 | Pub Date: 2025-09-17 | DOI: 10.1016/j.neunet.2025.108122
Title: Contrastive learning unlocks geometric insights for dataset pruning
Authors: Hongjia Xu, Sheng Zhou, Zhuonan Zheng, Ning Ma, Jiawei Chen, Jiajun Bu
Abstract: Dataset pruning aims at selecting a subset of the data so that a model trained on the subset performs comparably to one trained on the full dataset. In the era of big data, unsupervised pruning of the dataset can alleviate the expensive labeling process from the outset. Existing methods sort and select instances by well-designed importance metrics, while unsupervised ones commonly treat representation learning as a black box for obtaining embeddings, leaving its properties insufficiently explored for dataset pruning. In this study, we revisit self-supervised Contrastive Learning by examining the learned embedding manifold, introducing Curvature Estimation to characterize the geometrical properties of the manifold. The statistical results reveal that the distribution of instance embeddings on the manifold surface is not uniform. Based on this observation, we propose an unsupervised dataset pruning strategy, namely KITTY sampling, which downsamples geometric areas with high instance density. Extensive experiments demonstrate that our proposed method achieves leading performance on computer-vision dataset pruning compared to the baselines. Code is available at https://github.com/Frostland12138/KITTY.
Neural Networks, Volume 194, Article 108120 | Pub Date: 2025-09-17 | DOI: 10.1016/j.neunet.2025.108120
Title: Offline-to-online reinforcement learning with efficient unconstrained fine-tuning
Authors: Jun Zheng, Runda Jia, Shaoning Liu, Ranmeng Lin, Dakuo He, Fuli Wang
Abstract: Offline reinforcement learning makes it possible to learn a policy only from pre-collected datasets, but its performance is often limited by the quality of the offline dataset and its coverage of the state-action space. Offline-to-online reinforcement learning promises to address these limitations and achieve high sample efficiency by integrating the advantages of both offline and online learning paradigms. However, existing methods typically struggle to adapt to online learning and to improve the performance of pre-trained policies due to distributional shift and conservative training. To address these issues, we propose an efficient unconstrained fine-tuning framework that removes conservative constraints on the policy during fine-tuning, allowing thorough exploration of state-action pairs not covered by the offline data. This framework leverages three key techniques to improve sample efficiency and mitigate the value function estimation bias caused by the distributional shift: dynamics representation learning accelerates fine-tuning by capturing meaningful features, layer normalization bounds the Q-values to suppress catastrophic value function divergence, and an increased update frequency of the value network enhances sample efficiency and reduces value function estimation bias. Extensive experiments on the D4RL benchmark demonstrate that our algorithm outperforms state-of-the-art offline-to-online reinforcement learning algorithms across various tasks with minimal online interactions.
Neural Networks, Volume 194, Article 108114 | Pub Date: 2025-09-16 | DOI: 10.1016/j.neunet.2025.108114
Title: A domain-specific cross-lingual semantic alignment learning model for low-resource languages
Authors: Yurong Wang, Min Lin, Qitu Hu, Shuangcheng Bai, Yanling Li, Longjie Bao
Abstract: Cross-lingual semantic alignment models facilitate the sharing and utilization of multilingual domain-specific data (e.g., medical, legal), offering cost-effective solutions for improving low-resource language tasks. However, existing methods are challenged by parallel data scarcity, semantic space heterogeneity, morphological complexity, and weak robustness, particularly for agglutinative languages. This paper therefore proposes CLWKD, a cross-lingual mapping and knowledge distillation framework. CLWKD leverages domain-specific pretrained models from high-resource languages as teachers and integrates multi-granularity alignment matrices with limited parallel data to guide cross-lingual knowledge transfer. CLWKD jointly learns multi-granularity semantic alignment mapping matrices at the token, word, and sentence levels from general-domain data, which eases domain data scarcity and helps bridge structural gaps caused by morphological and syntactic differences. To alleviate data sparsity and out-of-vocabulary issues in agglutinative languages, multilingual embedding sharing and morphological segmentation strategies are introduced. To improve the stability of unsupervised mapping training, generator pretraining is introduced and further combined with high-confidence word and sentence pairs to optimize the mapping matrix. To preserve alignment with fewer parameters, a parameter recycling and embedding bottleneck design is adopted. Experiments in the medical, legal, and educational domains on Mongolian-Chinese and Korean-Chinese language pairs demonstrate the effectiveness of CLWKD on three cross-lingual tasks.
Neural Networks, Volume 194, Article 108115 | Pub Date: 2025-09-16 | DOI: 10.1016/j.neunet.2025.108115
Title: Counterfactual causal inference for robust visual question answering
Authors: Wei Li, Zhixin Li, Fuyun Deng, Kun Zeng, Canlong Zhang
Abstract: Visual Question Answering (VQA) systems have seen remarkable progress with the incorporation of multimodal data. However, their performance is still hampered by biases ingrained in the language and vision modalities, frequently resulting in poor generalization. In this study, we introduce a novel counterfactual causal framework (CC-VQA) that utilizes Counterfactual Sample Synthesis (CSS) and causal inference to tackle cross-modality biases. Our approach employs a strategy based on causal graphs, which effectively disentangles spurious correlations in multimodal data. This ensures a balanced and precise multimodal reasoning process, enabling the model to make more accurate and unbiased decisions. Moreover, we propose a contrastive loss mechanism: by contrasting the embeddings of positive and negative samples, it significantly enhances the robustness of VQA models. Additionally, we develop a robust training strategy that improves both the visual-explainable and question-sensitive capabilities of these models. Experimental evaluations on benchmark datasets such as VQA-CP v2 and VQA v2 demonstrate substantial improvements in bias mitigation and overall accuracy. The proposed CC-VQA framework outperforms state-of-the-art methods, highlighting its effectiveness in enhancing the performance of VQA systems.
Neural Networks, Volume 194, Article 108117 | Pub Date: 2025-09-16 | DOI: 10.1016/j.neunet.2025.108117
Title: ToBaFu: Topology-based fusion model for classification of two-dimensional cancer images
Authors: Yuqing Xing, Haodong Chen, Quan Zheng
Abstract: Medical images play a pivotal role in disease diagnosis. Numerous studies on cancer image analysis focus on end-to-end deep neural networks, neglecting the global topological features of the images. In cancer diagnosis, pathological images frequently display structures such as holes or loops that are absent in healthy images, highlighting the benefits of topological analysis. In our study, we employ persistent homology (PH) to extract topological features from two-dimensional cancer images. We then propose a topology-based model (Topo) for image classification by attaching a shallow neural module to the extracted features. More importantly, we integrate the Topo model with an end-to-end enhanced ResNet architecture to develop a novel topology-based fusion model (ToBaFu), aimed at improving diagnostic performance and model robustness. The proposed ToBaFu model achieves remarkable performance across three cancer image datasets: 99.98% accuracy and F1-score on the LC-25000 lung and colon cancer histopathological dataset, 99.60% accuracy and F1-score on the CRC-5000 colorectal cancer histological dataset, and 99.80% accuracy with 99.83% F1-score on the BUS-250 breast ultrasound dataset.
{"title":"Tacit mechanism: Bridging pre-training of individuality to multi-agent adversarial coordination.","authors":"Shiqing Yao, Jiajun Chai, Haixin Yu, Yongzhe Chang, Tiantian Zhang, Yuanheng Zhu, Xueqian Wang","doi":"10.1016/j.neunet.2025.108121","DOIUrl":"https://doi.org/10.1016/j.neunet.2025.108121","url":null,"abstract":"<p><p>To tackle the multi-agent adversarial coordination problem, current multi-agent reinforcement learning (MARL) algorithms primarily depend on team-based rewards to update agent policies. However, they do not fully exploit the spatial relationships and their variant trends, thereby limiting overall performance. Inspired by human tactics, we propose the concept of tacit behavior to enhance the efficiency of multi-agent reinforcement learning through the refinement of the learning process. This paper introduces a novel two-phase framework to learn Pre-trained Tacit Behavior for efficient multi-agent adversarial Coordination (PTBC). The framework consists of a tacit pre-training phase and a centralized adversarial training phase. For pre-training the tacit behaviors, we develop a pattern mechanism and a tacit mechanism to integrate spatial relationships among agents, which dynamically guide agents' actions to gain spatial advantages for coordination. In the subsequent centralized adversarial training phase, we utilize the pre-trained network to enhance the formation of advantageous spatial positioning, achieving more efficient learning performance. Our experimental results in the predator-prey and StarCraft Multi-Agent Challenge (SMAC) environments demonstrate the effectiveness of our method through comparisons with several algorithms exhibiting distinct strengths. Additionally, by visualizing the agents' performance in adversarial tasks, we validate that incorporating inter-agent relationships enables agents with pre-trained tacit behavior to achieve more advantageous coordination. Extensive ablation studies demonstrate the critical role of tacit guidance and the general applicability of the PTBC framework.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"108121"},"PeriodicalIF":6.3,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145259734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}