Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131378
Ziyu Guan, Beilei Ling, Weigang Lu, Meng Yan, Yaming Yang, Wei Zhao, Yibing Zhan, Dapeng Tao
{"title":"G-NodeMixup: Enhancing graph neural networks reachability under extremely limited labels","authors":"Ziyu Guan , Beilei Ling , Weigang Lu , Meng Yan , Yaming Yang , Wei Zhao , Yibing Zhan , Dapeng Tao","doi":"10.1016/j.neucom.2025.131378","DOIUrl":"10.1016/j.neucom.2025.131378","url":null,"abstract":"<div><div>Graph Neural Networks (<span>Gnns</span>) have shown remarkable performance in semi-supervised node classification, but their effectiveness diminishes in settings with extremely limited labeled data. The scarcity of labeled nodes leads to an under-reaching issue, where unlabeled nodes receive insufficient supervision, resulting in poor generalization. In this paper, we propose <span>G-NodeMixup</span>, a generalized extension of our previously proposed <span>NodeMixup</span> framework which was designed to address under-reaching by improving communication between labeled and unlabeled nodes. <span>G-NodeMixup</span> introduces three novel components: (1) Multi-set Pairing, which facilitates mixup between Labeled-Labeled, Labeled-Unlabeled, and Unlabeled-Unlabeled nodes to enhance node interactions and promote smoother decision boundaries; (2) Subgraph-based Mixup, which focuses mixup within <span><math></math></span>-hop subgraphs to preserve graph locality and avoid disruptive global edge modifications; and (3) Consistency Regularization-based Mixup Loss, which reduces reliance on noisy pseudo-labels by enforcing consistency between mixed node predictions. Our framework remains architecture-agnostic and can be applied to various <span>Gnn</span> models without requiring significant architectural changes or excessive computational overhead. Experimental results across several benchmark datasets demonstrate that <span>G-NodeMixup</span> consistently improves <span>Gnn</span> performance in extremely limited labeled settings, achieving state-of-the-art results and establishing its practical effectiveness.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131378"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144997258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131364
Zhongzhi Li, Jingqi Tu, Jiacheng Zhu, Rong Fan, Jianliang Ai, Yiqun Dong
{"title":"Scalable and reliable deep transfer learning for intelligent fault detection via multi-scale neural processes embedded with prior knowledge","authors":"Zhongzhi Li , Jingqi Tu , Jiacheng Zhu , Rong Fan , Jianliang Ai , Yiqun Dong","doi":"10.1016/j.neucom.2025.131364","DOIUrl":"10.1016/j.neucom.2025.131364","url":null,"abstract":"<div><div>Deep Transfer Learning (DTL) is used to mitigate the degradation of method performance that arises from the discrepancies in data distribution between different domains. Considering the fact that fault data collection in the field of Intelligent Fault Detection (IFD) is challenging and certain faults are scarce, DTL-based methods face the limitation of available observable data. Furthermore, DTL-based methods lack comprehensive uncertainty analysis which is essential for building reliable IFD systems. To address the aforementioned problems, this paper proposes a scalable and reliable DTL-based method known as Neural Processes-based Deep Transfer Learning with Graph Convolution Network (GTNP). The graph convolution network embedded with knowledge, the joint modeling based on global and local latent variables and sparse sampling strategy are used to reduce the demand for observable data in the target domain. The multi-scale uncertainty analysis is obtained by using the distribution characteristics of global and local latent variables, which enhances the reliability of the model’s detection results. The validation of the proposed method is conducted across 3 IFD tasks, consistently showing the superior detection performance of GTNP compared to the other advanced DTL-based methods.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131364"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144922232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131380
Zuhe Li, Xiang Guo, Huaiguang Wu, Jun Yu, Haoran Chen, Yifan Gao, Xiaowei Huang, Yushan Pan
{"title":"A multi-view and multi-granularity emotional semantic interaction framework combining graph and attention mechanism for multimodal sentiment analysis","authors":"Zuhe Li , Xiang Guo , Huaiguang Wu , Jun Yu , Haoran Chen , Yifan Gao , Xiaowei Huang , Yushan Pan","doi":"10.1016/j.neucom.2025.131380","DOIUrl":"10.1016/j.neucom.2025.131380","url":null,"abstract":"<div><div>Multimodal sentiment analysis aims to distinguish emotional leanings within data by examining information across multiple modalities. The primary responsibility is to effectively harness both the internal and external emotional correlations among modalities while thoroughly extracting implicit emotional cues to accommodate a wide range of semantic contexts. We present MMIF, a Multi-view and Multi-granularity Sentiment Semantic Interaction Framework that tackles these challenges in four concise steps: the Quantum-inspired Temporal Feature Extraction (QTFE) enriches each modality’s temporal dynamics with a quantum-structured LSTM; the Graph Neural Networks Enhanced Intra-modal Representation (GEIR) builds modality-specific graphs and taps diverse GNN variants to strengthen intra-modal reasoning; the Inter-modal Deep Interaction Fusion (IDIF) fuses cross-modal cues using a bidirectionally enhanced attention mechanism; the Emotion Representation Distribution Matching (ERDM) refines the final emotion predictions by capturing multi-granularity distributions of sentiment intensity information. Experimental results on the public datasets CMU-MOSI, CMU-MOSEI, and CH-SIMS show that the proposed model outperforms the compared methods in terms of performance, achieving the best accuracy scores of 89.12 %, 86.79 %, and 80.53 %, as well as F1 scores of 89.13 %, 86.80 %, and 80.71 %, respectively, demonstrating a significant performance advantage.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"654 ","pages":"Article 131380"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144917845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131374
Kyoung Ok Yang, Junho Koh, Jun Won Choi
{"title":"UCFFormer: Recognizing human actions from multimodal sensors using unified contrastive fusion transformer","authors":"Kyoung Ok Yang , Junho Koh , Jun Won Choi","doi":"10.1016/j.neucom.2025.131374","DOIUrl":"10.1016/j.neucom.2025.131374","url":null,"abstract":"<div><div>Human Action Recognition (HAR) is fundamental to intelligent systems, enabling machines to accurately interpret and respond to human activities. Multimodal sensor fusion has emerged as a key approach to enhancing HAR performance by leveraging complementary information from diverse sensing modalities. In this paper, we propose the Unified Contrastive Fusion Transformer (UCFFormer), a novel multimodal fusion architecture designed to integrate heterogeneous sensor data for robust HAR in AI-driven applications. UCFFormer employs a unified transformer framework to model interdependencies across both temporal and modality domains, effectively capturing cross-modal interactions. To improve computational efficiency, we introduce the Factorized Time-Modality Transformer (FTMT), which reduces the complexity of self-attention while preserving rich contextual representations. Additionally, we propose the Multimodal Contrastive Alignment Network (MCANet), which utilizes contrastive loss to align feature distributions across modalities, ensuring semantically consistent feature fusion. Extensive experiments on three benchmark datasets demonstrate the state-of-the-art performance of UCFFormer: 99.99 % Top-1 accuracy on UTD-MHAD, setting a new record; 97.5 % cross-subject and 99.7 % cross-view accuracies on NTU RGB+D. These concrete results highlight the effectiveness of our unified contrastive fusion transformer for robust human action recognition.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131374"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144926781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131340
Fang Gao, Yunxiang Cai, Haotian Yao, Shaodong Li, Qing Gao, Linfei Yin
{"title":"Factorizing value function with hierarchical residual Q-network in multi-agent reinforcement learning","authors":"Fang Gao , Yunxiang Cai , Haotian Yao , Shaodong Li , Qing Gao , Linfei Yin","doi":"10.1016/j.neucom.2025.131340","DOIUrl":"10.1016/j.neucom.2025.131340","url":null,"abstract":"<div><div>Value function decomposition has achieved notable success in Multi-Agent Reinforcement Learning (MARL) under the centralized training with decentralized execution paradigm. Traditional value function decomposition methods typically employ monotonic mixing networks to decompose the optimal joint action-value function in order to ensure consistency between joint and local action selections. However, these networks often face limitations in representational capacity and sample efficiency, making it difficult to accurately fit the reward function and achieve stable convergence, thus leading to suboptimal results. To address these challenges, we propose a novel MARL framework called Hierarchical Residual Q-network (HRQ). The HRQ framework adheres to the Individual-Global-Max principle while applying more relaxed constraints. It features an Outer Residual Network (ORN) that adjusts the joint action-value function to enhance the representational capacity of the mixing network. Additionally, HRQ incorporates an Inner Residual Entropy Auxiliary Network (IREAN) to refine individual action-value functions, addressing credit assignment and value overestimation problems arising from task diversity and agent independence in MARL. Our approach enhances exploration efficiency, sample efficiency, and convergence stability. Extensive experiments on multi-agent cooperative benchmarks, including predator-prey and StarCraft, demonstrate that HRQ outperforms existing methods in convergence speed, stability, and adaptability. Compared with the best comparison method, HRQ achieves an overall performance improvement of 10 %–20 %.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131340"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144933322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131336
Dingge Liang, Marco Corneli, Charles Bouveyron, Pierre Latouche, Junping Yin
{"title":"The multiplex deep latent position model for the clustering of nodes in multiview networks","authors":"Dingge Liang , Marco Corneli , Charles Bouveyron , Pierre Latouche , Junping Yin","doi":"10.1016/j.neucom.2025.131336","DOIUrl":"10.1016/j.neucom.2025.131336","url":null,"abstract":"<div><div>Multiplex networks capture multiple types of interactions among the same set of nodes, creating a complex, multi-relational framework. A typical example is a social network where nodes (actors) are connected by various types of ties, such as professional, familial, or social relationships. Clustering nodes in these networks is a key challenge in unsupervised learning, given the increasing prevalence of multiview data across domains. While previous research has focused on extending statistical models to handle such networks, these adaptations often struggle to fully capture complex network structures and rely on computationally intensive Markov chain Monte Carlo (MCMC) for inference, rendering them less feasible for effective network analysis. To overcome these limitations, we propose the multiplex deep latent position model (MDLPM), which generalizes and extends latent position models to multiplex networks. MDLPM combines deep learning with variational inference to effectively tackle both the modeling and computational challenges raised by multiplex networks. Unlike most existing deep learning models for graphs that require external clustering algorithms (e.g., k-means) to group nodes based on their latent embeddings, MDLPM integrates clustering directly into the learning process, enabling a fully unsupervised, end-to-end approach. This integration improves the ability to uncover and interpret clusters in multiplex networks without relying on external procedures. Numerical experiments across various synthetic data sets and two real-world networks demonstrate the performance of MDLPM compared to state-of-the-art methods, highlighting its applicability and effectiveness for multiplex network analysis.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131336"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144933402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131377
Xiaoxu Li, Shaoying Xue, Jiyang Xie, Xiaochen Yang, Zhanyu Ma, Jing-Hao Xue
{"title":"Interactive triplet attention for few-shot fine-grained image classification","authors":"Xiaoxu Li , Shaoying Xue , Jiyang Xie , Xiaochen Yang , Zhanyu Ma , Jing-Hao Xue","doi":"10.1016/j.neucom.2025.131377","DOIUrl":"10.1016/j.neucom.2025.131377","url":null,"abstract":"<div><div>Few-shot fine-grained classification aims to identify novel fine-grained classes from extremely few examples with ultra-high semantic similarity between classes, hence a notoriously hard task. To extract discriminative features from <em>few samples</em> for recognizing subtle differences between <em>fine-grained classes</em>, it is pivotal to exploit comprehensive interactions across all dimensions in space and channel, which, however, is unexplored yet by state-of-the-art methods in this challenging area. To address this issue, in this paper we show that a simple adjustment to the existing triplet attention module (TAM) can be highly effective for few-shot fine-grained image classification. More specifically, building on TAM which comprises three parallel branches for pairwise interactions between height, width, and channel dimensions, we introduce an additional interaction between the outputs of these three branches, capable of modeling the dependency across all three dimensions; the revised method is dubbed interactive triplet attention module (ITAM). ITAM is a plug-and-play module, which can be inserted into any metric-based few-shot fine-grained image classifiers for performance enhancement. Extensive experiments, on CUB-200-2011, Flowers, Stanford-Cars, and Stanford-Dogs, showcase the superiority of ITAM against state-of-the-art few-shot fine-grained image classifiers.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131377"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144922159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-26 | DOI: 10.1016/j.neucom.2025.131356
Ke Wang, Yuanyuan Liu, Chang Tang, Kun Sun, Yibing Zhan, Zhe Chen
{"title":"Degradation-adaptive attack-robust self-supervised facial representation learning","authors":"Ke Wang , Yuanyuan Liu , Chang Tang , Kun Sun , Yibing Zhan , Zhe Chen","doi":"10.1016/j.neucom.2025.131356","DOIUrl":"10.1016/j.neucom.2025.131356","url":null,"abstract":"<div><div>Self-supervised face representation learning (SFRL) shows strong potential for scalable face-related applications, yet remains vulnerable to adversarial attacks that cause dual facial semantic degradations, namely (1) structured distortions in key facial regions (<em>e.g.</em>, subtle inter-ocular distance shifts) that disrupt identity-related features, and (2) unstructured additive noise (<em>e.g.</em>, illumination artifacts) that entangles with face-related features in latent space. Existing defense methods struggle to deal with both facial semantic degradations in SFRL, resulting in limited robustness. To address this, inspired by existing reverse Diffusion approaches that effectively tackle the image denoising, we propose <strong>DAR-SFRL</strong>, a novel Degradation-adaptive Attack-Robust Self-supervised Face Representation Learning framework. DAR-SFRL models adversarial attacks as a degradation-based function composed of geometric distortions and additive noise, applying a multi-stage reverse Diffusion iterative process to recover facial semantics. At each stage of the process, DAR-SFRL employs: (1) an adaptive degraded-face restoration method that progressively reverses the degradation function and recovers fine-grained details from structured distortions, and (2) a noise-orthogonal contrastive learning mechanism to mitigate the impact of unstructured additive noise by maximizing the dissimilarity between noisy and clean image features in the latent space. Extensive experiments across tasks—including face recognition, facial expression recognition, and facial action unit detection—demonstrate that DAR-SFRL significantly outperforms state-of-the-art defenses under various adversarial attacks, highlighting its robustness and generalization in real-world face-aware applications. Our evaluation code is available at <span><span>https://github.com/23wk/DAR-SFRL</span><svg><path></path></svg></span></div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131356"},"PeriodicalIF":6.5,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144988939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131372
Zhongqi Sun, Yuan Zhou, Shuwei Huo, Sun-Yuan Kung
{"title":"Adaptive motion enhancement for passive non-line-of-sight action recognition","authors":"Zhongqi Sun , Yuan Zhou , Shuwei Huo , Sun-Yuan Kung","doi":"10.1016/j.neucom.2025.131372","DOIUrl":"10.1016/j.neucom.2025.131372","url":null,"abstract":"<div><div>Most current recognition methods for human action are designed for line-of-sight (LOS) scenarios, where the targets are assumed to be directly visible. However, in numerous action recognition applications, we often encounter non-line-of-sight (NLOS) situations, e.g., rescue operations, security, and autonomous vehicles. In this case, it becomes imperative that we develop methods that can effectively identify human actions outside the line of sight, such as those behind obstacles or around corners. It is also important to note that most existing NLOS approaches rely on expensive active imaging equipment, which hinders practical deployment. To address these challenges, we propose <em>AME-Net</em> (Adaptive Motion Enhancement Network), a novel passive NLOS action recognition framework that identifies human actions by analyzing reflections on visible relay walls using only standard RGB cameras. AME-Net adaptively amplifies subtle motion cues and mitigates environmental variability, enabling accurate and robust recognition in NLOS conditions. Furthermore, we introduce NLOS-Action, the first dataset specifically designed for passive NLOS action recognition, containing both synthetic and real-world sequences. Pursuant to our extensive experiments based on the dataset, this paper convincingly demonstrates effectiveness and practicality of the proposed AME-Net for NLOS action recognitions.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"655 ","pages":"Article 131372"},"PeriodicalIF":6.5,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144926574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131379
Chenxia Liao, Huan Wan, Jianfeng Xu, Pengxiang Su, Xin Wei
{"title":"An open-set few-shot face recognition framework with balanced adaptive cohesive mixing","authors":"Chenxia Liao , Huan Wan , Jianfeng Xu , Pengxiang Su , Xin Wei","doi":"10.1016/j.neucom.2025.131379","DOIUrl":"10.1016/j.neucom.2025.131379","url":null,"abstract":"<div><div>Face recognition has made great progress in closed-set and big data-supported scenarios. However, open-set few-shot face recognition (OSFSFR) is also in great demand for real-world applications, but the research on it remains at a rather preliminary and limited stage. The limited research on OSFSFR methods focuses on the optimization of models and metrics, yet they discard valuable base-class knowledge after pre-training. Although using pseudo-labels to retain this knowledge is a recent trend, it is hampered by two significant drawbacks: the injection of feature noise from unreliable labels and aggravated class imbalance caused by the base-novel class distribution gap. Therefore, we propose an innovative framework named Balanced Adaptive Cohesive Mixing (BACM), which effectively leverages base-class samples through three key strategies to tackle data scarcity and generate samples that align with novel class distributions: (1) a quality-based sample filtering mechanism to select informative base samples, (2) Center-guided Adaptive Cohesive Mixing (CGACM) for enhanced sample, and (3) Alleviating the Class Imbalance (ACI) which integrates linear and semantic pseudo-labels to mitigate class imbalance caused by pseudo-labeling. Extensive experimental evaluations on the CASIA and IJB-C datasets demonstrate the superior performance of our method in optimizing recognition accuracy for known identities and maintaining robust rejection capabilities for unknown subjects, surpassing the state-of-the-art approaches. The code is available at: <span><span>https://github.com/chenxialiao/BACM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"654 ","pages":"Article 131379"},"PeriodicalIF":6.5,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144911800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}