Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131389 | Volume 654, Article 131389
Title: SAS-MRI: A semantic-guided approach for arbitrary scale super-resolution in multi-contrast brain MRI
Authors: Zhen Li, Bo Li
Abstract: Super-resolution (SR) techniques have become increasingly significant in medical imaging. In clinical practice, radiologists frequently zoom in on magnetic resonance imaging (MRI) scans at arbitrary scales to fully visualize lesions. However, most current arbitrary-scale SR techniques focus predominantly on learning a pixel-to-pixel mapping from low-resolution (LR) to high-resolution (HR) images, thereby neglecting crucial high-level semantic information such as gray/white matter and inter-slice volumetric context. To tackle this problem, we introduce a Semantic-guided Arbitrary-scale Super-resolution method for brain MRI (SAS-MRI). Our approach centers on leveraging semantic information through three main components: (i) a semantic-aware embedding module that incorporates semantic priors into the feature representation space; (ii) a convolutional attention module that exploits information from adjacent slices within a volume; and (iii) a semantic-guided perceptual loss that enhances image quality by leveraging prior knowledge from medical image segmentation. Extensive experiments on two publicly available MRI datasets, fastMRI and IXI, demonstrate that the proposed SAS-MRI achieves superior Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) performance across multiple scaling factors. It consistently outperforms leading methods, achieving an average PSNR/SSIM of 31.997 dB / 0.899 on fastMRI and 37.871 dB / 0.985 on IXI, surpassing the next best methods by up to 0.20 dB / 0.002 and 0.78 dB / 0.006, respectively.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131387 | Volume 655, Article 131387
Title: Parallel dual-branch network with multi-scale features for unsupervised domain adaption person re-identification
Authors: Hao Li, Shiyi Lei, Yuyang Feng, Xin Zhao, Tao Zhang
Abstract: Extracting discriminative features is a significant challenge for unsupervised person re-identification (Re-ID) when training transfers from the source domain to the target domain. Most unsupervised domain-adaptive methods for person Re-ID rely on single-scale appearance information during feature extraction. These methods may overlook potentially useful explicit information at other scales, and the pseudo-labels obtained through clustering often contain noise. Furthermore, relying on a single-branch network structure can lead to learning stagnation, limiting model generalizability. To resolve these problems, this paper proposes a parallel dual-branch network (PDBN) comprising a backbone branch and a pyramid branch. PDBN leverages multi-scale pyramid-branch features and global backbone-branch features to jointly learn discriminative features. The two branches are not independent of each other but synergistically correlated. Specifically, we formulate a novel mutual-teaching PDBN training strategy for unsupervised domain adaptation in person Re-ID using hard classification loss, hard triplet loss, soft classification loss and soft triplet loss. The backbone branch aims to maximise global feature discriminability, while the pyramid branch extracts multi-scale details from the backbone using a feature pyramid network, so the complementary advantages across scales are optimised concurrently. To reduce pseudo-label noise, Segmenting Dynamic Clustering (SDC) is introduced, which divides training into two phases. In phase 1, SDC uses backbone-branch features for clustering to ensure accurate feature learning. In phase 2, SDC fuses features from both branches by a max operation as the final discriminative features for clustering, enabling interactive reinforcement between them. Extensive experiments conducted on Market1501, DukeMTMC-reID, and MSMT17 demonstrate the superiority of our method compared with state-of-the-art methods for unsupervised cross-domain person Re-ID.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131386 | Volume 654, Article 131386
Title: WEI-SNNs: Spiking neural networks based on excitation-inhibition neurons and widening learnable time constants
Authors: Yuwei Chen, Jiawei Chen, Zhefei Cai, Yingle Fan, Yanming Wang, Minwei Zhu
Abstract: Spiking Neural Networks (SNNs) have gained significant attention in neuromorphic computing for their ability to process spatiotemporal information. However, the discontinuity and non-differentiability of spiking neuron models often lead to gradient vanishing during training, limiting network performance. Inspired by the spatial organization of excitation-inhibition (E-I) mechanisms in biological systems and the modularity of spiking neurons, a novel Excitation-Inhibition Leaky Integrate-and-Fire (EI-LIF) neuron model is proposed. Through dynamic regulation of the membrane potential, the model facilitates more effective gradient propagation. Furthermore, to construct high-performance SNNs from a temporal perspective, this study develops a step-wise time constant update strategy, jointly modulated by excitatory drive and temporal gating. This approach mitigates convergence difficulties in time constant optimization under strong E-I coupling. By integrating both spatial and temporal mechanisms, the Widened Learnable Time Constants and Excitation-Inhibition Spiking Neural Networks (WEI-SNNs) are proposed to enhance training stability and efficiency while maintaining biological plausibility. Experimental validation on static (CIFAR-10, CIFAR-100) and dynamic (DVS-Gesture, DVS-CIFAR10) datasets demonstrates Top-1 accuracies of 94.47%, 75.66%, 98.26%, and 80.50%, respectively, and shows good generalization on large-scale datasets such as ImageNet. Compared to state-of-the-art (SOTA) methods, WEI-SNNs achieve comparable accuracy with a significantly smaller number of parameters. This paper offers innovative algorithmic insights into efficient SNN design, supporting the development of brain-inspired intelligence.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131316 | Volume 654, Article 131316
Title: Shuffle-reshuffle Gradient Mamba for multimodal medical image fusion
Authors: Lewu Lin, Yufa Duan, Yingying Wang, Jialing Huang, Rongjin Zhuang, Xiaotong Tu, Xinghao Ding, Na Shen, Qing Lu
Abstract: Multimodal Medical Image Fusion (MMIF) aims to integrate information from different modalities into a single image with comprehensive information and rich spatial details. In recent years, significant progress has been made in MMIF tasks due to advancements in deep neural networks. State Space Models, such as Mamba, have demonstrated a strong capability in modeling long-range dependencies with linear complexity, offering a promising solution for MMIF. However, the fixed sequence scanning mechanism in Mamba may introduce bias, compromising its global receptive field and preventing effective global modeling. Additionally, Mamba tends to focus overly on global dependency modeling, making it difficult to retain the rich fine-grained spatial details that are critical for MMIF while representing long-range dependencies. Furthermore, directly applying Mamba for cross-modal relationship exploration may result in feature redundancy and attenuation of critical information, making it challenging to effectively integrate complementary cross-modal features. To address these issues, we propose the Shuffle-Reshuffle Gradient Mamba (SRGM) tailored to MMIF. Specifically, we design the Local and Global Gradient Mamba (LGGM) to extract modality-specific features while preserving abundant spatial details. Additionally, we introduce the Attention-Guided Cross Mamba (AGCM) to facilitate effective cross-modal feature fusion and reduce feature redundancy. To further enhance fusion performance, we incorporate a Shuffle-Reshuffle Scanning (SRC) strategy into LGGM to overcome the limitations of Mamba's fixed scanning, achieving unbiased local and global modeling. Through extensive experiments on CT-MRI, PET-MRI, and SPECT-MRI image fusion tasks, our proposed approach surpasses state-of-the-art (SOTA) methods, showcasing superior fusion results.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131206 | Volume 654, Article 131206
Title: ADP-GNN: A spectral graph neural network framework unifying homophilic and heterophilic patterns via adaptive dual-polynomial filters
Authors: Huaguang Zhu, Ran Chen, Liwen Xu
Abstract: Spectral Graph Neural Networks (GNNs), also known as spectral-domain graph filters, have demonstrated state-of-the-art performance in graph-structured data processing tasks. Initially, these filters typically relied on the eigendecomposition of the graph Laplacian matrix to implement the graph Fourier transform, resulting in high computational complexity. To reduce this complexity, numerous polynomial filters have been proposed. However, existing polynomial approaches often utilize predefined, fixed polynomial forms that cannot adapt to varying degrees of graph heterogeneity encountered in practical applications. To address this limitation, this paper first theoretically examines the intrinsic relationship between graph heterogeneity and filter characteristics. Inspired by the principles of ensemble learning, we then propose an Adaptive Dual-Polynomial spectral GNN model (ADP-GNN), which integrates Bernstein and Taylor bases through an adaptive filtering mechanism. Unlike conventional single-basis approaches, our formulation leverages the complementary strengths of Bernstein polynomials, which offer global stability and boundary robustness, and Taylor polynomials, known for their local approximation accuracy. This model effectively exploits initial node features and adaptively captures both homogeneous and heterogeneous structural properties of graphs with varying homophily levels. Experimental evaluations demonstrate that ADP-GNN can effectively learn diverse filter structures, significantly mitigate the oversmoothing problem, and consistently outperform current state-of-the-art methods across multiple benchmark datasets. The source code is publicly available at: https://github.com/zhu165/ADP-GNN.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131375 | Volume 654, Article 131375
Title: A holistic power optimization approach for microgrid control based on deep reinforcement learning
Authors: Fulong Yao, Wanqing Zhao, Matthew Forshaw, Yang Song
Abstract: Microgrid systems integrated with renewable energy sources (RES) and energy storage systems (ESS) have played a crucial role in providing more secure and reliable energy and deepening the penetration of renewables. However, existing approaches to optimizing the operational control of such integrated energy systems lack joint consideration of environmental, infrastructural and economic objectives. This paper presents a holistic data-driven power optimization approach based on deep reinforcement learning (DRL) for microgrid control, considering the multiple needs of decarbonization, sustainability and cost-efficiency. First, two control schemes, namely the prediction-based (PB) and prediction-free (PF) schemes, are devised to formulate the control problem. Second, a holistic reward function is designed to account for operational costs, carbon emissions, peak load and battery degradation together. Third, a double dueling deep Q network (D3QN) is built to optimize the ESS charging and discharging strategies for real-time energy management. Finally, extensive simulations are conducted on a US microgrid to demonstrate the effectiveness of the proposed approach. Results show that the PB scheme outperforms the PF scheme when the prediction error is below 12.5%, while the PF scheme becomes more effective as the error increases. It is also found that the D3QN with the PB scheme can significantly reduce annual operational cost and carbon emissions, while also greatly mitigating battery degradation and peak load.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131384 | Volume 655, Article 131384
Title: S2Trans: Structured spectrum transformer for robust unsupervised video object segmentation
Authors: Chao Zhang, Jiaqing Fan, Mengjuan Jiang, Fanzhang Li
Abstract: Unsupervised Video Object Segmentation (UVOS) aims to identify and segment spatiotemporal regions of the most prominent object in videos. However, achieving accurate and robust segmentation in complex scenarios is challenging due to imbalanced multimodal features and insufficient fine-grained structural representation. To address these issues, we propose the Structured Spectrum Transformer (S2Trans) for UVOS. S2Trans decouples the Transformer into temporal and spatial components, facilitating the integration of locally correlated optical flow to guide spatial attention. Specifically, we introduce a Guided Bimodal Synthesis (GBS) module that dynamically balances the contributions of video frame and flow features by leveraging their interactions, while effectively suppressing misleading information. To capture the multi-scale structural and textural details of objects, which are crucial for accurate UVOS, we propose the Dual-scale Spectrum Gated Feed-Forward (DSGFF) module. Additionally, to relieve global context loss, we design a Hybrid Context spatial Multi-Head Self-Attention (HC-MHSA) mechanism that integrates global tokens into window-based attention. Extensive experiments on three mainstream UVOS benchmarks demonstrate that the proposed framework achieves state-of-the-art performance, validating its effectiveness in balancing multimodal contributions and enriching feature representation.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131368 | Volume 655, Article 131368
Title: ADGAT: Anomaly detection-based graph adversarial defense framework
Authors: Youhuizi Li, Yi Wang, Yuyu Yin, Tingting Liang
Abstract: Graph neural networks (GNNs) are powerful tools for graph representation learning and have achieved excellent performance on tasks such as node classification and link prediction. However, GNNs are vulnerable to adversarial attacks, which inject imperceptible perturbations into the graph and fool GNNs into making wrong predictions. We therefore propose the ADGAT framework, which integrates graph anomaly detection and graph purification through a dynamic attention mechanism. The anomaly detector in ADGAT leverages deep neural networks to decode latent edge values in the pre-processing stage. Attention is applied in the GNN for message passing, together with an extra purification layer. The attention coefficients are dynamically configured for different edge clusters based on the decoded edge values. ADGAT defends against attacks through purification while preserving clean graph information as much as possible. Extensive experiments on real graph datasets demonstrate the adaptability of ADGAT and its superior performance. For instance, on the Cora dataset under Metattack, its defensive performance surpasses traditional baselines by approximately 8% and outperforms advanced robust baselines by about 1.5%.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131363 | Volume 654, Article 131363
Title: FOATLBL: Federated online active transfer learning via broad learning system
Authors: Chang-E Ren, Siyao Cheng, Zehua Xuan
Abstract: Federated learning (FL) is a new paradigm of machine learning in which clients send trained local models to the server, and the server aggregates these models into a global model. FL has received a lot of attention due to its security and effectiveness. However, existing methods depend heavily on a large number of labeled samples from clients, and FL performance deteriorates when clients have only a few labeled samples or poor-quality labeled samples. To face this challenge, we propose federated online active transfer learning via a broad learning system (FOATLBL). In FOATLBL, clients can obtain more important labeled samples in the target domain and adapt to model training with limited labeled samples. FOATLBL also features a new personalized aggregation algorithm that allows the aggregated model to better adapt to each client's local data. Finally, extensive experiments on public transfer learning datasets and human activity recognition datasets show the superior performance of FOATLBL over baseline methods.
Neurocomputing | Pub Date: 2025-08-25 | DOI: 10.1016/j.neucom.2025.131385 | Volume 655, Article 131385
Title: MambaAD: Multivariate time series anomaly detection in IoT via multi-view Mamba
Authors: Shuxin Qin, Jing Zhu, Aipeng Guo, Yansong Yang, Lu Wang, Gaofeng Tao
Abstract: Multivariate time series anomaly detection (MTSAD) in Internet of Things (IoT) systems is a crucial area of research that aims to enhance cybersecurity, prevent disruptions and improve quality of service. To cope with the growing complexity of sensory data, recent approaches emphasize learning the temporal dynamics of each signal with Transformers and capturing correlations between signals with graph learning. However, these methods still struggle with two major challenges. First, deep models using Transformer architectures and graph structures are computationally inefficient due to their quadratic complexity, which limits their scalability and applicability. Second, it is difficult to learn generalized latent patterns from limited but highly sensitive training data, which makes the models prone to overfitting. To address these challenges, we propose MambaAD, an efficient anomaly detection framework based on the linear-time state space model. Specifically, we segment and tokenize the time series to retain local semantic information in the embeddings. Then, we develop a bidirectional Mamba module to extract temporal dependencies and inter-signal correlations from different perspectives. Finally, the captured features are fused and projected for reconstruction and scoring. We also provide a multi-view token masking strategy and a contrastive learning mechanism to improve both representation quality and generalization performance. Thorough experimentation with six real-world datasets across different fields indicates that the proposed method surpasses current state-of-the-art benchmarks in terms of both efficiency and accuracy.