Neural Networks: Latest Articles

StochCA: A novel approach for exploiting pretrained models with cross-attention
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-23, DOI: 10.1016/j.neunet.2024.106663
Abstract: Utilizing large-scale pretrained models is a well-known strategy to enhance performance on various target tasks. It is typically achieved through fine-tuning pretrained models on target tasks. However, naïve fine-tuning may not fully leverage knowledge embedded in pretrained models. In this study, we introduce a novel fine-tuning method, called stochastic cross-attention (StochCA), specific to Transformer architectures. This method modifies the Transformer's self-attention mechanism to selectively utilize knowledge from pretrained models during fine-tuning. Specifically, in each block, instead of self-attention, cross-attention is performed stochastically with a predefined probability, where keys and values are extracted from the corresponding block of a pretrained model. By doing so, the queries and channel-mixing multi-layer perceptron layers of the target model are fine-tuned to target tasks to learn how to effectively exploit the rich representations of pretrained models. To verify the effectiveness of StochCA, extensive experiments are conducted on benchmarks in the areas of transfer learning and domain generalization, where the exploitation of pretrained models is critical. Our experimental results show the superiority of StochCA over state-of-the-art approaches in both areas. Furthermore, we demonstrate that StochCA is complementary to existing approaches, i.e., it can be combined with them to further improve performance. We release the code at https://github.com/daintlab/stochastic_cross_attention.
An illustrative code sketch of the stochastic cross-attention idea follows this entry.
Citations: 0
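A minimal sketch of the stochastic cross-attention mechanism as described in the abstract above: with a predefined probability, a block draws keys and values from the corresponding block of a frozen pretrained model instead of attending to its own tokens. The module and parameter names are my own, and the block layout (pre-norm, 4x MLP) is an assumption; the authors' actual implementation is in the linked GitHub repository.

```python
import torch
import torch.nn as nn

class StochasticCrossAttentionBlock(nn.Module):
    """Transformer block that, with probability p_cross, takes keys/values from
    the matching block of a frozen pretrained model (hypothetical layout)."""
    def __init__(self, dim: int, num_heads: int, p_cross: float = 0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.p_cross = p_cross

    def forward(self, x: torch.Tensor, pretrained_x: torch.Tensor) -> torch.Tensor:
        # pretrained_x: hidden states of the same block in the frozen pretrained model
        h = self.norm1(x)
        if self.training and torch.rand(1).item() < self.p_cross:
            kv = self.norm1(pretrained_x).detach()  # cross-attention: keys/values from the pretrained branch
        else:
            kv = h                                  # ordinary self-attention
        attn_out, _ = self.attn(query=h, key=kv, value=kv)
        x = x + attn_out
        return x + self.mlp(self.norm2(x))

# toy usage: (batch, tokens, dim) hidden states from the target and pretrained models
block = StochasticCrossAttentionBlock(dim=64, num_heads=4, p_cross=0.5)
out = block(torch.randn(2, 16, 64), pretrained_x=torch.randn(2, 16, 64))
```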
Inter-participant transfer learning with attention based domain adversarial training for P300 detection
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106655
Abstract: A brain-computer interface (BCI) system establishes a novel communication channel between the human brain and a computer. Most event-related potential (ERP)-based BCI applications make use of decoding models, which require training. This training process is often time-consuming and inconvenient for new users. In recent years, deep learning models, especially participant-independent models, have garnered significant attention in the domain of ERP classification. However, individual differences in EEG signals hamper model generalization, as the ERP component and other aspects of the EEG signal vary across participants, even when they are exposed to the same stimuli. This paper proposes a novel one-source domain transfer learning method, the Attention Domain Adversarial Neural Network (OADANN), to mitigate data distribution discrepancies for cross-participant classification tasks. We train and validate the proposed model on both the publicly available OpenBMI dataset and a self-collected dataset, employing a leave-one-participant-out cross-validation scheme. Experimental results demonstrate that the proposed OADANN method achieves the highest and most robust classification performance and exhibits significant improvements when compared to baseline methods (CNN, EEGNet, ShallowNet, DeepCovNet) and domain generalization methods (ERM, Mixup, and GroupDRO). These findings underscore the efficacy of our proposed method.
An illustrative sketch of the domain-adversarial training ingredient follows this entry.
Citations: 0
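The core domain-adversarial ingredient that DANN-style methods such as OADANN build on is a gradient reversal layer feeding a domain (here, participant) classifier. The sketch below illustrates only that generic pattern; the layer sizes, the omitted attention module, and all names are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialP300Net(nn.Module):
    def __init__(self, n_features: int, n_domains: int):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())  # stand-in feature extractor
        self.label_head = nn.Linear(64, 2)            # P300 vs. non-P300
        self.domain_head = nn.Linear(64, n_domains)   # which participant the trial came from

    def forward(self, x: torch.Tensor, lambd: float = 1.0):
        f = self.feature(x)
        y_logits = self.label_head(f)
        d_logits = self.domain_head(GradReverse.apply(f, lambd))  # adversarial branch
        return y_logits, d_logits

# toy usage: 32 trials, 256 flattened EEG features, 10 training participants
net = DomainAdversarialP300Net(n_features=256, n_domains=10)
y, d = net(torch.randn(32, 256))
```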
Reconstruct incomplete relation for incomplete modality brain tumor segmentation
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106657
Abstract: Different brain tumor magnetic resonance imaging (MRI) modalities provide diverse tumor-specific information. Previous works have enhanced brain tumor segmentation performance by integrating multiple MRI modalities. However, multi-modal MRI data are often unavailable in clinical practice. An incomplete modality leads to missing tumor-specific information, which degrades the performance of existing models. Various strategies have been proposed to transfer knowledge from a full-modality network (teacher) to an incomplete-modality one (student) to address this issue. However, they neglect the fact that brain tumor segmentation is a structural prediction problem that requires voxel semantic relations. In this paper, we propose a Reconstruct Incomplete Relation Network (RIRN) that transfers voxel semantic relational knowledge from the teacher to the student. Specifically, we propose two types of voxel relations to incorporate structural knowledge: class-relative relations (CRR) and class-agnostic relations (CAR). The CRR groups voxels into different tumor regions and constructs a relation between them. The CAR builds a global relation between all voxel features, complementing the local inter-region relation. Moreover, we use adversarial learning to align the holistic structural prediction between the teacher and the student. Extensive experimentation on both the BraTS 2018 and BraTS 2020 datasets establishes that our method outperforms all state-of-the-art approaches.
An illustrative sketch of the relation-distillation idea follows this entry.
Citations: 0
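As an illustration of the class-agnostic relation (CAR) described above, the sketch below builds a global voxel-to-voxel cosine-similarity matrix for teacher and student features and penalizes their mismatch. This covers only that one ingredient; the exact relation definitions and the class-relative and adversarial parts are the paper's, and the shapes and names here are assumptions.

```python
import torch
import torch.nn.functional as F

def relation_matrix(feat: torch.Tensor) -> torch.Tensor:
    """feat: (batch, channels, n_voxels) -> (batch, n_voxels, n_voxels) cosine relations."""
    feat = F.normalize(feat, dim=1)                 # unit-norm feature per voxel
    return torch.einsum("bcn,bcm->bnm", feat, feat)

def car_distill_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Match the student's global voxel-relation structure to the (frozen) teacher's."""
    return F.mse_loss(relation_matrix(student_feat),
                      relation_matrix(teacher_feat).detach())

# toy usage: 2 volumes, 16 feature channels, 512 (sub-sampled) voxels
loss = car_distill_loss(torch.randn(2, 16, 512), torch.randn(2, 16, 512))
```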
A new hybrid learning control system for robots based on spiking neural networks
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106656
Abstract: This paper presents a new hybrid learning and control method that can tune its parameters based on reinforcement learning. In the proposed method, nonlinear controllers are treated as multi-input multi-output functions, and these functions are then replaced with spiking neural networks (SNNs) equipped with reinforcement learning algorithms. Dopamine-modulated spike-timing-dependent plasticity (STDP) is used for reinforcement learning and for manipulating the synaptic weights between the input and output neuronal groups (for parameter adjustment). Details of the method are presented, and case studies are conducted on nonlinear controllers such as fractional-order PID (FOPID) and feedback linearization. The structure and the dynamic equations for learning are presented, the proposed algorithm is tested on robots, and the results are compared with other works. Moreover, to demonstrate the effectiveness of SNNFOPID, we conducted rigorous testing on a variety of systems including a two-wheel mobile robot, a double inverted pendulum, and a four-link manipulator robot. The results revealed impressively low errors of 0.01 m, 0.03 rad, and 0.03 rad for each system, respectively. The method is also tested on another controller, feedback linearization, which provides acceptable results. Results show that the new method has better performance in terms of Integral Absolute Error (IAE) and is highly useful in hardware implementation due to its low energy consumption, high speed, and accuracy. The duration necessary for achieving full and stable proficiency in the control of various robotic systems using SNNFOPID and SNNFL on an Asus Core i5 system within Simulink's Simscape environment is as follows:
– Two-link robot manipulator with SNNFOPID: 19.85656 hours
– Two-link robot manipulator with SNNFL: 0.45828 hours
– Double inverted pendulum with SNNFOPID: 3.455 hours
– Mobile robot with SNNFOPID: 3.71948 hours
– Four-link robot manipulator with SNNFOPID: 16.6789 hours
The method can be generalized to other controllers and systems.
An illustrative sketch of a reward-modulated STDP update follows this entry.
Citations: 0
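The learning rule named in the abstract, dopamine-modulated STDP, is commonly implemented as pair-based STDP accumulated in an eligibility trace that a scalar reward (dopamine) signal then gates. The sketch below shows that generic reward-modulated STDP update; all constants, shapes, and names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def rstdp_step(w, elig, pre_trace, post_trace, pre_spk, post_spk, reward,
               a_plus=0.01, a_minus=0.012, tau_e=0.2, tau_s=0.02, lr=0.1, dt=1e-3):
    """One reward-modulated STDP step.
    w, elig: (n_post, n_pre); pre_trace/pre_spk: (n_pre,); post_trace/post_spk: (n_post,)."""
    # low-pass spike traces used for pair-based STDP
    pre_trace = pre_trace * np.exp(-dt / tau_s) + pre_spk
    post_trace = post_trace * np.exp(-dt / tau_s) + post_spk
    # pre-before-post potentiates, post-before-pre depresses
    dw = a_plus * np.outer(post_spk, pre_trace) - a_minus * np.outer(post_trace, pre_spk)
    # eligibility trace stores candidate changes; the dopamine/reward signal gates them
    elig = elig * np.exp(-dt / tau_e) + dw
    w = w + lr * reward * elig
    return w, elig, pre_trace, post_trace

# toy usage: 4 input neurons, 2 output neurons, one simulation step with positive reward
w = np.zeros((2, 4)); elig = np.zeros_like(w)
pre_tr, post_tr = np.zeros(4), np.zeros(2)
w, elig, pre_tr, post_tr = rstdp_step(w, elig, pre_tr, post_tr,
                                      pre_spk=np.array([1, 0, 0, 1]),
                                      post_spk=np.array([0, 1]),
                                      reward=1.0)
```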
Attention-based stackable graph convolutional network for multi-view learning
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106648
Abstract: In multi-view learning, graph-based methods such as the Graph Convolutional Network (GCN) are extensively researched due to their effective graph processing capabilities. However, most GCN-based methods require complex preliminary operations such as sparsification, which may bring additional computation costs and training difficulties. Additionally, as the number of stacked layers increases in most GCNs, the over-smoothing problem arises, resulting in ineffective utilization of GCN capabilities. In this paper, we propose an attention-based stackable graph convolutional network that captures consistency across views and combines an attention mechanism to exploit the powerful aggregation capability of GCN while effectively mitigating over-smoothing. Specifically, we introduce node self-attention to establish dynamic connections between nodes and generate view-specific representations. To maintain cross-view consistency, a data-driven approach is devised to assign attention weights to views, forming a common representation. Finally, based on residual connectivity, we apply an attention mechanism to the original projection features to generate layer-specific complementarity, which compensates for the information loss during graph convolution. Comprehensive experimental results demonstrate that the proposed method outperforms other state-of-the-art methods in multi-view semi-supervised tasks.
An illustrative sketch of attention-weighted view fusion follows this entry.
Citations: 0
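One step the abstract describes, assigning data-driven attention weights to views to form a common representation, can be illustrated as below: a learned score per view is softmax-normalized and used to average the view-specific node representations. The scoring function and tensor shapes are my assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    """Fuse view-specific node representations with learned per-view attention weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # assumed scoring function

    def forward(self, view_reps: torch.Tensor) -> torch.Tensor:
        # view_reps: (n_views, n_nodes, dim)
        s = self.score(view_reps.mean(dim=1))           # one score per view
        alpha = torch.softmax(s, dim=0).unsqueeze(-1)   # (n_views, 1, 1) attention weights
        return (alpha * view_reps).sum(dim=0)           # (n_nodes, dim) common representation

# toy usage: 3 views, 100 nodes, 16-dimensional representations
common = ViewAttentionFusion(dim=16)(torch.randn(3, 100, 16))
```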
SFT-SGAT: A semi-supervised fine-tuning self-supervised graph attention network for emotion recognition and consciousness detection
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106643
Abstract: Emotion recognition is highly important in the field of brain-computer interfaces (BCIs). However, due to the individual variability of electroencephalogram (EEG) signals and the challenges in obtaining accurate emotional labels, traditional methods have shown poor performance in cross-subject emotion recognition. In this study, we propose a cross-subject EEG emotion recognition method based on a semi-supervised fine-tuning self-supervised graph attention network (SFT-SGAT). First, we model multi-channel EEG signals by constructing a graph structure that dynamically captures their spatiotemporal topological features. Second, we employ a self-supervised graph attention neural network to facilitate model training, mitigating the impact of signal noise on the model. Finally, a semi-supervised approach is used to fine-tune the model, enhancing its generalization ability in cross-subject classification. By combining supervised and unsupervised learning techniques, SFT-SGAT maximizes the utility of limited labeled data in EEG emotion recognition tasks, thereby enhancing the model's performance. Experiments based on leave-one-subject-out cross-validation demonstrate that SFT-SGAT achieves state-of-the-art cross-subject emotion recognition performance on the SEED and SEED-IV datasets, with accuracies of 92.04% and 82.76%, respectively. Furthermore, experiments conducted on a self-collected dataset comprising ten healthy subjects and eight patients with disorders of consciousness (DOCs) revealed that SFT-SGAT attains high classification performance in healthy subjects (maximum accuracy of 95.84%) and was successfully applied to DOC patients, with four patients achieving emotion recognition accuracies exceeding 60%. The experiments demonstrate the effectiveness of the proposed SFT-SGAT model in cross-subject EEG emotion recognition and its potential for assessing levels of consciousness in patients with DOC.
An illustrative sketch of the leave-one-subject-out evaluation protocol follows this entry.
Citations: 0
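The evaluation protocol named in the abstract, leave-one-subject-out cross-validation, is sketched below in a model-agnostic way; `train_model` and `evaluate` are placeholder callables standing in for whatever training and scoring routine is used, not the paper's API.

```python
import numpy as np

def leave_one_subject_out(features_by_subject, labels_by_subject, train_model, evaluate):
    """features_by_subject / labels_by_subject: dicts keyed by subject id.
    train_model(X, y) -> model; evaluate(model, X, y) -> accuracy."""
    subjects = list(features_by_subject)
    accuracies = []
    for held_out in subjects:
        # train on every subject except the held-out one
        train_X = np.concatenate([features_by_subject[s] for s in subjects if s != held_out])
        train_y = np.concatenate([labels_by_subject[s] for s in subjects if s != held_out])
        model = train_model(train_X, train_y)
        # test on the held-out subject only
        accuracies.append(evaluate(model, features_by_subject[held_out], labels_by_subject[held_out]))
    return float(np.mean(accuracies)), accuracies
```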
DiagSWin: A multi-scale vision transformer with diagonal-shaped windows for object detection and segmentation
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106653
Abstract: Recently, the Vision Transformer and its variants have demonstrated remarkable performance on various computer vision tasks, thanks to their competence in capturing global visual dependencies through self-attention. However, global self-attention suffers from high computational cost due to its quadratic computational overhead, especially for high-resolution vision tasks (e.g., object detection and semantic segmentation). Many recent works have attempted to reduce the cost by applying fine-grained local attention, but these approaches cripple the long-range modeling power of the original self-attention mechanism. Furthermore, these approaches usually have similar receptive fields within each layer, thus limiting the ability of each self-attention layer to capture multi-scale features and resulting in performance degradation when handling images with objects of different scales. To address these issues, we develop the Diagonal-shaped Window (DiagSWin) attention mechanism for modeling attention in diagonal regions at hybrid scales per attention layer. The key idea of DiagSWin attention is to inject multi-scale receptive field sizes into tokens: before computing the self-attention matrix, each token attends to its closest surrounding tokens at fine granularity and to the tokens far away at coarse granularity. This mechanism effectively captures multi-scale context information while reducing computational complexity. With DiagSWin attention, we present a new variant of Vision Transformer models, called DiagSWin Transformers, and demonstrate their superiority in extensive experiments across various tasks. Specifically, the DiagSWin Transformer with a large size achieves 84.4% Top-1 accuracy and outperforms the SOTA CSWin Transformer on ImageNet with 40% smaller model size and computation cost. When employed as backbones, DiagSWin Transformers achieve significant improvements over the current SOTA modules. In addition, our DiagSWin-Base model yields 51.1 box mAP and 45.8 mask mAP on COCO for object detection and segmentation, and 52.3 mIoU on ADE20K for semantic segmentation.
An illustrative toy sketch of the fine-near/coarse-far idea follows this entry.
Citations: 0
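The core idea stated in the abstract, that each token attends to nearby tokens at fine granularity and distant tokens at coarse granularity, is illustrated below with a deliberately simplified 1-D key/value construction: a local window kept at full resolution plus average-pooled summaries of the distant tokens. It ignores the diagonal window layout and the 2-D structure of the actual DiagSWin attention; window and pooling sizes are arbitrary assumptions.

```python
import torch

def multiscale_keys(x: torch.Tensor, q_idx: int, local: int = 4, pool: int = 4) -> torch.Tensor:
    """x: (n_tokens, dim). Build the key/value set for the query at position q_idx:
    nearby tokens kept at full resolution, distant tokens average-pooled in groups."""
    n, dim = x.shape
    lo, hi = max(0, q_idx - local), min(n, q_idx + local + 1)
    fine = x[lo:hi]                                      # fine granularity: the local window
    far = torch.cat([x[:lo], x[hi:]], dim=0)             # everything outside the window
    if far.size(0) >= pool:
        far = far[: (far.size(0) // pool) * pool]        # drop the remainder for simplicity
        coarse = far.reshape(-1, pool, dim).mean(dim=1)  # coarse granularity: pooled summaries
    else:
        coarse = far
    return torch.cat([fine, coarse], dim=0)

# toy usage: 64 tokens of dimension 32, query at position 10 -> far fewer keys than 64
kv = multiscale_keys(torch.randn(64, 32), q_idx=10)
```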
A surrogate-assisted extended generative adversarial network for parameter optimization in free-form metasurface design
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106654
Abstract: Metasurfaces have widespread applications in fifth-generation (5G) microwave communication. Among the metasurface family, free-form metasurfaces excel in achieving intricate spectral responses compared to their regular-shape counterparts. However, conventional numerical methods for free-form metasurfaces are time-consuming and demand specialized expertise. Alternatively, recent studies demonstrate that deep learning has great potential to accelerate and refine metasurface designs. Here, we present XGAN, an extended generative adversarial network (GAN) with a surrogate for high-quality free-form metasurface designs. The proposed surrogate provides a physical constraint to XGAN so that XGAN can accurately generate metasurfaces monolithically from input spectral responses. In comparative experiments involving 20,000 free-form metasurface designs, XGAN achieves 0.9734 average accuracy and is 500 times faster than the conventional methodology. This method facilitates the building of metasurface libraries for specific spectral responses and can be extended to various inverse design problems, including optical metamaterials, nanophotonic devices, and drug discovery.
An illustrative sketch of a surrogate-constrained generator update follows this entry.
Citations: 0
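A common way to realize the surrogate-as-physical-constraint idea described above is to add, on top of the usual adversarial loss, a term that pushes a pretrained surrogate's predicted spectrum for each generated design toward the target spectrum. The sketch below shows that generic generator update with toy MLP stand-ins; the network shapes, loss weighting, and all names are assumptions, not XGAN's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def generator_step(gen, disc, surrogate, target_spectrum, z, opt_g, lam=1.0):
    """One update: fool the discriminator AND satisfy the surrogate (physical constraint)."""
    opt_g.zero_grad()
    design = gen(torch.cat([z, target_spectrum], dim=1))         # design conditioned on the target
    adv_loss = F.binary_cross_entropy_with_logits(
        disc(design), torch.ones(design.size(0), 1))             # look "real" to the discriminator
    phys_loss = F.mse_loss(surrogate(design), target_spectrum)   # surrogate-predicted spectrum matches target
    loss = adv_loss + lam * phys_loss
    loss.backward()
    opt_g.step()
    return loss.item()

# toy shapes: 8-dim latent, 32-point spectrum, 64-pixel flattened design pattern
gen = nn.Sequential(nn.Linear(8 + 32, 64), nn.Tanh())
disc = nn.Sequential(nn.Linear(64, 1))
surrogate = nn.Sequential(nn.Linear(64, 32))   # stand-in for a pretrained spectrum predictor
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
generator_step(gen, disc, surrogate, torch.rand(4, 32), torch.randn(4, 8), opt_g)
```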
Multi-task heterogeneous graph learning on electronic health records
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106644
Abstract: Learning from electronic health records (EHRs) has received emerging attention because of its capability to facilitate accurate medical diagnosis. Since EHRs contain enriched information specifying complex interactions between entities, modeling EHRs with graphs is shown to be effective in practice. EHRs, however, present a great degree of heterogeneity, sparsity, and complexity, which hamper the performance of most of the models applied to them. Moreover, existing approaches modeling EHRs often focus on learning representations for a single task, overlooking the multi-task nature of EHR analysis problems and resulting in limited generalizability across tasks. In view of these limitations, we propose a novel framework for EHR modeling, namely MulT-EHR (Multi-Task EHR), which leverages a heterogeneous graph to mine the complex relations and model the heterogeneity in the EHRs. To mitigate the large degree of noise, we introduce a denoising module based on the causal inference framework to adjust for severe confounding effects and reduce noise in the EHR data. Additionally, since our model adopts a single graph neural network for simultaneous multi-task prediction, we design a multi-task learning module to leverage the inter-task knowledge to regularize the training process. Extensive empirical studies on the MIMIC-III and MIMIC-IV datasets validate that the proposed method consistently outperforms state-of-the-art designs in four popular EHR analysis tasks: drug recommendation and prediction of length of stay, mortality, and readmission. Thorough ablation studies demonstrate the robustness of our method upon variations to key components and hyperparameters.
An illustrative sketch of a shared-encoder multi-task head arrangement follows this entry.
Citations: 0
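The general pattern described above, one shared encoder feeding simultaneous heads for the four tasks, is sketched below. The encoder is a plain MLP stand-in rather than the paper's heterogeneous graph neural network, and the head dimensions and task names are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskEHRHeads(nn.Module):
    """Shared encoder with one prediction head per EHR task (illustrative stand-in)."""
    def __init__(self, in_dim: int = 128, hid: int = 64, n_drugs: int = 50):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())  # stand-in for the shared graph encoder
        self.heads = nn.ModuleDict({
            "mortality":   nn.Linear(hid, 1),        # binary classification
            "readmission": nn.Linear(hid, 1),        # binary classification
            "los":         nn.Linear(hid, 1),        # length of stay (regression)
            "drug_rec":    nn.Linear(hid, n_drugs),  # multi-label drug recommendation
        })

    def forward(self, x: torch.Tensor) -> dict:
        h = self.encoder(x)
        return {task: head(h) for task, head in self.heads.items()}

# toy usage: 8 patient-visit embeddings of dimension 128; a training loop would sum the per-task losses
outputs = MultiTaskEHRHeads()(torch.randn(8, 128))
```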
Learning functional brain networks with heterogeneous connectivities for brain disease identification
IF 6.0 | Q1 | Computer Science
Neural Networks, Pub Date: 2024-08-22, DOI: 10.1016/j.neunet.2024.106660
Abstract: Functional brain networks (FBNs), which are used to portray interactions between different brain regions, have been widely used to identify potential biomarkers of neurological and mental disorders. The FBNs estimated using current methods tend to be homogeneous, indicating that different brain regions exhibit the same type of correlation. This homogeneity limits our ability to accurately encode complex interactions within the brain. Therefore, in the present study, to the best of our knowledge for the first time, we propose the existence of heterogeneous FBNs and introduce a novel FBN estimation model that adaptively assigns heterogeneous connections to different pairs of brain regions, thereby effectively encoding the complex interaction patterns in the brain. Specifically, we first construct multiple types of candidate correlations from different views or based on different methods, and we then develop an improved orthogonal matching pursuit algorithm to select at most one correlation for each brain region pair under the guidance of label information. The adaptively estimated heterogeneous FBNs are then used to distinguish subjects with neurological/mental disorders from healthy controls and to identify potential biomarkers related to these disorders. Experimental results on real datasets show that the proposed scheme improves classification performance by 7.07% and 7.58% at the two sites, respectively, compared with the baseline approaches. This emphasizes the plausibility of the heterogeneity hypothesis and the effectiveness of the heterogeneous connection assignment algorithm.
An illustrative sketch of a simplified label-guided connection selection follows this entry.
Citations: 0
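To illustrate the "at most one correlation type per region pair" idea described above, the sketch below uses a deliberately simplified selection rule: for each pair, keep the candidate correlation type whose values best separate the two diagnostic groups (largest absolute Pearson correlation with the label), or no connection if it falls below a threshold. The paper instead uses an improved orthogonal matching pursuit algorithm; the threshold, shapes, and names here are assumptions.

```python
import numpy as np

def select_heterogeneous_connections(candidates: np.ndarray, labels: np.ndarray,
                                     threshold: float = 0.2) -> np.ndarray:
    """candidates: (n_types, n_subjects, n_regions, n_regions) candidate correlation matrices.
    labels: (n_subjects,) with values in {0, 1}.
    Returns an (n_regions, n_regions) integer map of the chosen type per pair (-1 = no edge)."""
    n_types, n_subjects, n_regions, _ = candidates.shape
    y = (labels - labels.mean()) / (labels.std() + 1e-12)
    choice = -np.ones((n_regions, n_regions), dtype=int)
    for i in range(n_regions):
        for j in range(i + 1, n_regions):
            scores = []
            for t in range(n_types):
                v = candidates[t, :, i, j]
                v = (v - v.mean()) / (v.std() + 1e-12)
                scores.append(abs(float(np.mean(v * y))))    # |corr(candidate value, label)|
            best = int(np.argmax(scores))
            if scores[best] >= threshold:
                choice[i, j] = choice[j, i] = best           # at most one type per region pair
    return choice

# toy usage: 3 candidate correlation types, 20 subjects, 10 brain regions
rng = np.random.default_rng(0)
edge_types = select_heterogeneous_connections(rng.standard_normal((3, 20, 10, 10)),
                                              labels=rng.integers(0, 2, size=20))
```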