Latest Articles in IEEE Transactions on Neural Networks and Learning Systems

Dual Consistency Constraint-Based Self-Supervised Representation Learning for Heterogeneous Graphs With Missing Attributes.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-18 DOI: 10.1109/TNNLS.2025.3547463
Yajie Lei, Yujie Mo, Luping Ji, Xiaofeng Zhu
Missing attribute completion for unattributed nodes in heterogeneous graphs has received increasing attention, but previous works still suffer from the following issues: 1) they ignore the noise in the raw attributes, resulting in noise propagation and even inaccurate information generation during attribute completion, thus further influencing the representation learning; and 2) they ignore constraints on unattributed nodes when conducting consistency learning across augmented graph views, resulting in data inconsistency across views. To solve these issues, in this article, we propose a new dual consistency constraint-based self-supervised representation learning method for heterogeneous graphs with missing attributes. Specifically, we first investigate the representation completion and the within-view consistency loss to complete missing information in the representation space, and then, we investigate the cross-view consistency loss to ensure data consistency across views. We further reconstruct the masked data to avoid information loss due to the masking process. As a result, our method effectively filters out noise and inaccurate information by the representation completion process as well as achieves discriminative representation learning for heterogeneous graphs with missing attributes. Experimental results on various downstream tasks verify the superiority of our method.
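The paper completes missing information in the representation space, but the underlying idea of borrowing information from attributed neighbors can be sketched with a toy neighbor-mean baseline (an illustration only, not the paper's method; the graph and attributes below are invented):

```python
import numpy as np

# Toy graph: 5 nodes, nodes 3 and 4 lack attributes (NaN rows).
# Complete each unattributed node from the mean of its attributed neighbors.
adj = np.array([
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
], dtype=float)
attrs = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [np.nan, np.nan],   # unattributed node
    [np.nan, np.nan],   # unattributed node
])

has_attr = ~np.isnan(attrs[:, 0])
completed = attrs.copy()
for i in np.where(~has_attr)[0]:
    nbrs = np.where((adj[i] > 0) & has_attr)[0]  # attributed neighbors only
    if len(nbrs) > 0:
        completed[i] = attrs[nbrs].mean(axis=0)

print(completed)
```

Node 3 averages its attributed neighbors 0 and 2; node 4 inherits node 1's attributes. The paper's method additionally filters raw-attribute noise by working on learned representations rather than raw attributes.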
Citations: 0
Neuroadaptive Control With Enhanced Stability and Reliability.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-18 DOI: 10.1109/TNNLS.2025.3542551
Kaili Xiang, Ruotong Ming, Siyu Chen, Frank L Lewis
The performance of neural network (NN)-driven control systems hinges on the reliability and functionality of the NN unit in the controller. Maintaining the compact set condition for NN training signals (inputs) during operation is crucial for preserving the NN's universal learning and approximation capabilities, yet this requirement is often overlooked in existing studies. This article introduces a constraint transformation-based design method that ensures excitation signals always originate from a fixed region, regardless of initial conditions. By meeting the compactness condition required by the universal approximation theorem, this approach safeguards the functionality of the NN-driven control unit. Additionally, a decaying damping rate is employed to enable the tracking error to asymptotically converge to zero, rather than being merely uniformly ultimately bounded (UUB). To further ensure robust operation even if the NN underperforms due to an insufficient number of neurons or violation of the compact set condition, a new control strategy is developed based on the worst-case behavior of NNs. This "fail-secure" mechanism significantly enhances the reliability of the NN-based control scheme. The effectiveness and benefits of the proposed method are confirmed through numerical simulations, demonstrating its potential to substantially improve the robustness and performance of NN-driven control systems.
Citations: 0
Neuron Perception Inspired EEG Emotion Recognition With Parallel Contrastive Learning.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3546283
Dongdong Li, Shengyao Huang, Li Xie, Zhe Wang, Jiazhen Xu
Considerable interindividual variability exists in electroencephalogram (EEG) signals, resulting in challenges for subject-independent emotion recognition tasks. Current research in cross-subject EEG emotion recognition has been insufficient in uncovering the shared neural underpinnings of affective processing in the human brain. To address this issue, we propose the parallel contrastive multisource domain adaptation (PCMDA) model, inspired by the neural representation mechanism in the ventral visual cortex. Our model employs a neuron-perception-inspired contrastive learning architecture for EEG-based emotion recognition in subject-independent scenarios. A two-stage alignment methodology is employed for the purpose of aligning numerous source domains with the target domain. This approach integrates a parallel contrastive loss (PCL), which simulates the self-supervised learning mechanism inherent in the neural representation of the human brain. Furthermore, a self-attention mechanism is integrated to extract emotion weights for each frequency band. Extensive experiments were conducted on three publicly available EEG emotion datasets, SJTU emotion EEG dataset (SEED), database for emotion analysis using physiological signals (DEAP), and finer-grained affective computing EEG dataset (FACED), to evaluate our proposed method. The results demonstrate that the PCMDA effectively utilizes the unique EEG features and frequency band information of each subject, leading to improved generalization across different subjects in comparison to other methods.
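The contrastive-loss family that PCL belongs to can be illustrated with a plain InfoNCE objective between two views of the same batch (a generic sketch only; the paper's parallel contrastive loss and domain-alignment terms are not reproduced here):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views; row i of z1 and z2 form a positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # temperature-scaled cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # -log softmax of the positive pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # nearly identical views
loss_random = info_nce(z, rng.normal(size=z.shape))              # unrelated "views"
print(loss_aligned, loss_random)
```

Well-aligned views give a much lower loss than unrelated ones, which is the property contrastive alignment methods exploit when pulling source and target embeddings together.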
Citations: 0
Partial Differential Equations Meet Deep Neural Networks: A Survey.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3545967
Shudong Huang, Wentao Feng, Chenwei Tang, Zhenan He, Caiyang Yu, Jiancheng Lv
Many problems in science and engineering can be mathematically modeled using partial differential equations (PDEs), which are essential for fields like computational fluid dynamics (CFD), molecular dynamics, and dynamical systems. Although traditional numerical methods like the finite difference/element method are widely used, their computational inefficiency, due to the large number of iterations required, has long been a challenge. Recently, deep learning (DL) has emerged as a promising alternative for solving PDEs, offering new paradigms beyond conventional methods. Despite the growing interest in techniques like physics-informed neural networks (PINNs), a systematic review of the diverse neural network (NN) approaches for PDEs is still missing. This survey fills that gap by categorizing and reviewing the current progress of deep NNs (DNNs) for PDEs. Unlike previous reviews focused on specific methods like PINNs, we offer a broader taxonomy and analyze applications across scientific, engineering, and medical fields. We also provide a historical overview, key challenges, and future trends, aiming to serve both researchers and practitioners with insights into how DNNs can be effectively applied to solve PDEs.
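The core "learn a function whose PDE residual vanishes" idea behind PINN-style methods can be sketched with a polynomial ansatz standing in for the neural network (an illustrative least-squares collocation toy, not a PINN implementation):

```python
import numpy as np

# Problem: u''(x) = 2 on (0, 1) with u(0) = u(1) = 0; exact solution u = x^2 - x.
# Ansatz u(x) = c0 + c1*x + c2*x^2, so u''(x) = 2*c2 everywhere.
xs = np.linspace(0.0, 1.0, 21)

# Least-squares system: PDE residual rows at collocation points + boundary rows.
A_pde = np.column_stack([np.zeros_like(xs), np.zeros_like(xs), 2 * np.ones_like(xs)])
b_pde = 2 * np.ones_like(xs)
A_bc = np.array([[1.0, 0.0, 0.0],   # u(0) = 0
                 [1.0, 1.0, 1.0]])  # u(1) = 0
b_bc = np.array([0.0, 0.0])

A = np.vstack([A_pde, A_bc])
b = np.concatenate([b_pde, b_bc])
c, *_ = np.linalg.lstsq(A, b, rcond=None)
print(c)  # approximately [0, -1, 1], i.e. u(x) = x^2 - x
```

A PINN replaces the polynomial with a DNN and the linear solve with gradient descent through automatic differentiation, but the loss being minimized has the same two parts: interior PDE residuals plus boundary-condition residuals.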
Citations: 0
Provably Bounded Dynamic Sparsifying Transform Network for Compressive Imaging.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3543766
Baoshun Shi, Dan Li
Compressive imaging (CI) aims to recover the underlying image from under-sampled observations. Recently, deep unfolded CI (DUCI) algorithms, which unfold iterative algorithms into deep neural networks (DNNs), have achieved remarkable results. Theoretically, unfolding a convergent iterative algorithm could ensure a stable DUCI algorithm, i.e., one whose performance improves as the number of stages increases. However, ensuring convergence often involves imposing constraints, such as a bounded spectral norm or the tight property, on the filter weights or sparsifying transform. Unfortunately, these constraints may compromise algorithm performance. To address this challenge, we present a provably bounded dynamic sparsifying transform network (BSTNet), which can be explicitly proven to be a bounded network without imposing constraints on the analysis sparsifying transform. Leveraging this advantage, the analysis sparsifying transform can be adaptively generated via a trainable DNN. Specifically, we elaborate a dynamic sparsifying transform generator capable of extracting multiple feature information from input instances, facilitating the creation of a faithful content-adaptive sparsifying transform. We embed BSTNet as the prior network into a DUCI framework and evaluate its performance on two CI tasks, i.e., spectral snapshot CI (SCI) and compressed sensing magnetic resonance imaging (CSMRI). Experimental results showcase that our DUCI algorithms achieve competitive recovery quality compared with benchmark algorithms. Theoretically, we explicitly prove that the proposed BSTNet is bounded, and we provide a comprehensive convergence analysis of the proposed iterative algorithms.
Citations: 0
Toward Building Human-Like Sequential Memory Using Brain-Inspired Spiking Neural Models.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3543673
Malu Zhang, Xiaoling Luo, Jibin Wu, Ammar Belatreche, Siqi Cai, Yang Yang, Haizhou Li
The brain is able to acquire and store memories of everyday experiences in real time. It can also selectively forget information to facilitate memory updating. However, our understanding of the underlying mechanisms and their coordination within the brain remains limited, and no existing artificial intelligence models have yet matched human-level capabilities in memory storage and retrieval. This study introduces a brain-inspired spiking neural model that integrates the learning and forgetting processes of sequential memory. The proposed model closely mimics the distributed and sparse temporal coding observed in the biological neural system. It employs one-shot online learning for memory formation and uses biologically plausible mechanisms of neural oscillation and phase precession to retrieve memorized sequences reliably. In addition, an active forgetting mechanism is integrated into the spiking neural model, enabling memory removal, flexibility, and updating. The proposed memory model not only enhances our understanding of human memory processes but also provides a robust framework for addressing temporal modeling tasks.
Citations: 0
INGC-GAN: An Implicit Neural-Guided Cycle Generative Approach for Perceptual-Friendly Underwater Image Enhancement.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3539841
Weiming Li, Xuelong Wu, Shuaishuai Fan, Songjie Wei, Glyn Gowing
The key requirement for underwater image enhancement (UIE) is to overcome the unpredictable color degradation caused by the underwater environment and light attenuation, while addressing issues such as color distortion, reduced contrast, and blurring. However, most existing unsupervised methods fail to effectively solve these problems, resulting in a visual disparity in metric-optimal qualitative results compared with undegraded images. In this work, we propose an implicit neural-guided cyclic generative model for UIE tasks, and the bidirectional mapping structure solves the aforementioned ill-posed problem from the perspective of bridging the gap between the metric-favorable and the perceptual-friendly versions. The multiband-aware implicit neural normalization effectively alleviates the degradation distribution. The U-shaped generator simulates human visual attention mechanisms, which enables the aggregation of global coarse-grained and local fine-grained features, and enhances the texture and edge features under the guidance of shallow semantics. The discriminator ensures perception-friendly visual results through a dual-branch structure via appearance and color. Extensive experiments and ablation analyses on the full-reference and nonreference underwater benchmarks demonstrate the superiority of our proposed method. It can restore degraded images in most underwater scenes with good generalization and robustness, and the code is available at https://github.com/SUIEDDM/INGC-GAN.
Citations: 0
NiSNN-A: Noniterative Spiking Neural Network With Attention With Application to Motor Imagery EEG Classification.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3538335
Chuhan Zhang, Wei Pan, Cosimo Della Santina
Motor imagery (MI), an important category in electroencephalogram (EEG) research, often intersects with scenarios demanding low energy consumption, such as portable medical devices and isolated environment operations. Traditional deep learning (DL) algorithms, despite their effectiveness, are characterized by significant computational demands accompanied by high energy usage. As an alternative, spiking neural networks (SNNs), inspired by the biological functions of the brain, emerge as a promising energy-efficient solution. However, SNNs typically exhibit lower accuracy than their counterpart convolutional neural networks (CNNs). Although attention mechanisms successfully increase network accuracy by focusing on relevant features, their integration in the SNN framework remains an open question. In this work, we combine the SNN and the attention mechanisms for EEG classification, aiming to improve precision and reduce energy consumption. To this end, we first propose a noniterative leaky integrate-and-fire (NiLIF) neuron model, overcoming the gradient issues in traditional SNNs that use iterative LIF neurons for long time steps. Then, we introduce sequence-based attention mechanisms to refine the feature map. We evaluated the proposed noniterative SNN with attention (NiSNN-A) model on two MI EEG datasets, OpenBMI and BCIC IV 2a. Experimental results demonstrate that: 1) our model outperforms other SNN models by achieving higher accuracy and 2) our model increases energy efficiency compared with the counterpart CNN models (i.e., by 2.13 times) while maintaining comparable accuracy.
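For context, a standard iterative leaky integrate-and-fire neuron, the formulation whose step-by-step unrolling motivates the noniterative NiLIF model, can be sketched as follows (constants are illustrative assumptions, not values from the paper):

```python
# A standard *iterative* LIF neuron: membrane potential leaks, integrates
# input current each step, and fires a spike with soft reset at threshold.
def lif(inputs, beta=0.9, v_th=1.0):
    """Simulate one LIF neuron over the input sequence; return spikes and potentials."""
    v, spikes, potentials = 0.0, [], []
    for x in inputs:
        v = beta * v + x              # leaky integration of input current
        s = 1.0 if v >= v_th else 0.0
        v = v - s * v_th              # soft reset after a spike
        spikes.append(s)
        potentials.append(v)
    return spikes, potentials

spikes, potentials = lif([0.6, 0.6, 0.6, 0.0, 0.0])
print(spikes)  # [0.0, 1.0, 0.0, 0.0, 0.0]
```

Because each step depends on the previous membrane potential, gradients must flow through the whole unrolled chain during training; the NiLIF formulation is proposed precisely to avoid that long iterative dependence.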
Citations: 0
Q-Learning-Based Robust Control for Nonlinear Systems With Mismatched Perturbations.
IF 10.2 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-14 DOI: 10.1109/TNNLS.2025.3543336
Qian Cui, Gang Feng, Xuesong Xu
This brief presents a novel optimal control (OC) approach based on Q-learning to address robust control challenges for uncertain nonlinear systems subject to mismatched perturbations. Unlike conventional methodologies that solve the robust control problem directly, our approach reformulates the problem by minimizing a value function that integrates perturbation information. The Q-function is subsequently constructed by coupling the optimal value function with the Hamiltonian function. To estimate the parameters of the Q-function, an integral reinforcement learning (IRL) technique is employed to develop a critic neural network (NN). Leveraging this parameterized Q-function, we derive a model-free OC solution that generalizes the model-based formulation. Furthermore, using Lyapunov's direct method, the resulting closed-loop system is guaranteed to be uniformly ultimately bounded. A case study is presented to showcase the effectiveness and applicability of the proposed approach.
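The Q-function idea at the heart of the brief can be illustrated with classical tabular Q-learning on a toy problem (a minimal sketch; the article parameterizes Q with a critic NN trained by IRL for continuous nonlinear dynamics, which this example does not attempt):

```python
import numpy as np

# Tabular Q-learning on a 1-D corridor of 5 states; reward 1 for reaching state 4.
n_states, goal = 5, 4
Q = np.zeros((n_states, 2))             # actions: 0 = left, 1 = right
alpha, gamma, rng = 0.5, 0.9, np.random.default_rng(0)

for _ in range(500):                    # episodes
    s = int(rng.integers(0, n_states - 1))
    for _ in range(20):                 # steps per episode
        # epsilon-greedy action selection
        a = int(rng.integers(0, 2)) if rng.random() < 0.2 else int(Q[s].argmax())
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == goal else 0.0
        # temporal-difference update toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == goal:
            break

policy = Q.argmax(axis=1)
print(policy)  # "go right" (action 1) in every nongoal state
```

The learned Q-table plays the role that the critic NN's parameterized Q-function plays in the brief: once Q is known, the (near-)optimal policy is recovered greedily without a model of the dynamics.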
Citations: 0
Neighbor-Based Completion for Addressing Incomplete Multiview Clustering
IF 10.4 | CAS Zone 1 | Computer Science
IEEE transactions on neural networks and learning systems Pub Date : 2025-03-13 DOI: 10.1109/tnnls.2025.3540437
Wenbiao Yan, Jihua Zhu, Yiyang Zhou, Jinqian Chen, Haozhe Cheng, Kun Yue, Qinghai Zheng
Citations: 0