IEEE Transactions on Neural Networks and Learning Systems: Latest Articles

Feature Enhancement Module Based on Class-Centric Loss for Fine-Grained Visual Classification
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-09 | DOI: 10.1109/tnnls.2025.3613791
Daohui Wang, He Xinyu, Shujing Lyu, Wei Tian, Yue Lu
Abstract: We propose a novel feature enhancement module designed for fine-grained visual classification tasks, which can be seamlessly integrated into various backbone architectures, including both convolutional neural network (CNN)-based and Transformer-based networks. The plug-and-play module outputs pixel-level feature maps and performs a weighted fusion of filtered features to enhance fine-grained feature representation. We introduce a class-centric loss function that optimizes the alignment of samples with their target class centers by pulling them toward the center of the target class while simultaneously pushing them away from the center of the most visually similar nontarget classes. Soft labels are employed to mitigate overfitting, ensuring the model generalizes well to unseen examples. Our approach consistently delivers significant improvements in accuracy across various mainstream backbone architectures, underscoring its versatility and robustness. Furthermore, we achieved the highest accuracy on the NABirds (NAB) and our proprietary lock cylinder datasets. We have released our source code and pretrained model on GitHub: https://github.com/Richard5413/FEM-CC.git.
Citations: 0
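The pull/push objective this abstract describes can be sketched in a few lines. The squared-distance form, the hinge `margin`, and all names below are illustrative assumptions, not the paper's implementation:

```python
def class_centric_loss(feat, centers, target, margin=1.0):
    """Sketch of a class-centric loss: pull a sample's feature toward its
    target class center, push it away from the nearest (most similar)
    non-target center. `margin` is an assumed hinge hyperparameter."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    pull = sqdist(feat, centers[target])
    # nearest non-target center stands in for "most visually similar class"
    push = min(sqdist(feat, c) for k, c in centers.items() if k != target)
    # hinge: penalize only when the nearest non-target center is too close
    return pull + max(0.0, margin - push)
```

With two class centers at [0, 0] and [5, 0], a sample at the target center incurs zero loss, while any displacement toward the rival center grows the pull term.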
Semi-Supervised Breast Lesion Segmentation Using Confidence-Ranked Features and Bi-Level Prototypes
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-09 | DOI: 10.1109/tnnls.2025.3616332
Siyao Jiang, Huisi Wu, Yu Zhou, Junyang Chen, Jing Qin
Abstract: Automated lesion segmentation through breast ultrasound (BUS) images is an essential prerequisite in computer-aided diagnosis. However, the task of breast segmentation remains challenging due to the time-consuming and labor-intensive process of acquiring precisely labeled data, as well as severely ambiguous lesion boundaries and low contrast in BUS images. In this article, we propose a novel semi-supervised breast segmentation framework based on confidence-ranked features and bi-level prototypes (CoBiNet) to alleviate these issues. Our outputs are derived from two branches: classifier and projector. In the projector branch, we first rank the features by multilevel sampling to obtain multiple feature sets with different confidence levels. These sets are then processed in two directions. One is to acquire local prototypes at each level by local sampling and perform trans-confidence-level (TCL) contrastive learning. This encourages the low-confidence features to converge toward the high-confidence features, which enhances the model's ability to recognize ambiguous regions. The other is to generate more representative global prototypes by global sampling, followed by generating more reliable predictions and performing cross-guidance (CG) consistency learning with the classifier's output predictions, facilitating knowledge transfer between the structure-aware projector and the category-discriminative classifier branches. Extensive experiments on two well-known public datasets, BUSI and UDIAT, demonstrate the superiority of our method over state-of-the-art approaches. Codes will be released upon publication.
Citations: 0
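The confidence-ranked sampling idea above, i.e., building a prototype only from features whose confidence clears a level threshold, can be sketched as a masked average. The function name, threshold, and list-of-features representation are hypothetical:

```python
def confidence_prototype(feats, confs, thresh):
    """Sketch of one confidence level's prototype: average only those
    feature vectors whose confidence score reaches `thresh`. A stand-in
    for the paper's multilevel sampling, not its implementation."""
    selected = [f for f, c in zip(feats, confs) if c >= thresh]
    if not selected:
        return None  # no feature at this confidence level
    dim = len(selected[0])
    return [sum(f[d] for f in selected) / len(selected) for d in range(dim)]
```

Lowering `thresh` admits more (noisier) features, which is exactly the trade-off the bi-level local/global prototypes navigate.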
Distill to Delete: Unlearning in Graph Networks With Knowledge Distillation
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-09 | DOI: 10.1109/tnnls.2025.3607995
Yash Sinha, Murari Mandal, Mohan Kankanhalli
Citations: 0
Multimodal Cross-City Semantic Segmentation Based on Similarity-Inspired Fusion and Invertible Transformation Learning Network
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-09 | DOI: 10.1109/tnnls.2025.3617345
Lijia Dong, Wen Jiang, Zhengyi Xu, Jie Geng
Abstract: Multimodal cross-city semantic segmentation aims to adapt a network trained on multiple labeled source domains (MSDs) from one city to multiple unlabeled target domains (MTDs) in another city, where the multiple domains refer to different sensor modalities. However, remote sensing data from different sensors increase the extent of domain shift in the fused domain space, making feature alignment more challenging. Meanwhile, traditional fusion methods only consider complementarity within MSDs (or MTDs), which wastes cross-domain relevant information and neglects control over domain shift. To address the above issues, we propose a similarity-inspired fusion and invertible transformation learning network (SFITNet) for multimodal cross-city semantic segmentation. To alleviate the increasing alignment difficulty in multimodal fused domains, an invertible transformation learning strategy (ITLS) is proposed, which adopts a topological perspective on unsupervised domain adaptation. This strategy simulates the potential distribution transformation function between the MSD and the MTD based on invertible neural networks (INNs) after feature fusion, thereby performing distribution alignment independently within the two feature spaces. A cross-domain similarity-inspired information interaction module (CDSiM) is also designed, which considers the correspondence between the MSD and the MTD in the fusion stage, effectively utilizes multimodal complementary information, and promotes the subsequent alignment of fused domain shifts. Semantic segmentation tests are conducted on the public C2Seg-AB dataset and a new multimodal cross-city Su-Wu dataset. Experimental results demonstrate the superiority of the proposed SFITNet over state-of-the-art techniques.
Citations: 0
Inhibiting Error Exacerbation in Offline Reinforcement Learning With Data Sparsity
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-09 | DOI: 10.1109/tnnls.2025.3615982
Fan Zhang, Malu Zhang, Wenyu Chen, Siying Wang, Xin Zhang, Jiayin Li, Yang Yang
Abstract: Offline reinforcement learning (RL) aims to learn effective agents from previously collected datasets, facilitating the safety and efficiency of RL by avoiding real-time interaction. However, in practical applications, the approximation error of out-of-distribution (OOD) state-actions can cause considerable overestimation due to error exacerbation during training, ultimately degrading performance. In contrast to prior works that merely addressed OOD state-actions, we discover that all data introduce estimation error whose magnitude is directly related to data sparsity. Consequently, the impact of data sparsity is inevitable and vital when inhibiting error exacerbation. In this article, we propose an offline RL approach to inhibit error exacerbation with data sparsity (IEEDS), which incorporates a novel value estimation method that considers the impact of data sparsity on the training of agents. Specifically, the value estimation phase includes two innovations: 1) replacing the Q-net with a V-net, whose smaller and denser state space makes the data more concentrated, contributing to more accurate value estimation and 2) introducing state sparsity into training by designing a state-aware-sparsity Markov decision process (MDP), further lessening the impact of sparse states. We theoretically prove the convergence of IEEDS under the state-aware-sparsity MDP. Extensive experiments on offline RL benchmarks reveal the superior performance of IEEDS.
Citations: 0
IML-Spikeformer: Input-Aware Multilevel Spiking Transformer for Speech Processing
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-08 | DOI: 10.1109/tnnls.2025.3615971
Zeyang Song, Shimin Zhang, Yuhong Chou, Jibin Wu, Haizhou Li
Citations: 0
HKANLP: Link Prediction With Hyperspherical Embeddings and Kolmogorov-Arnold Networks
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-08 | DOI: 10.1109/tnnls.2025.3614341
Wenchuan Zhang, Wentao Fan, Weifeng Su, Nizar Bouguila
Abstract: Link prediction (LP) is fundamental to graph-based applications, yet existing graph autoencoders (GAEs) and variational GAEs (VGAEs) often struggle with intrinsic graph properties, particularly the presence of negative eigenvalues in adjacency matrices, which limits their adaptability and predictive performance. To address this limitation, we propose Hyperspherical Kolmogorov-Arnold Networks for LP (HKANLP), a novel framework that combines multiple graph neural network (GNN)-based representation learning strategies with Kolmogorov-Arnold networks (KANs) in a hyperspherical embedding space. Specifically, our model leverages the von Mises-Fisher (vMF) distribution to impose geometric consistency in the latent space and employs KANs as universal function approximators to reconstruct adjacency matrices, thereby mitigating the impact of negative eigenvalues and enhancing spectral diversity. Extensive experiments on homophilous, heterophilous, and large-scale graph datasets demonstrate that HKANLP achieves superior LP performance and robustness compared to state-of-the-art baselines. Furthermore, visualization analyses illustrate the model's effectiveness in capturing complex structural patterns. The source code of our model is publicly available at https://github.com/zxj8806/HKANLP/.
Citations: 0
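A vMF latent prior, as this abstract uses, is defined on the unit hypersphere, so node embeddings are constrained to unit norm and link scores become cosine similarities. The sketch below shows only that geometric setup; the function names and epsilon guard are assumptions, not HKANLP's code:

```python
import math

def to_hypersphere(vec, eps=1e-12):
    """Project an embedding onto the unit hypersphere, the support of a
    von Mises-Fisher distribution. `eps` guards against zero vectors."""
    norm = math.sqrt(sum(x * x for x in vec)) + eps
    return [x / norm for x in vec]

def cosine_link_score(u, v):
    """Illustrative link score between two node embeddings: the inner
    product of their hyperspherical projections (cosine similarity)."""
    return sum(a * b for a, b in zip(to_hypersphere(u), to_hypersphere(v)))
```

Colinear embeddings score near 1 regardless of magnitude, which is the geometric consistency the vMF prior encourages.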
FGPLFA: Fine-Grained Pseudo-Labeling and Feature Alignment for Source-Free Unsupervised Domain Adaptation
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-07 | DOI: 10.1109/tnnls.2025.3616236
Zhongyi Wen, Qiang Li, Yatong Wang, Huaizong Shao, Guoming Sun
Abstract: Source-free unsupervised domain adaptation (SFUDA) aims to improve performance on unlabeled target domain data without accessing source domain data. This is crucial in scenarios with data-sharing restrictions due to privacy or compliance constraints. Existing SFUDA approaches often rely on pseudo-labeling techniques based on entropy or confidence metrics. These often overlook fine-grained data features, resulting in noisy pseudo-labels that degrade model performance. To overcome this limitation, we develop a new method called fine-grained pseudo-labeling and feature alignment (FGPLFA) to enhance SFUDA's performance. FGPLFA starts with a gradient-based metric that integrates insights from both model knowledge and data features, creating a more reliable sample metric. To enhance fine granularity, the fine-grained pseudo-labeling (FGPL) module is introduced. This module clusters data based on the magnitude and direction of gradients, allowing for dataset partitioning into subsets at the sample level. The subsets are pseudo-labeled with category specificity and domain specificity, establishing a multilevel granularity structure that reduces noisy pseudo-labels. Subsequently, the mean-covariance adjustment feature alignment (MCAFA) method is introduced. Features from the subsets are aligned in a specified sequence, enhancing model adaptability in the target domain. Extensive experiments conducted across multiple datasets validate the superiority of FGPLFA.
Citations: 0
FedMKD: Hybrid Feature Guided Multilayer Fusion Knowledge Distillation in Heterogeneous Federated Learning
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-07 | DOI: 10.1109/tnnls.2025.3615230
Peng Han, Han Xiao, Shenhai Zheng, Yuanyuan Li, Guanqiu Qi, Zhiqin Zhu
Abstract: In recent years, federated learning (FL) has received widespread attention for its ability to enable collaborative training across multiple clients while protecting user privacy, especially demonstrating significant value in scenarios such as medical data analysis, where strict privacy protection is required. However, most existing FL frameworks mainly focus on data heterogeneity without fully addressing the challenge of heterogeneous model aggregation among clients. To address this problem, this article proposes a novel FL framework called FedMKD. This framework introduces proxy models as a medium for knowledge sharing between clients, ensuring efficient and secure interactions while effectively utilizing the knowledge in each client's data. In order to improve the efficiency of asymmetric knowledge transfer between proxy models and private models, a hybrid feature-guided multilayer fusion knowledge distillation (MKD) learning method is proposed, which eliminates the dependence on public data. Extensive experiments were conducted using a combination of multiple heterogeneous models under diverse data distributions. The results demonstrate that FedMKD efficiently aggregates model knowledge.
Citations: 0
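The knowledge transfer between proxy and private models builds on standard knowledge distillation. Below is the generic temperature-scaled KL objective that such schemes extend; FedMKD's hybrid feature-guided multilayer fusion loss is not reproduced here, and all names are illustrative:

```python
import math

def softmax(logits, temp=1.0):
    """Numerically stable softmax at temperature `temp`."""
    m = max(logits)
    exps = [math.exp((z - m) / temp) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, temp=2.0):
    """Generic distillation loss: KL divergence between softened teacher
    and student distributions, scaled by temp^2 to keep gradient magnitude
    comparable across temperatures. A sketch, not FedMKD's exact objective."""
    p = softmax(teacher_logits, temp)  # teacher (proxy model) targets
    q = softmax(student_logits, temp)  # student (private model) outputs
    return (temp * temp) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher agree and strictly positive otherwise, since KL divergence is nonnegative.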
Boosting Few-Shot Hyperspectral Image Classification Through Dynamic Fusion and Hierarchical Enhancement
IF 10.4 | Tier 1 (Computer Science)
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2025-10-07 | DOI: 10.1109/tnnls.2025.3615950
Ying Guo, Bin Fan, Yuchao Dai, Yan Feng, Mingyi He
Abstract: Few-shot learning has garnered increasing attention in hyperspectral image classification (HSIC) due to its potential to reduce dependency on labor-intensive and costly labeled data. However, most existing methods are constrained to feature extraction using a single image patch of fixed size, and typically neglect the pivotal role of the central pixel in feature fusion, leading to inefficient information utilization. In addition, the correlations among sample features have not been fully explored, thereby weakening feature expressiveness and hindering cross-domain knowledge transfer. To address these issues, we propose a novel few-shot HSIC framework incorporating dynamic fusion and hierarchical enhancement. Specifically, we first introduce a robust feature extraction module, which effectively combines the content concentration of small patches with the noise robustness of large patches, and further captures local spatial correlations through a central-pixel-guided dynamic pooling strategy. Such patch-to-pixel dynamic fusion enables a more comprehensive and robust extraction of ground object information. Then, we develop a support-query hierarchical enhancement module that integrates intraclass self-attention and interclass cross-attention mechanisms. This process not only enhances support-level and query-level feature representation but also facilitates the learning of more informative prior knowledge from the abundantly labeled source domain. Moreover, to further increase feature discriminability, we design an intraclass consistency loss and an interclass orthogonality loss, which collaboratively encourage intraclass samples to be closer together and interclass samples to be more separable in the metric space. Experimental results on four benchmark datasets demonstrate that our method substantially improves classification accuracy and consistently outperforms competing approaches. Code is available at https://github.com/guoying918/DFHE2025.
Citations: 0
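The two losses named at the end of this abstract have natural minimal forms: pull same-class features toward their mean, and penalize non-orthogonal class centers. The squared-distance and squared-inner-product choices below are assumptions for illustration, not the authors' formulation:

```python
def intraclass_consistency(feats, center):
    """Mean squared distance of same-class feature vectors to their class
    mean: smaller means tighter, more consistent classes."""
    return sum(
        sum((x - c) ** 2 for x, c in zip(f, center)) for f in feats
    ) / len(feats)

def interclass_orthogonality(centers):
    """Sum of squared inner products over distinct class-center pairs:
    zero when all centers are mutually orthogonal in the metric space."""
    loss = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            loss += sum(a * b for a, b in zip(centers[i], centers[j])) ** 2
    return loss
```

Minimizing the first tightens each class; driving the second to zero spreads classes toward perpendicular directions, improving separability.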