Neural Networks — Latest Articles

From ReLU to GeMU: Activation functions in the lens of cone projection
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-06-03 DOI: 10.1016/j.neunet.2025.107654
Jiayun Li, Yuxiao Cheng, Yiwen Lu, Zhuofan Xia, Yilin Mo, Gao Huang
Activation functions are essential for introducing nonlinearity into neural networks, with the Rectified Linear Unit (ReLU) often favored for its simplicity and effectiveness. Motivated by the structural similarity between a single layer of a Feedforward Neural Network (FNN) and a single iteration of the Projected Gradient Descent (PGD) algorithm for constrained optimization problems, we view ReLU as a projection from ℝ onto the nonnegative half-line ℝ₊. Building on this interpretation, we generalize ReLU to the Generalized Multivariate projection Unit (GeMU), a projection operator onto a convex cone such as the Second-Order Cone (SOC). We prove that the expressive power of FNNs activated by our proposed GeMU is strictly greater than that of FNNs activated by ReLU. Experimental evaluations further corroborate that GeMU is versatile across prevalent architectures and distinct tasks, and that it can outperform various existing activation functions.
Neural Networks, Volume 190, Article 107654.
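The projection view in the abstract has a simple closed form for the second-order cone. The sketch below illustrates the standard Euclidean SOC projection (a well-known result, not the paper's GeMU implementation) and contrasts it with ReLU as a coordinatewise projection onto ℝ₊:

```python
import numpy as np

def relu(x):
    # ReLU is the coordinatewise projection onto the half-line R+
    return np.maximum(x, 0.0)

def soc_projection(x, t):
    """Euclidean projection of (x, t) onto the second-order cone
    {(x, t) : ||x||_2 <= t}. Closed-form textbook result; GeMU-style
    activations generalize ReLU with projections of this kind."""
    nx = np.linalg.norm(x)
    if nx <= t:                         # already inside the cone
        return x, t
    if nx <= -t:                        # inside the polar cone: map to origin
        return np.zeros_like(x), 0.0
    scale = (nx + t) / (2.0 * nx)       # boundary case
    return scale * x, scale * nx
```

For example, projecting ((3, 4), 0) lands on the cone boundary at ((1.5, 2.0), 2.5), where the norm of the vector part equals the scalar part.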
Citations: 0
Community-influencing path explanation for link prediction in heterogeneous graph neural network
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-06-03 DOI: 10.1016/j.neunet.2025.107645
Yanhong Wen, Yuhua Li, Yixiong Zou, Kai Shu, Han Chen, Ziwen Zhao, Jinxian Ye, Quan Fu, Ruixuan Li
Most existing research on the interpretability of Graph Neural Networks (GNNs) for Link Prediction (LP) focuses on homogeneous graphs; relatively few studies address heterogeneous graphs. Communities are a crucial structure of a graph and can often improve LP performance. However, existing GNN explanation methods for heterogeneous LP rarely consider the impact of communities, so the generated explanations do not align with human understanding. To fill this gap, we incorporate community influence into GNN explanation for heterogeneous LP. We first demonstrate the effectiveness of communities in GNN explanations for heterogeneous LP through a preliminary analysis. On this basis, we propose CI-Path, a Community-Influencing Path explanation for heterogeneous GNN-based LP that accounts for the influence of communities throughout the entire learning process. Specifically, we conduct degree-centrality pruning and employ a community detection algorithm for data preprocessing. We then propose a community-influencing objective comprising a community-influencing prediction loss and a community-influencing path loss. Finally, we identify reasonable explanatory paths: those that are shortest, have the minimum sum of node degrees, and cross the fewest communities. Extensive experiments on five heterogeneous datasets demonstrate the superior performance of CI-Path compared to baselines. Our code is available at https://github.com/wenyhsmile/CI-Path.
Neural Networks, Volume 190, Article 107645.
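The path-selection criterion in the abstract — shortest length, minimum sum of node degrees, fewest community crossings — can be sketched as a lexicographic score over candidate paths. The function names and the lexicographic tie-breaking order below are assumptions for illustration, not the paper's exact formulation:

```python
def path_score(path, degree, community):
    """Score a candidate explanatory path by (length, degree sum,
    community crossings); smaller is better in lexicographic order."""
    length = len(path) - 1
    degree_sum = sum(degree[v] for v in path)
    crossings = sum(1 for u, v in zip(path, path[1:])
                    if community[u] != community[v])
    return (length, degree_sum, crossings)

def best_explanatory_path(candidates, degree, community):
    # Tuples compare lexicographically, so min() applies the criteria in order.
    return min(candidates, key=lambda p: path_score(p, degree, community))
```

For two equal-length candidates, the one with the lower degree sum wins; community crossings break any remaining ties.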
Citations: 0
Corrigendum to "Noise-resistant predefined-time convergent ZNN models for dynamic least squares and multi-agent systems" [Neural Networks 187 (2025) 107412]
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-06-02 DOI: 10.1016/j.neunet.2025.107689
Yiwei Li, Jiaxin Liu, Lei Jia, Liangze Yin, Xingpei Li, Yong Zhang
Neural Networks, Volume 190, Article 107689.
Citations: 0
Dynamic event-triggered H∞ state estimation for discrete-time complex-valued memristive neural networks with mixed time delays
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-31 DOI: 10.1016/j.neunet.2025.107631
Yufei Liu, Bo Shen, Hongjian Liu, Tingwen Huang, Hailong Tan, Jie Sun
This paper explores the H∞ state estimation problem for a class of discrete-time complex-valued memristive neural networks (CVMNNs). For the studied CVMNNs, both distributed delays and time-varying delays are taken into account to describe the system more realistically. First, to enable effective analysis, the examined CVMNNs are converted into an augmented system that integrates the real and imaginary dynamics of the original CVMNNs. To alleviate the communication burden, a representative dynamic event-triggered scheme is employed, for the first time, in the state estimator design of discrete-time CVMNNs. By constructing a Lyapunov functional, a sufficient condition is derived to ensure the asymptotic stability of the estimation error system. Subsequently, the explicit expression of the desired estimator is obtained by solving several matrix inequalities. Ultimately, the efficacy of the designed state estimator is substantiated through a simulation example.
Neural Networks, Volume 190, Article 107631.
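A generic dynamic event-triggering rule of the kind mentioned in the abstract can be illustrated as follows. The internal variable `eta` gives the scheme "memory," so transmissions become sparser than under a static threshold. All parameters and the exact triggering condition here are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def dynamic_event_trigger(xs, mu=0.5, rho=0.1, theta=2.0):
    """Simulate which time steps a dynamic event-triggered scheme would
    transmit. e_k is the gap between the current state and the last
    transmitted one; the dynamic variable eta relaxes the static rule
    ||e||^2 >= rho * ||x||^2."""
    eta = 1.0
    last = xs[0]
    triggers = [0]                          # always transmit the first sample
    for k, x in enumerate(xs[1:], start=1):
        e = x - last
        gap = rho * np.dot(x, x) - np.dot(e, e)
        if np.dot(e, e) >= rho * np.dot(x, x) + eta / theta:
            last = x                        # transmit and reset the gap
            triggers.append(k)
            gap = rho * np.dot(x, x)
        eta = mu * eta + gap                # dynamic variable update
    return triggers
```

A constant signal never re-triggers, while a large jump exceeds the threshold immediately — exactly the communication saving the scheme is designed for.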
Citations: 0
CDAFormer: Hybrid Transformer-based contrastive domain adaptation framework for unsupervised hyperspectral change detection
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-31 DOI: 10.1016/j.neunet.2025.107633
Jiahui Qu, Jingyu Zhao, Wenqian Dong, Jie He, Zan Li, Yunsong Li
Hyperspectral image (HSI) change detection identifies changes between HSIs captured of the same scene at different times. Most existing deep learning-based methods achieve strong results but generalize poorly to HSIs with different data distributions. Moreover, obtaining annotated datasets for model training is expensive and laborious. To address these issues, we propose a hybrid Transformer-based contrastive domain adaptation (CDAFormer) framework for unsupervised hyperspectral change detection, which effectively uses prior information to improve detection performance in the absence of labeled training samples by separately aligning the changed and unchanged difference features of the two domains. Concretely, the difference features of the two domains are fed into the hybrid Transformer block for preliminary coarse contrastive domain alignment. The positive and negative feature pairs generated by the hybrid Transformer block are then used for fine alignment at the loss-function level. In particular, the domain discrepancy is bridged by pulling category-consistent difference feature representations closer and pushing category-inconsistent ones apart, preserving the separability of domain-invariant difference features. The acquired domain-invariant distinguishing features are subsequently fed into fully connected layers to derive the detection results. Extensive experiments on widely used datasets show that the proposed method achieves superior performance compared with other state-of-the-art methods.
Neural Networks, Volume 190, Article 107633.
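The pull-closer/push-apart alignment described above can be sketched as a pairwise contrastive objective over difference features. This toy version (plain Euclidean distances, a fixed margin) is a deliberate simplification; the paper's loss operates on hybrid-Transformer features across two domains:

```python
import numpy as np

def contrastive_alignment_loss(feats, labels, margin=1.0):
    """Average pairwise contrastive loss: category-consistent pairs are
    pulled together (squared distance), category-inconsistent pairs are
    pushed beyond `margin` (squared hinge)."""
    loss, pairs = 0.0, 0
    n = len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(feats[i] - feats[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # pull positives together
            else:
                loss += max(0.0, margin - d) ** 2   # push negatives apart
            pairs += 1
    return loss / pairs
```

The loss is zero only when same-category features coincide and different-category features sit at least `margin` apart — the separability property the abstract describes.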
Citations: 0
Graph positive-unlabeled learning via Bootstrapping Label Disambiguation
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-31 DOI: 10.1016/j.neunet.2025.107630
Chunquan Liang, Luyue Wang, Xinyuan Feng, Yuying Cheng, Mei Li, Shirui Pan, Hongming Zhang
Graph positive-unlabeled learning is an important task that seeks to learn binary classification models from only positive and unlabeled (PU) nodes. While state-of-the-art methods focus on training graph neural networks, they frequently rely on weak objective functions derived solely from a given class-prior probability or inferred exclusively from the graph structure, so their performance significantly lags behind that of fully labeled counterparts. In this paper, we fill this gap by treating unlabeled nodes as samples ambiguously labeled as both positive and negative, and by introducing a learning method called Bootstrapping Label Disambiguation (BLD), which progressively resolves label ambiguities while training binary classifiers. BLD comprises a node representation learning module based on bootstrapping and a novel central-region-based label disambiguation strategy. The learning module leverages both previous representations and the derived positive centroid as targets to train positive-aligned representations, eliminating the need for a class prior. The disambiguation strategy then constructs a central region to identify ambiguous nodes and steadily turns them into effective supervision. Extensive experiments on a range of real-world datasets show that our BLD method significantly outperforms existing approaches and in many cases even surpasses fully labeled classification models. The source code is available at https://github.com/yunyun85/BLD.
Neural Networks, Volume 190, Article 107630.
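The central-region idea can be illustrated with a centroid-and-radius rule over node embeddings. The fixed radius and hard cutoffs below are assumptions for illustration; BLD's actual strategy is adaptive and trained jointly with the representation module:

```python
import numpy as np

def disambiguate(embeddings, is_labeled_positive, radius=0.5):
    """Pseudo-label unlabeled nodes relative to the positive centroid:
    nodes inside a ball of `radius` become confident positives (1),
    nodes beyond 2*radius become confident negatives (0), and the rest
    stay ambiguous (-1)."""
    pos = embeddings[is_labeled_positive]
    centroid = pos.mean(axis=0)                    # positive centroid target
    dists = np.linalg.norm(embeddings - centroid, axis=1)
    pseudo = np.full(len(embeddings), -1)          # -1 = still ambiguous
    pseudo[dists <= radius] = 1
    pseudo[dists > 2 * radius] = 0
    return pseudo
```

In BLD the confident pseudo-labels then act as the "effective supervision" the abstract mentions, while ambiguous nodes wait for a later round.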
Citations: 0
Revision: Advancing the biological plausibility and efficacy of Hebbian Convolutional Neural Networks
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-31 DOI: 10.1016/j.neunet.2025.107628
Julian Jiménez Nimmo, Esther Mondragón
The research presented in this paper advances the integration of Hebbian learning into Convolutional Neural Networks (CNNs) for image processing, systematically exploring different architectures to build an optimal configuration while adhering to biological plausibility. Hebbian learning operates on local, unsupervised neural information to form feature representations, providing an alternative to the popular but arguably biologically implausible and computationally intensive backpropagation algorithm. The proposed optimal architecture significantly extends recent research on integrating Hebbian learning with competition mechanisms and CNNs, expanding their representational capabilities by incorporating hard Winner-Takes-All (WTA) competition, Gaussian lateral inhibition, and the Bienenstock–Cooper–Munro (BCM) learning rule in a single model. Mean classification accuracy over the last half of the test epochs on CIFAR-10 shows that the resulting optimal model matches its end-to-end backpropagation variant at 75.2% each, surpassing the state-of-the-art hard-WTA performance in CNNs of the same depth (64.6%) by 10.6 percentage points. It also achieves competitive performance on MNIST (98%) and STL-10 (69.5%). Moreover, the results show clear indications of sparse hierarchical learning through increasingly complex and abstract receptive fields. In summary, our implementation enhances both the performance and the generalisability of the learnt representations and constitutes a crucial step towards more biologically realistic artificial neural networks.
Neural Networks, Volume 190, Article 107628.
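The hard-WTA Hebbian component mentioned above follows classic competitive learning: only the most active unit updates, moving its weight vector toward the input. This sketch shows that component alone and omits the paper's lateral inhibition and BCM terms:

```python
import numpy as np

def hard_wta_hebbian_step(W, x, lr=0.1):
    """One hard Winner-Takes-All Hebbian update. W has one weight row per
    unit; only the winner (largest response) moves toward the input, and
    its weights are renormalized to keep the competition stable."""
    y = W @ x
    winner = int(np.argmax(y))
    W = W.copy()
    W[winner] += lr * (x - W[winner])           # Hebbian move toward input
    W[winner] /= np.linalg.norm(W[winner])      # weight normalization
    return W, winner
```

Repeated over a data stream, each unit's weight vector drifts toward the cluster of inputs it wins — the local, label-free feature formation the abstract contrasts with backpropagation.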
Citations: 0
Effects of mixed sample data augmentation on interpretability of neural networks
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-30 DOI: 10.1016/j.neunet.2025.107611
Soyoun Won, Sung-Ho Bae, Seong Tae Kim
Mixed sample data augmentation strategies are widely used when training deep neural networks (DNNs), and recent studies suggest they are effective across a variety of tasks. However, the impact of mixed sample data augmentation on model interpretability has not been widely studied. In this paper, we explore the relationship between model interpretability and mixed sample data augmentation, specifically in terms of feature attribution maps. To this end, we introduce a new metric that allows a comparison of model interpretability while minimizing the impact of the model's occlusion robustness. Experimental results show that several mixed sample data augmentation strategies decrease the interpretability of the model, and that label mixing during augmentation plays a significant role in this effect. This finding suggests that mixed sample data augmentation should be adopted carefully, particularly in applications where attribution-map-based interpretability matters.
Neural Networks, Volume 190, Article 107611.
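Mixup, the canonical mixed-sample augmentation studied in work like this, forms convex combinations of both inputs and one-hot labels; the label-mixing step is the part the abstract identifies as the main driver of reduced attribution-map interpretability. A minimal sketch:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup: draw lambda ~ Beta(alpha, alpha) and take the same
    convex combination of the two inputs and their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2   # the label-mixing step
    return x_mix, y_mix
```

Variants that mix inputs but keep a single hard label would isolate the label-mixing factor the study highlights.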
Citations: 0
Self-supervised brain lesion generation for effective data augmentation of medical images
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-30 DOI: 10.1016/j.neunet.2025.107629
Jiayu Huo, Sébastien Ourselin, Rachel Sparks
Accurate brain lesion delineation is important for planning neurosurgical treatment. Automatic brain lesion segmentation methods based on convolutional neural networks have demonstrated remarkable performance, but neural network performance is constrained by the lack of large-scale, well-annotated training datasets. In this manuscript, we propose a comprehensive framework to efficiently generate new samples for training a brain lesion segmentation model. We first train a self-supervised lesion generator, based on an adversarial autoencoder, to model lesion appearance and shape. Next, we use a novel image composition algorithm, Soft Poisson Blending, to seamlessly combine synthetic lesions with brain images to obtain training samples. Finally, to effectively train the segmentation model with augmented images, we introduce a new prototype consistency regularization to align real and synthetic features. Our framework is validated by extensive experiments on two public brain lesion segmentation datasets: ATLAS v2.0 and Shift MS. Our method outperforms existing brain image data augmentation schemes; for instance, it improves the Dice score from 50.36% to 60.23% compared with a UNet using conventional data augmentation on the ATLAS v2.0 dataset.
Neural Networks, Volume 190, Article 107629.
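The composition step can be illustrated with a naive soft-mask alpha blend. Note this is only a stand-in for intuition: the paper's Soft Poisson Blending instead solves a gradient-domain (Poisson) problem to make lesion boundaries seamless:

```python
import numpy as np

def soft_compose(brain, lesion, soft_mask):
    """Alpha-blend a synthetic lesion into a brain image using a feathered
    mask in [0, 1]. Illustrative stand-in, not Soft Poisson Blending."""
    soft_mask = np.clip(soft_mask, 0.0, 1.0)
    return soft_mask * lesion + (1.0 - soft_mask) * brain
```

Where the mask is 0 the brain image is untouched; where it is 1 the lesion replaces it; fractional values feather the transition.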
Citations: 0
HyTract: Advancing tractography for neurosurgical planning with a hybrid method integrating neural networks and a path search algorithm
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-05-29 DOI: 10.1016/j.neunet.2025.107624
Mateusz Korycinski, Konrad A. Ciecierski, Ewa Niewiadomska-Szynkiewicz
The advent of advanced MRI techniques has opened promising avenues for exploring the intricacies of brain neurophysiology, including the network of neural connections. A more comprehensive understanding of this network provides invaluable insights into the human brain's underlying structural architecture and dynamic functionality. Consequently, determining the location of neural fibers, known as tractography, has emerged as a subject of significant interest for both basic scientific research and practical domains such as preoperative planning. This work presents a novel tractography method, HyTract, built from artificial neural networks and a path search algorithm. Our findings demonstrate that this method can accurately identify the location of nerve fibers in close proximity to the surgical field. Compared with well-established methods, tracts computed with HyTract show a Mean Euclidean Distance of 9 or lower, indicating good accuracy in tract reconstruction. Furthermore, its architecture ensures the explainability of the obtained tracts and facilitates adaptation to new tasks.
Neural Networks, Volume 190, Article 107624.
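The Mean Euclidean Distance figure quoted above can be computed as follows, assuming both tracts are resampled to the same number of corresponding 3-D points (the paper's exact point-pairing protocol may differ):

```python
import numpy as np

def mean_euclidean_distance(tract_a, tract_b):
    """Mean Euclidean Distance between two tracts given as (n, 3) arrays
    of corresponding points; a common tract-comparison metric."""
    a, b = np.asarray(tract_a), np.asarray(tract_b)
    return float(np.linalg.norm(a - b, axis=1).mean())
```

A lower value means the reconstructed tract lies closer, point for point, to the reference tract.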
Citations: 0