Neural Networks: Latest Articles

Plane coexistence behaviors for Hopfield neural network with two-memristor-interconnected neurons.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-12 · DOI: 10.1016/j.neunet.2024.107049
Fangyuan Li, Wangsheng Qin, Minqi Xi, Lianfa Bai, Bocheng Bao
Abstract: Memristors are commonly used as the connecting parts of neurons in brain-like neural networks. Unlike those in the existing literature, the memristors considered here can function as both self-connection synaptic weights and interconnection synaptic weights, enabling intricate plane coexistence behaviors regulated by the initial states. To demonstrate this dynamical effect, a Hopfield neural network with two memristor-interconnected neurons (TMIN-HNN) is proposed. On this basis, the stability distribution of the equilibrium points is analyzed, the related bifurcation behaviors are studied by numerical simulation, and the plane coexistence behaviors are proved theoretically and revealed numerically. The results clarify that the TMIN-HNN not only exhibits complex bifurcation behaviors but also shows plane coexistence behaviors regulated by the initial states. In particular, the coexisting attractors can be switched to different plane locations by the initial states of the two memristors. Finally, a digital experimental device based on an STM32 hardware board is developed to verify the initial-state-regulated plane coexistence attractors.
Neural Networks, vol. 183, art. 107049.
Citations: 0
Cluster synchronization of fractional-order two-layer networks and application in image encryption/decryption.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-11 · DOI: 10.1016/j.neunet.2024.107023
Juan Yu, Yanwei Yin, Tingting Shi, Cheng Hu
Abstract: In this paper, a fractional-order two-layer network model is constructed in which each layer exhibits a distinct topology. The cluster synchronization problem of such networks is then investigated through a two-step approach. The first step implements finite-time cluster synchronization in the first layer by means of a fractional-order finite-time convergence lemma. Building on this, the second step treats the nodes within the same cluster of the first layer collectively, offering a significant insight for analyzing cluster synchronization in fractional-order two-layer networks. In addition, a novel encryption/decryption scheme based on this cluster synchronization is proposed: the complexity of the chaotic sequences generated by the network enhances the security of the strategy. Three illustrative examples validate the theoretical findings.
Neural Networks, vol. 184, art. 107023.
Citations: 0
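The encryption application above rests on a standard idea: a chaotic orbit is deterministic given its initial state yet unpredictable without it, so two synchronized parties can regenerate the same keystream. A minimal single-map sketch using a logistic map and XOR (purely illustrative; the paper derives its sequences from the synchronized fractional-order two-layer network, and all names here are assumptions):

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from a logistic-map chaotic orbit."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)          # chaotic iteration
        out[i] = int(x * 256) % 256    # quantize orbit point to a byte
    return out

def xor_cipher(data, x0=0.31, r=3.99):
    """Encrypt/decrypt bytes by XOR with the chaotic keystream (involutive)."""
    ks = logistic_keystream(x0, r, data.size)
    return data ^ ks

img = np.arange(16, dtype=np.uint8)   # stand-in for flattened image pixels
enc = xor_cipher(img)                 # ciphertext
dec = xor_cipher(enc)                 # same (x0, r) recovers the image
```

Because XOR is involutive, decryption is simply re-encryption with the same initial state and parameter, which is exactly what synchronization makes possible for a remote receiver.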
Output sampling synchronization and state estimation in flux-charge domain memristive neural networks with leakage and time-varying delays.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-10 · DOI: 10.1016/j.neunet.2024.107018
G Soundararajan, R Suvetha, Minvydas Ragulskis, P Prakash
Abstract: This paper theoretically explores the coexistence of synchronization and state estimation, analyzed through output-sampling measurements, for a class of memristive neural networks operating in the flux-charge domain. These networks are subject to constant delays in their self-feedback loops and time-varying delays in their activation functions. An output-sampling controller is designed to discretize the system dynamics based on the available output measurements; it improves control performance by minimizing the update frequency, thereby overcoming network bandwidth limitations while addressing both synchronization and state-vector estimation. By using differential inclusion mapping to capture the weights arising from discontinuous memristive switching and an input-delay approach to bound the nonuniform sampling intervals, we derive linear-matrix-inequality-based sufficient conditions for synchronization and estimation under the Lyapunov-Krasovskii functional framework with a relaxed integral inequality. Finally, using a preset experimental dataset, we visually verify the adaptability of the theoretical findings for synchronization, anti-synchronization, and state estimation of delayed memristive neural networks in the flux-charge domain. Numerical simulations further demonstrate the impact of leakage delay and output-measurement sampling through comparison with scenarios lacking leakage and sampled measurements.
Neural Networks, vol. 184, art. 107018.
Citations: 0
Adaptive expert fusion model for online wind power prediction.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-10 · DOI: 10.1016/j.neunet.2024.107022
Renfang Wang, Jingtong Wu, Xu Cheng, Xiufeng Liu, Hong Qiu
Abstract: Wind power prediction is challenging because of the high variability and uncertainty of wind generation and weather conditions, yet accurate and timely prediction is essential for optimal power-system operation and planning. In this paper, we propose a novel Adaptive Expert Fusion Model (EFM+) for online wind power prediction. EFM+ is an ensemble model that integrates the strengths of XGBoost and self-attention LSTM models using dynamic weights. It adapts to real-time changes in wind conditions and data distribution by updating the weights based on each model's performance and error on recent similar samples, and it supports Bayesian inference with real-time uncertainty updates as new data arrive. Extensive experiments on a real-world wind-farm dataset show that EFM+ outperforms existing models in prediction accuracy and error, and remains robust and stable across various scenarios. Sensitivity and ablation analyses further assess the effects of its components and parameters. EFM+ is thus a promising technique for online wind power prediction that can handle nonstationarity and uncertainty in wind power generation.
Neural Networks, vol. 184, art. 107022.
Citations: 0
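The dynamic-weight fusion idea can be illustrated with a minimal rule: weight each expert inversely to its recent error, so the currently more accurate model dominates the combined forecast. This is a hedged sketch of the general mechanism only; the paper's actual weight-update rule may differ:

```python
import numpy as np

def fuse(pred_a, pred_b, err_a, err_b, eps=1e-8):
    """Combine two expert predictions with weights inversely
    proportional to each expert's recent error (illustrative rule)."""
    w_a = 1.0 / (err_a + eps)
    w_b = 1.0 / (err_b + eps)
    return (w_a * pred_a + w_b * pred_b) / (w_a + w_b)

# Expert A's recent error is half of B's, so A gets twice B's weight
# and the fused value lands closer to A's prediction.
fused = fuse(3.0, 6.0, err_a=1.0, err_b=2.0)   # ≈ 4.0
```

As the errors measured on recent similar samples change over time, the weights shift automatically, which is what lets such an ensemble track nonstationary wind conditions online.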
Robust long-tailed recognition with distribution-aware adversarial example generation.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-10 · DOI: 10.1016/j.neunet.2024.106932
Bo Li, Yongqiang Yao, Jingru Tan, Dandan Zhu, Ruihao Gong, Ye Luo, Jianwei Lu
Abstract: Confronting adversarial attacks under data imbalance, attaining adversarial robustness under a long-tailed distribution is a challenging problem. Adversarial training (AT) is the conventional remedy: adversarial examples (AEs) are produced in a generation phase and then trained on in a training phase. Existing long-tailed adversarial learning methods follow the AT framework and rebalance AE classification in the training phase, but few recognize the impact of the long-tailed distribution on the generation phase. In this paper, we examine the generation phase and uncover its imbalance across classes: comparing generated AEs with their natural examples shows that the differences are less pronounced for tail classes than for head classes, indicating inferior generation quality for the tail. To solve this problem, we propose the Distribution-Aware Adversarial Example Generation (DAG) method, which balances AE generation across classes using a Virtual Example Creator (VEC) and a Gradient-Guided Calibrator (GGC). The VEC creates virtual examples to introduce more adversarial perturbations for each class, while the GGC calibrates the creation process to focus on tail classes according to their generation quality, effectively addressing the imbalance. Extensive experiments on three long-tailed adversarial benchmarks across five attack scenarios demonstrate DAG's effectiveness; on CIFAR-100-LT, DAG outperforms the previous RoBal by 4.0 points under the projected gradient descent (PGD) attack.
Neural Networks, vol. 184, art. 106932.
Citations: 0
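The PGD attack used in the evaluation above is standard: repeatedly step in the sign of the loss gradient, then project back into an ε-ball around the clean input. A minimal sketch on a toy loss with an analytic gradient (in practice the gradient comes from backpropagating a network's classification loss; all values here are illustrative):

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """PGD: ascend the loss via the gradient sign, then project
    back into the L-infinity eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)          # projection
    return x_adv

# Toy loss L(x) = ||x||^2 with gradient 2x; PGD pushes x away from 0.
x0 = np.array([0.5, -0.5])
adv = pgd_attack(x0, grad_fn=lambda x: 2 * x)
```

The projection step is what makes the perturbation bounded: whatever the ascent does, the final example stays within ε of the natural one in every coordinate.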
Counterfactual learning for higher-order relation prediction in heterogeneous information networks.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-10 · DOI: 10.1016/j.neunet.2024.107024
Xuan Guo, Jie Li, Pengfei Jiao, Wang Zhang, Tianpeng Li, Wenjun Wang
Abstract: Heterogeneous Information Networks (HINs) play a crucial role in modeling complex social systems, where predicting missing links or relations is a significant task. Existing methods focus primarily on pairwise relations, but real-world scenarios often involve multi-entity interactions: in an academic collaboration network, for example, a single interaction joins a paper, a conference, and multiple authors. Such higher-order relations are prevalent but underexplored. Moreover, existing methods often neglect the causal relationship between the global graph structure and the state of relations, limiting their ability to capture the fundamental factors driving relation prediction. In this paper, we propose HINCHOR, an end-to-end model for higher-order relation prediction in HINs. HINCHOR introduces a higher-order structure encoder to capture multi-entity proximity information. It then focuses on a counterfactual question: "If the global graph structure were different, would the higher-order relation change?" Through a counterfactual data augmentation module, HINCHOR uses global structure information to generate counterfactual relations, and through counterfactual learning it estimates causal effects while predicting higher-order relations. Experimental results on four constructed benchmark datasets show that HINCHOR outperforms existing state-of-the-art methods.
Neural Networks, vol. 183, art. 107024.
Citations: 0
A simple remedy for failure modes in physics informed neural networks.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-10 · DOI: 10.1016/j.neunet.2024.106963
Ghazal Farhani, Nima Hosseini Dashtbayaz, Alexander Kazachek, Boyu Wang
Abstract: Physics-informed neural networks (PINNs) have shown promising results on a wide range of problems involving partial differential equations (PDEs). Nevertheless, PINNs often fail as PDEs become more complex: when PDE coefficients grow large or the PDEs become increasingly nonlinear, PINNs struggle to converge to the true solution. A noticeable discrepancy emerges between the convergence speed of the PDE loss and that of the initial/boundary-condition loss, preventing PINNs from effectively learning the true solutions. In this work, leveraging neural tangent kernels (NTKs), we investigate the training dynamics of PINNs. Our theoretical analysis reveals that when PINNs are trained with gradient descent with momentum (GDM), the gap in convergence rates between the two loss terms is significantly reduced, enabling the learning of the exact solution. We also examine why the Adam optimizer accelerates convergence and likewise reduces the effect of this discrepancy. Our numerical experiments validate that sufficiently wide networks trained with GDM and Adam yield desirable solutions for more complex PDEs.
Neural Networks, vol. 183, art. 106963.
Citations: 0
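The convergence-rate gap described above can be reproduced on a toy problem: a two-term quadratic loss whose terms differ sharply in curvature, standing in for the stiff PDE-residual loss and the milder initial/boundary loss. A hedged sketch (illustrative constants, not the paper's NTK analysis) comparing plain gradient descent with GDM:

```python
import numpy as np

def train(momentum=0.0, lr=0.009, steps=500):
    """Minimize L(u, v) = 100*u**2 + v**2 by (heavy-ball) gradient
    descent; u mimics a stiff PDE-residual term, v a mild boundary term."""
    theta = np.array([1.0, 1.0])
    vel = np.zeros(2)
    for _ in range(steps):
        grad = np.array([200.0 * theta[0], 2.0 * theta[1]])
        vel = momentum * vel - lr * grad
        theta = theta + vel
    return theta

gd = train(momentum=0.0)   # plain GD: the mild v-term lags far behind
gdm = train(momentum=0.9)  # GDM: both terms converge at a similar rate
```

With plain GD the step size is pinned by the stiff term, so the mild term decays very slowly; momentum lets both eigendirections contract at a comparable rate, mirroring the reduced gap between the PDE and boundary losses.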
OperaGAN: A simultaneous transfer network for opera makeup and complex headwear.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-09 · DOI: 10.1016/j.neunet.2024.107015
Yue Ma, Chunjie Xu, Wei Song, Hanyu Liang
Abstract: Standard makeup-transfer techniques focus mainly on facial makeup and tend to ignore the texture details of headwear in style examples, so correct simultaneous transfer of headwear and facial makeup cannot be guaranteed for complex portrait styles. In this paper, we construct a Peking Opera makeup dataset and propose OperaGAN, a makeup-transfer network for opera faces. The network consists of two key components: the Makeup and Headwear Style Encoder (MHSEnc), which extracts style features from global and local perspectives, and the Identity Coding and Makeup Fusion module (ICMF), which extracts the source image's facial features and combines them with the style features to generate the final result. In addition, multiple overlapping local discriminators are used to transfer the high-frequency details of opera makeup. Experiments demonstrate that our method achieves state-of-the-art results in simultaneously transferring opera makeup and headwear; it can also transfer headwear with missing content and apply makeup with controllable intensity. The code and dataset will be available at https://github.com/Ivychun/OperaGAN.
Neural Networks, vol. 183, art. 107015.
Citations: 0
FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-09 · DOI: 10.1016/j.neunet.2024.107017
Huy Q Le, Minh N H Nguyen, Chu Myaet Thwal, Yu Qiao, Chaoning Zhang, Choong Seon Hong
Abstract: Federated learning (FL) enables a decentralized machine-learning paradigm in which multiple clients collaboratively train a generalized global model without sharing their private data. Most existing work designs FL systems for unimodal data, limiting their potential to exploit valuable multimodal data in future personalized applications; moreover, most FL approaches still rely on labeled data at the client side, which is often constrained by users' inability to self-annotate their data in real-world applications. In light of these limitations, we propose FedMEKT, a novel multimodal FL framework based on semi-supervised learning that leverages representations from different modalities. To address modality discrepancy and labeled-data constraints, FedMEKT comprises local multimodal autoencoder learning, generalized multimodal autoencoder construction, and generalized classifier learning. Building on this, we develop a distillation-based multimodal embedding knowledge transfer mechanism that lets the server and clients exchange joint multimodal embedding knowledge extracted from a multimodal proxy dataset: FedMEKT iteratively updates the generalized global encoders with joint embedding knowledge from participating clients through upstream and downstream embedding knowledge transfer for local learning. Extensive experiments on four multimodal datasets demonstrate that FedMEKT achieves superior global encoder performance in linear evaluation and preserves the privacy of personal data and model parameters while demanding less communication cost than other baselines.
Neural Networks, vol. 183, art. 107017.
Citations: 0
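Embedding-level knowledge distillation of the kind described above can be sketched with linear toy encoders: a global (student) encoder is driven to match a client (teacher) encoder's embeddings on shared proxy data. All shapes, names, and the linear-encoder choice below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

proxy = rng.normal(size=(32, 8))      # stand-in for a shared proxy dataset
W_teacher = rng.normal(size=(8, 4))   # a client encoder holding the knowledge
W_student = np.zeros((8, 4))          # global encoder to be distilled into

def embedding_loss(W_s, W_t, x):
    """Distillation objective: mean squared distance between the
    student's and teacher's embeddings of the proxy samples."""
    return np.mean(np.sum((x @ W_s - x @ W_t) ** 2, axis=1))

# Plain gradient descent on the embedding-matching objective.
for _ in range(500):
    resid = proxy @ W_student - proxy @ W_teacher
    W_student -= 0.05 * (2.0 * proxy.T @ resid / len(proxy))
```

Only embeddings of proxy samples cross the boundary between parties, which is the privacy argument behind exchanging embedding knowledge rather than raw data or full model weights.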
A multi-memory-augmented network with a curvy metric method for video anomaly detection.
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2024-12-09 · DOI: 10.1016/j.neunet.2024.106972
Hongjun Li, Yunlong Wang, Yating Wang, Junjie Chen
Abstract: Anomaly detection in video refers mainly to identifying, at inference time, events that do not conform to the normal patterns learned during training. Most existing methods rely on the Euclidean metric in both the learning and inference phases, yet this metric cannot reasonably measure differences between high-dimensional data: as the dimension increases, the Euclidean distances between different high-dimensional points gradually become indistinguishable. In this paper, we propose a multi-memory-augmented dual-flow network with a new curvy metric that removes this shortcoming. To the best of our knowledge, this is the first work to detect abnormal events with such a curvy metric. Extensive comparative and migration experiments show that, owing to its independence, the curvy metric can be inserted into any neural network based on the Euclidean metric. In addition, because the powerful representation capacity of deep networks can cause abnormal frames to be reconstructed as normal, we add several memory units to the dual-flow network; these explicitly model the diversity of normal patterns while limiting the network's representation capacity. Our model is easy to train and robust in application. Extensive experiments on five publicly available datasets verify the validity of our method, reflected in its robustness to the diversity of normal events and its sensitivity to abnormal events.
Neural Networks, vol. 184, art. 106972.
Citations: 0
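The shortcoming of the Euclidean metric cited above is the well-known distance-concentration effect, and it can be checked directly: the relative spread of pairwise Euclidean distances between random points collapses as the dimension grows. A small illustrative sketch (a generic demonstration, not the paper's curvy metric):

```python
import numpy as np

def relative_contrast(dim, n=64, seed=0):
    """(max - min) / min over all pairwise Euclidean distances of n
    random points; this relative spread shrinks as dim grows."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(n, dim))
    diff = x[:, None, :] - x[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    d = d[np.triu_indices(n, k=1)]    # keep each unordered pair once
    return (d.max() - d.min()) / d.min()

low = relative_contrast(2)      # large spread in low dimension
high = relative_contrast(512)   # distances nearly indistinguishable
```

When nearest and farthest neighbors are almost equidistant, Euclidean thresholds stop separating normal from abnormal, which is the motivation for replacing the metric.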