arXiv - CS - Neural and Evolutionary Computing: Latest Articles

Evolutionary Algorithms Are Significantly More Robust to Noise When They Ignore It
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-31 DOI: arxiv-2409.00306
Denis Antipov, Benjamin Doerr
{"title":"Evolutionary Algorithms Are Significantly More Robust to Noise When They Ignore It","authors":"Denis Antipov, Benjamin Doerr","doi":"arxiv-2409.00306","DOIUrl":"https://doi.org/arxiv-2409.00306","url":null,"abstract":"Randomized search heuristics (RHSs) are generally believed to be robust to\u0000noise. However, almost all mathematical analyses on how RSHs cope with a noisy\u0000access to the objective function assume that each solution is re-evaluated\u0000whenever it is compared to others. This is unfortunate, both because it wastes\u0000computational resources and because it requires the user to foresee that noise\u0000is present (as in a noise-free setting, one would never re-evaluate solutions). In this work, we show the need for re-evaluations could be overestimated, and\u0000in fact, detrimental. For the classic benchmark problem of how the $(1+1)$\u0000evolutionary algorithm optimizes the LeadingOnes benchmark, we show that\u0000without re-evaluations up to constant noise rates can be tolerated, much more\u0000than the $O(n^{-2} log n)$ noise rates that can be tolerated when\u0000re-evaluating solutions. This first runtime analysis of an evolutionary algorithm solving a\u0000single-objective noisy problem without re-evaluations could indicate that such\u0000algorithms cope with noise much better than previously thought, and without the\u0000need to foresee the presence of noise.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Continual learning with the neural tangent ensemble
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-30 DOI: arxiv-2408.17394
Ari S. Benjamin, Christian Pehle, Kyle Daruwalla
{"title":"Continual learning with the neural tangent ensemble","authors":"Ari S. Benjamin, Christian Pehle, Kyle Daruwalla","doi":"arxiv-2408.17394","DOIUrl":"https://doi.org/arxiv-2408.17394","url":null,"abstract":"A natural strategy for continual learning is to weigh a Bayesian ensemble of\u0000fixed functions. This suggests that if a (single) neural network could be\u0000interpreted as an ensemble, one could design effective algorithms that learn\u0000without forgetting. To realize this possibility, we observe that a neural\u0000network classifier with N parameters can be interpreted as a weighted ensemble\u0000of N classifiers, and that in the lazy regime limit these classifiers are fixed\u0000throughout learning. We term these classifiers the neural tangent experts and\u0000show they output valid probability distributions over the labels. We then\u0000derive the likelihood and posterior probability of each expert given past data.\u0000Surprisingly, we learn that the posterior updates for these experts are\u0000equivalent to a scaled and projected form of stochastic gradient descent (SGD)\u0000over the network weights. Away from the lazy regime, networks can be seen as\u0000ensembles of adaptive experts which improve over time. These results offer a\u0000new interpretation of neural networks as Bayesian ensembles of experts,\u0000providing a principled framework for understanding and mitigating catastrophic\u0000forgetting in continual learning settings.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stepwise Weighted Spike Coding for Deep Spiking Neural Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-30 DOI: arxiv-2408.17245
Yiwen Gu, Junchuan Gu, Haibin Shen, Kejie Huang
{"title":"Stepwise Weighted Spike Coding for Deep Spiking Neural Networks","authors":"Yiwen Gu, Junchuan Gu, Haibin Shen, Kejie Huang","doi":"arxiv-2408.17245","DOIUrl":"https://doi.org/arxiv-2408.17245","url":null,"abstract":"Spiking Neural Networks (SNNs) seek to mimic the spiking behavior of\u0000biological neurons and are expected to play a key role in the advancement of\u0000neural computing and artificial intelligence. The efficiency of SNNs is often\u0000determined by the neural coding schemes. Existing coding schemes either cause\u0000huge delays and energy consumption or necessitate intricate neuron models and\u0000training techniques. To address these issues, we propose a novel Stepwise\u0000Weighted Spike (SWS) coding scheme to enhance the encoding of information in\u0000spikes. This approach compresses the spikes by weighting the significance of\u0000the spike in each step of neural computation, achieving high performance and\u0000low energy consumption. A Ternary Self-Amplifying (TSA) neuron model with a\u0000silent period is proposed for supporting SWS-based computing, aimed at\u0000minimizing the residual error resulting from stepwise weighting in neural\u0000computation. Our experimental results show that the SWS coding scheme\u0000outperforms the existing neural coding schemes in very deep SNNs, and\u0000significantly reduces operations and latency.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Estimation of Unique Components in Independent Component Analysis by Matrix Representation
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-30 DOI: arxiv-2408.17118
Yoshitatsu Matsuda, Kazunori Yamaguch
{"title":"Efficient Estimation of Unique Components in Independent Component Analysis by Matrix Representation","authors":"Yoshitatsu Matsuda, Kazunori Yamaguch","doi":"arxiv-2408.17118","DOIUrl":"https://doi.org/arxiv-2408.17118","url":null,"abstract":"Independent component analysis (ICA) is a widely used method in various\u0000applications of signal processing and feature extraction. It extends principal\u0000component analysis (PCA) and can extract important and complicated components\u0000with small variances. One of the major problems of ICA is that the uniqueness\u0000of the solution is not guaranteed, unlike PCA. That is because there are many\u0000local optima in optimizing the objective function of ICA. It has been shown\u0000previously that the unique global optimum of ICA can be estimated from many\u0000random initializations by handcrafted thread computation. In this paper, the\u0000unique estimation of ICA is highly accelerated by reformulating the algorithm\u0000in matrix representation and reducing redundant calculations. Experimental\u0000results on artificial datasets and EEG data verified the efficiency of the\u0000proposed method.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ART: Actually Robust Training
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-29 DOI: arxiv-2408.16285
Sebastian Chwilczyński, Kacper Trębacz, Karol Cyganik, Mateusz Małecki, Dariusz Brzezinski
{"title":"ART: Actually Robust Training","authors":"Sebastian Chwilczyński, Kacper Trębacz, Karol Cyganik, Mateusz Małecki, Dariusz Brzezinski","doi":"arxiv-2408.16285","DOIUrl":"https://doi.org/arxiv-2408.16285","url":null,"abstract":"Current interest in deep learning captures the attention of many programmers\u0000and researchers. Unfortunately, the lack of a unified schema for developing\u0000deep learning models results in methodological inconsistencies, unclear\u0000documentation, and problems with reproducibility. Some guidelines have been\u0000proposed, yet currently, they lack practical implementations. Furthermore,\u0000neural network training often takes on the form of trial and error, lacking a\u0000structured and thoughtful process. To alleviate these issues, in this paper, we\u0000introduce Art, a Python library designed to help automatically impose rules and\u0000standards while developing deep learning pipelines. Art divides model\u0000development into a series of smaller steps of increasing complexity, each\u0000concluded with a validation check improving the interpretability and robustness\u0000of the process. The current version of Art comes equipped with nine predefined\u0000steps inspired by Andrej Karpathy's Recipe for Training Neural Networks, a\u0000visualization dashboard, and integration with loggers such as Neptune. The code\u0000related to this paper is available at:\u0000https://github.com/SebChw/Actually-Robust-Training.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Maelstrom Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-29 DOI: arxiv-2408.16632
Matthew Evanusa, Cornelia Fermüller, Yiannis Aloimonos
{"title":"Maelstrom Networks","authors":"Matthew Evanusa, Cornelia Fermüller, Yiannis Aloimonos","doi":"arxiv-2408.16632","DOIUrl":"https://doi.org/arxiv-2408.16632","url":null,"abstract":"Artificial Neural Networks has struggled to devise a way to incorporate\u0000working memory into neural networks. While the ``long term'' memory can be seen\u0000as the learned weights, the working memory consists likely more of dynamical\u0000activity, that is missing from feed-forward models. Current state of the art\u0000models such as transformers tend to ``solve'' this by ignoring working memory\u0000entirely and simply process the sequence as an entire piece of data; however\u0000this means the network cannot process the sequence in an online fashion, and\u0000leads to an immense explosion in memory requirements. Here, inspired by a\u0000combination of controls, reservoir computing, deep learning, and recurrent\u0000neural networks, we offer an alternative paradigm that combines the strength of\u0000recurrent networks, with the pattern matching capability of feed-forward neural\u0000networks, which we call the textit{Maelstrom Networks} paradigm. This paradigm\u0000leaves the recurrent component - the textit{Maelstrom} - unlearned, and\u0000offloads the learning to a powerful feed-forward network. This allows the\u0000network to leverage the strength of feed-forward training without unrolling the\u0000network, and allows for the memory to be implemented in new neuromorphic\u0000hardware. It endows a neural network with a sequential memory that takes\u0000advantage of the inductive bias that data is organized causally in the temporal\u0000domain, and imbues the network with a state that represents the agent's\u0000``self'', moving through the environment. This could also lead the way to\u0000continual learning, with the network modularized and ``'protected'' from\u0000overwrites that come with new data. In addition to aiding in solving these\u0000performance problems that plague current non-temporal deep networks, this also\u0000could finally lead towards endowing artificial networks with a sense of\u0000``self''.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reconsidering the energy efficiency of spiking neural networks
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-29 DOI: arxiv-2409.08290
Zhanglu Yan, Zhenyu Bai, Weng-Fai Wong
{"title":"Reconsidering the energy efficiency of spiking neural networks","authors":"Zhanglu Yan, Zhenyu Bai, Weng-Fai Wong","doi":"arxiv-2409.08290","DOIUrl":"https://doi.org/arxiv-2409.08290","url":null,"abstract":"Spiking neural networks (SNNs) are generally regarded as more\u0000energy-efficient because they do not use multiplications. However, most SNN\u0000works only consider the counting of additions to evaluate energy consumption,\u0000neglecting other overheads such as memory accesses and data movement\u0000operations. This oversight can lead to a misleading perception of efficiency,\u0000especially when state-of-the-art SNN accelerators operate with very small time\u0000window sizes. In this paper, we present a detailed comparison of the energy\u0000consumption of artificial neural networks (ANNs) and SNNs from a hardware\u0000perspective. We provide accurate formulas for energy consumption based on\u0000classical multi-level memory hierarchy architectures, commonly used\u0000neuromorphic dataflow architectures, and our proposed improved spatial-dataflow\u0000architecture. Our research demonstrates that to achieve comparable accuracy and\u0000greater energy efficiency than ANNs, SNNs require strict limitations on both\u0000time window size T and sparsity s. For instance, with the VGG16 model and a\u0000fixed T of 6, the neuron sparsity rate must exceed 93% to ensure energy\u0000efficiency across most architectures. Inspired by our findings, we explore\u0000strategies to enhance energy efficiency by increasing sparsity. We introduce\u0000two regularization terms during training that constrain weights and\u0000activations, effectively boosting the sparsity rate. Our experiments on the\u0000CIFAR-10 dataset, using T of 6, show that our SNNs consume 69% of the energy\u0000used by optimized ANNs on spatial-dataflow architectures, while maintaining an\u0000SNN accuracy of 94.18%. This framework, developed using PyTorch, is publicly\u0000available for use and further research.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142248998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spiking Diffusion Models
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-29 DOI: arxiv-2408.16467
Jiahang Cao, Hanzhong Guo, Ziqing Wang, Deming Zhou, Hao Cheng, Qiang Zhang, Renjing Xu
{"title":"Spiking Diffusion Models","authors":"Jiahang Cao, Hanzhong Guo, Ziqing Wang, Deming Zhou, Hao Cheng, Qiang Zhang, Renjing Xu","doi":"arxiv-2408.16467","DOIUrl":"https://doi.org/arxiv-2408.16467","url":null,"abstract":"Recent years have witnessed Spiking Neural Networks (SNNs) gaining attention\u0000for their ultra-low energy consumption and high biological plausibility\u0000compared with traditional Artificial Neural Networks (ANNs). Despite their\u0000distinguished properties, the application of SNNs in the computationally\u0000intensive field of image generation is still under exploration. In this paper,\u0000we propose the Spiking Diffusion Models (SDMs), an innovative family of\u0000SNN-based generative models that excel in producing high-quality samples with\u0000significantly reduced energy consumption. In particular, we propose a\u0000Temporal-wise Spiking Mechanism (TSM) that allows SNNs to capture more temporal\u0000features from a bio-plasticity perspective. In addition, we propose a\u0000threshold-guided strategy that can further improve the performances by up to\u000016.7% without any additional training. We also make the first attempt to use\u0000the ANN-SNN approach for SNN-based generation tasks. Extensive experimental\u0000results reveal that our approach not only exhibits comparable performance to\u0000its ANN counterpart with few spiking time steps, but also outperforms previous\u0000SNN-based generative models by a large margin. Moreover, we also demonstrate\u0000the high-quality generation ability of SDM on large-scale datasets, e.g., LSUN\u0000bedroom. This development marks a pivotal advancement in the capabilities of\u0000SNN-based generation, paving the way for future research avenues to realize\u0000low-energy and low-latency generative applications. Our code is available at\u0000https://github.com/AndyCao1125/SDM.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Addressing Common Misinterpretations of KART and UAT in Neural Network Literature
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-29 DOI: arxiv-2408.16389
Vugar Ismailov
{"title":"Addressing Common Misinterpretations of KART and UAT in Neural Network Literature","authors":"Vugar Ismailov","doi":"arxiv-2408.16389","DOIUrl":"https://doi.org/arxiv-2408.16389","url":null,"abstract":"This note addresses the Kolmogorov-Arnold Representation Theorem (KART) and\u0000the Universal Approximation Theorem (UAT), focusing on their common\u0000misinterpretations in some papers related to neural network approximation. Our\u0000remarks aim to support a more accurate understanding of KART and UAT among\u0000neural network specialists.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Denoising Technique and Deep Learning Based Hybrid Wind Speed Forecasting Model for Variable Terrain Conditions
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-28 DOI: arxiv-2408.15554
Sourav Malakar, Saptarsi Goswami, Amlan Chakrabarti, Bhaswati Ganguli
{"title":"A Novel Denoising Technique and Deep Learning Based Hybrid Wind Speed Forecasting Model for Variable Terrain Conditions","authors":"Sourav Malakar, Saptarsi Goswami, Amlan Chakrabarti, Bhaswati Ganguli","doi":"arxiv-2408.15554","DOIUrl":"https://doi.org/arxiv-2408.15554","url":null,"abstract":"Wind flow can be highly unpredictable and can suffer substantial fluctuations\u0000in speed and direction due to the shape and height of hills, mountains, and\u0000valleys, making accurate wind speed (WS) forecasting essential in complex\u0000terrain. This paper presents a novel and adaptive model for short-term\u0000forecasting of WS. The paper's key contributions are as follows: (a) The\u0000Partial Auto Correlation Function (PACF) is utilised to minimise the dimension\u0000of the set of Intrinsic Mode Functions (IMF), hence reducing training time; (b)\u0000The sample entropy (SampEn) was used to calculate the complexity of the reduced\u0000set of IMFs. The proposed technique is adaptive since a specific Deep Learning\u0000(DL) model-feature combination was chosen based on complexity; (c) A novel\u0000bidirectional feature-LSTM framework for complicated IMFs has been suggested,\u0000resulting in improved forecasting accuracy; (d) The proposed model shows\u0000superior forecasting performance compared to the persistence, hybrid, Ensemble\u0000empirical mode decomposition (EEMD), and Variational Mode Decomposition\u0000(VMD)-based deep learning models. It has achieved the lowest variance in terms\u0000of forecasting accuracy between simple and complex terrain conditions 0.70%.\u0000Dimension reduction of IMF's and complexity-based model-feature selection helps\u0000reduce the training time by 68.77% and improve forecasting quality by 58.58% on\u0000average.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142188265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0