Neural Computation: Latest Publications

Prototype Analysis in Hopfield Networks With Hebbian Learning
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01704
Hayden McAlister; Anthony Robins; Lech Szymanski
Abstract: We discuss prototype formation in the Hopfield network. Typically, Hebbian learning with highly correlated states leads to degraded memory performance. We show that this type of learning can lead to prototype formation, where unlearned states emerge as representatives of large correlated subsets of states, alleviating capacity woes. This process has similarities to prototype learning in human cognition. We provide a substantial literature review of prototype learning in associative memories, covering contributions from psychology, statistical physics, and computer science. We analyze prototype formation from a theoretical perspective and derive a stability condition for these states based on the number of examples of the prototype presented for learning, the noise in those examples, and the number of nonexample states presented. The stability condition is used to construct a probability of stability for a prototype state as the factors of stability change. We also note similarities to traditional network analysis, allowing us to find a prototype capacity. We corroborate these expectations of prototype formation with experiments using a simple Hopfield network with standard Hebbian learning. We extend our experiments to a Hopfield network trained on data with multiple prototypes and find the network is capable of stabilizing multiple prototypes concurrently. We measure the basins of attraction of the multiple prototype states, finding attractor strength grows with the number of examples and the agreement of examples. We link the stability and dominance of prototype states to the energy profile of these states, particularly when comparing the profile shape to target states or other spurious states.
Neural Computation, vol. 36, no. 11, pp. 2322-2364.
Citations: 0
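
To make the mechanism in the abstract above concrete, here is a minimal sketch of standard Hebbian learning on noisy examples of a single prototype, checking whether the never-learned prototype becomes a stable state. It is not the authors' code: the network size, example count, noise level, and the use of synchronous updates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200           # number of +/-1 units (illustrative)
n_examples = 40   # noisy examples generated from one prototype
flip_prob = 0.15  # per-bit noise when sampling examples

# A prototype state and noisy examples of it; only the examples are learned.
prototype = rng.choice([-1, 1], size=N)
flips = rng.random((n_examples, N)) < flip_prob
examples = prototype * np.where(flips, -1, 1)

# Standard Hebbian learning on the examples, with zero self-connections.
W = examples.T @ examples / N
np.fill_diagonal(W, 0.0)

def sign(h):
    # Tie-break zeros toward +1 so states stay in {-1, +1}.
    return np.where(h >= 0, 1, -1)

def is_stable(state, W):
    """A state is stable if one synchronous update leaves it unchanged."""
    return np.array_equal(sign(W @ state), state)

def recall(state, W, max_steps=50):
    """Synchronous updates until a fixed point (or the step limit)."""
    for _ in range(max_steps):
        new = sign(W @ state)
        if np.array_equal(new, state):
            break
        state = new
    return state

print("prototype stable:", is_stable(prototype, W))
print("first example stable:", is_stable(examples[0], W))

# A corrupted prototype should be pulled back toward the prototype itself.
probe = prototype * np.where(rng.random(N) < 0.2, -1, 1)
overlap = recall(probe, W) @ prototype / N
print(f"overlap with prototype after recall: {overlap:.2f}")
```

With enough sufficiently clean examples the unlearned prototype is a fixed point and attracts corrupted probes, which is the qualitative effect the paper analyzes; the paper's stability condition makes this quantitative in terms of example count, noise, and nonexample load.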
Latent Space Bayesian Optimization With Latent Data Augmentation for Enhanced Exploration
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01708
Onur Boyar; Ichiro Takeuchi
Abstract: Latent space Bayesian optimization (LSBO) combines generative models, typically variational autoencoders (VAE), with Bayesian optimization (BO), to generate de novo objects of interest. However, LSBO faces challenges due to the mismatch between the objectives of BO and VAE, resulting in poor exploration capabilities. In this article, we propose novel contributions to enhance LSBO efficiency and overcome this challenge. We first introduce the concept of latent consistency/inconsistency as a crucial problem in LSBO, arising from the VAE-BO mismatch. To address this, we propose the latent consistent aware-acquisition function (LCA-AF) that leverages consistent points in LSBO. Additionally, we present LCA-VAE, a novel VAE method that creates a latent space with increased consistent points through data augmentation in latent space and penalization of latent inconsistencies. Combining LCA-VAE and LCA-AF, we develop LCA-LSBO. Our approach achieves high sample efficiency and effective exploration, emphasizing the significance of addressing latent consistency through the novel incorporation of data augmentation in latent space within LCA-VAE in LSBO. We showcase the performance of our proposal via de novo image generation and de novo chemical design tasks.
Neural Computation, vol. 36, no. 11, pp. 2446-2478.
Citations: 0
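
For orientation, the following sketch shows the generic latent-space BO loop that the abstract builds on, not the proposed LCA-VAE/LCA-AF method: the pretrained encoder/decoder is stubbed out with simple functions, a Gaussian process surrogate is fit over latent points, and expected improvement is maximized by random search. The toy objective, latent dimensionality, and bounds are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Stand-ins for a pretrained VAE; in real LSBO these are learned networks.
def decode(z):                      # latent point -> object (here: identity)
    return z

def objective(x):                   # black-box score of a decoded object
    return -np.sum((x - 1.5) ** 2)  # toy maximum at x = (1.5, 1.5)

# Initial design in a 2-D latent space.
Z = rng.uniform(-3, 3, size=(8, 2))
y = np.array([objective(decode(z)) for z in Z])

for it in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(Z, y)

    # Expected improvement over the incumbent, maximized by random search.
    cand = rng.uniform(-3, 3, size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    imp = mu - y.max()
    zscore = np.divide(imp, sd, out=np.zeros_like(sd), where=sd > 0)
    ei = imp * norm.cdf(zscore) + sd * norm.pdf(zscore)

    z_next = cand[np.argmax(ei)]
    y_next = objective(decode(z_next))        # evaluate the decoded candidate
    Z = np.vstack([Z, z_next])
    y = np.append(y, y_next)

print("best latent point:", Z[np.argmax(y)], "score:", y.max())
```

The paper's contribution modifies both the VAE training and the acquisition so that the search favors latent points that remain consistent between the VAE and the BO objective; the outer loop above is only the part shared with standard LSBO.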
Learning Internal Representations of 3D Transformations From 2D Projected Inputs
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01695
Marissa Connor; Bruno Olshausen; Christopher Rozell
Abstract: We describe a computational model for inferring 3D structure from the motion of projected 2D points in an image, with the aim of understanding how biological vision systems learn and internally represent 3D transformations from the statistics of their input. The model uses manifold transport operators to describe the action of 3D points in a scene as they undergo transformation. We show that the model can learn the generator of the Lie group for these transformations from purely 2D input, providing a proof-of-concept demonstration for how biological systems could adapt their internal representations based on sensory input. Focusing on a rotational model, we evaluate the ability of the model to infer depth from moving 2D projected points and to learn rotational transformations from 2D training stimuli. Finally, we compare the model performance to psychophysical performance on structure-from-motion tasks.
Neural Computation, vol. 36, no. 11, pp. 2505-2539.
Citations: 0
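
As a reference point for the setup described above, here is a minimal sketch of the forward model only: a 3D point cloud rotates under a one-parameter Lie group generated by a skew-symmetric matrix, and is observed through an orthographic 2D projection. The rotation axis, point cloud, and projection are illustrative assumptions; the paper's contribution, learning the generator from such 2D sequences, is not implemented here.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A skew-symmetric generator of rotations about an (assumed) axis.
axis = np.array([0.3, 0.8, 0.5])
axis /= np.linalg.norm(axis)
wx, wy, wz = axis
G = np.array([[0.0, -wz,  wy],
              [ wz, 0.0, -wx],
              [-wy,  wx, 0.0]])   # element of so(3)

def rotate(points, angle):
    """Apply the one-parameter group element exp(angle * G) to 3-D points."""
    R = expm(angle * G)
    return points @ R.T

def project(points):
    """Orthographic projection: discard depth, keep (x, y)."""
    return points[:, :2]

# A random 3-D point cloud, observed only as 2-D projections while it rotates;
# sequences like this are the kind of 2-D training stimuli the model learns from.
cloud = rng.normal(size=(12, 3))
frames = [project(rotate(cloud, a)) for a in np.linspace(0.0, 1.0, 6)]

for t, f in enumerate(frames):
    print(f"frame {t}: first point projects to {f[0].round(3)}")
```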
Spiking Neural Network Pressure Sensor
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01706
Michał Markiewicz; Ireneusz Brzozowski; Szymon Janusz
Abstract: Von Neumann architecture requires information to be encoded as numerical values. For that reason, artificial neural networks running on computers require the data coming from sensors to be discretized. Other network architectures that more closely mimic biological neural networks (e.g., spiking neural networks) can be simulated on von Neumann architecture, but more important, they can also be executed on dedicated electrical circuits having orders of magnitude less power consumption. Unfortunately, input signal conditioning and encoding are usually not supported by such circuits, so a separate module consisting of an analog-to-digital converter, encoder, and transmitter is required. The aim of this article is to propose a sensor architecture, the output signal of which can be directly connected to the input of a spiking neural network. We demonstrate that the output signal is a valid spike source for the Izhikevich model neurons, ensuring the proper operation of a number of neurocomputational features. The advantages are clear: much lower power consumption, smaller area, and a less complex electronic circuit. The main disadvantage is that sensor characteristics somehow limit the parameters of applicable spiking neurons. The proposed architecture is illustrated by a case study involving a capacitive pressure sensor circuit, which is compatible with most of the neurocomputational properties of the Izhikevich neuron model. The sensor itself is characterized by very low power consumption: it draws only 3.49 μA at 3.3 V.
Neural Computation, vol. 36, no. 11, pp. 2299-2321.
Citations: 0
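
Since the sensor is evaluated as a spike source for Izhikevich model neurons, a minimal simulation of that neuron model may be useful context. The step input current below is a toy stand-in for the sensor's output, not the capacitive circuit from the paper; the regular-spiking parameters are the standard published ones.

```python
import numpy as np

# Izhikevich (2003) model, regular-spiking parameters.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.5                      # ms
T = int(1000 / dt)            # simulate 1 s

v = -65.0                     # membrane potential (mV)
u = b * v                     # recovery variable
spike_times = []

for step in range(T):
    t = step * dt
    # Pressure-dependent drive: a step increase halfway through (illustrative).
    I = 5.0 if t < 500 else 12.0

    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)

    if v >= 30.0:             # spike: reset v and bump u
        spike_times.append(t)
        v, u = c, u + d

n_low = sum(1 for s in spike_times if s < 500)
n_high = len(spike_times) - n_low
print(f"spikes in first 500 ms: {n_low}, in last 500 ms: {n_high}")
```

A higher drive produces a higher firing rate, which is the basic sense in which an analog sensor output can act as a spike source for downstream spiking neurons.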
ℓ1-Regularized ICA: A Novel Method for Analysis of Task-Related fMRI Data
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01709
Yusuke Endo; Koujin Takeda
Abstract: We propose a new method of independent component analysis (ICA) in order to extract appropriate features from high-dimensional data. In general, matrix factorization methods including ICA have a problem regarding the interpretability of extracted features. For the improvement of interpretability, sparse constraint on a factorized matrix is helpful. With this background, we construct a new ICA method with sparsity. In our method, the ℓ1-regularization term is added to the cost function of ICA, and minimization of the cost function is performed by a difference of convex functions algorithm. To validate our proposed method, we apply it to synthetic data and real functional magnetic resonance imaging data.
Neural Computation, vol. 36, no. 11, pp. 2540-2570.
Citations: 0
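
To illustrate the cost structure only (not the paper's difference-of-convex optimizer), the sketch below adds an ℓ1 subgradient term to a maximum-likelihood ICA update on toy mixed sources. Which factor carries the penalty, and all fMRI-specific details, are simplified away; the data, penalty weight, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two super-Gaussian sources, linearly mixed and centered.
n = 5000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S
X -= X.mean(axis=1, keepdims=True)

lam = 0.05          # weight of the l1 penalty on the unmixing matrix
lr = 0.05
W = np.eye(2)

for it in range(300):
    Y = W @ X
    g = np.tanh(Y)                        # score function for super-Gaussian sources
    # Natural-gradient ascent on the ICA log-likelihood ...
    grad_ica = (np.eye(2) - g @ Y.T / n) @ W
    # ... plus a subgradient step for the l1 penalty encouraging a sparse W.
    W += lr * (grad_ica - lam * np.sign(W))

print("unmixing matrix W:\n", W.round(3))
print("W @ A (should be close to a scaled permutation):\n", (W @ A).round(3))
```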
Deconstructing Deep Active Inference: A Contrarian Information Gatherer
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01697
Théophile Champion; Marek Grześ; Lisa Bonheme; Howard Bowman
Abstract: Active inference is a theory of perception, learning, and decision making that can be applied to neuroscience, robotics, psychology, and machine learning. Recently, intensive research has been taking place to scale up this framework using Monte Carlo tree search and deep learning. The goal of this activity is to solve more complicated tasks using deep active inference. First, we review the existing literature and then progressively build a deep active inference agent as follows: we (1) implement a variational autoencoder (VAE), (2) implement a deep hidden Markov model (HMM), and (3) implement a deep critical hidden Markov model (CHMM). For the CHMM, we implemented two versions, one minimizing expected free energy, CHMM[EFE], and one maximizing rewards, CHMM[reward]. Then we experimented with three different action selection strategies: the ε-greedy algorithm as well as softmax and best action selection. According to our experiments, the models able to solve the dSprites environment are the ones that maximize rewards. On further inspection, we found that the CHMM minimizing expected free energy almost always picks the same action, which makes it unable to solve the dSprites environment. In contrast, the CHMM maximizing reward keeps on selecting all the actions, enabling it to successfully solve the task. The only difference between those two CHMMs is the epistemic value, which aims to make the outputs of the transition and encoder networks as close as possible. Thus, the CHMM minimizing expected free energy repeatedly picks a single action and becomes an expert at predicting the future when selecting this action. This effectively makes the KL divergence between the output of the transition and encoder networks small. Additionally, when selecting the action "down," the average reward is zero, while for all the other actions, the expected reward will be negative. Therefore, if the CHMM has to stick to a single action to keep the KL divergence small, then the action "down" is the most rewarding. We also show in simulation that the epistemic value used in deep active inference can behave degenerately and in certain circumstances effectively lose, rather than gain, information. As the agent minimizing EFE is not able to explore its environment, the appropriate formulation of the epistemic value in deep active inference remains an open question.
Neural Computation, vol. 36, no. 11, pp. 2403-2445.
Citations: 0
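
The three action-selection strategies named in the abstract are standard; the sketch below shows how each maps a vector of per-action scores (standing in for negative expected free energy or expected reward from the agent's critic) to a chosen action. The score values and the four-action layout are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_action(values):
    """Always pick the highest-scoring action."""
    return int(np.argmax(values))

def epsilon_greedy(values, eps=0.1):
    """Random action with probability eps, otherwise the best one."""
    if rng.random() < eps:
        return int(rng.integers(len(values)))
    return int(np.argmax(values))

def softmax_action(values, temperature=1.0):
    """Sample an action with probability proportional to exp(value / T)."""
    z = np.asarray(values) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(values), p=p))

# Made-up scores for four dSprites-style actions (up, down, left, right),
# standing in for the critic's -EFE or expected-reward outputs.
values = np.array([-1.2, 0.0, -0.8, -1.5])

print("best:", best_action(values))
print("eps-greedy:", epsilon_greedy(values))
print("softmax:", softmax_action(values, temperature=0.5))
```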
Predictive Representations: Building Blocks of Intelligence
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-10-11 DOI: 10.1162/neco_a_01705
Wilka Carvalho; Momchil S. Tomov; William de Cothi; Caswell Barry; Samuel J. Gershman
Abstract: Adaptive behavior often requires predicting future events. The theory of reinforcement learning prescribes what kinds of predictive representations are useful and how to compute them. This review integrates these theoretical ideas with work on cognition and neuroscience. We pay special attention to the successor representation and its generalizations, which have been widely applied as both engineering tools and models of brain function. This convergence suggests that particular kinds of predictive representations may function as versatile building blocks of intelligence.
Neural Computation, vol. 36, no. 11, pp. 2225-2298.
Citations: 0
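
For readers unfamiliar with the successor representation the review centers on, here is a minimal tabular illustration: the SR of a fixed policy in a toy five-state ring MDP, computed in closed form, and the one-line value computation it enables. The MDP, policy, and reward vector are assumptions chosen for illustration.

```python
import numpy as np

# Successor representation for a small deterministic ring of 5 states under
# a fixed policy that always moves clockwise.
n_states = 5
gamma = 0.9

# Policy-induced transition matrix P[s, s'].
P = np.zeros((n_states, n_states))
for s in range(n_states):
    P[s, (s + 1) % n_states] = 1.0

# Closed form: M = (I - gamma * P)^{-1}, the expected discounted future
# occupancy of each state s' when starting from s.
M = np.linalg.inv(np.eye(n_states) - gamma * P)

# Values for any reward vector follow by a single matrix-vector product,
# which is what makes the SR a reusable building block.
reward = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
V = M @ reward

print("successor matrix M:\n", M.round(2))
print("state values V:", V.round(2))
```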
Electrical Signaling Beyond Neurons
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-09-17 DOI: 10.1162/neco_a_01696
Travis Monk; Nik Dennler; Nicholas Ralph; Shavika Rastogi; Saeed Afshar; Pablo Urbizagastegui; Russell Jarvis; André van Schaik; Andrew Adamatzky
Abstract: Neural action potentials (APs) are difficult to interpret as signal encoders and/or computational primitives. Their relationships with stimuli and behaviors are obscured by the staggering complexity of nervous systems themselves. We can reduce this complexity by observing that "simpler" neuron-less organisms also transduce stimuli into transient electrical pulses that affect their behaviors. Without a complicated nervous system, APs are often easier to understand as signal/response mechanisms. We review examples of nonneural stimulus transductions in domains of life largely neglected by theoretical neuroscience: bacteria, protozoans, plants, fungi, and neuron-less animals. We report properties of those electrical signals: for example, amplitudes, durations, ionic bases, refractory periods, and particularly their ecological purposes. We compare those properties with those of neurons to infer the tasks and selection pressures that neurons satisfy. Throughout the tree of life, nonneural stimulus transductions time behavioral responses to environmental changes. Nonneural organisms represent the presence or absence of a stimulus with the presence or absence of an electrical signal. Their transductions usually exhibit high sensitivity and specificity to a stimulus, but are often slow compared to neurons. Neurons appear to be sacrificing the specificity of their stimulus transductions for sensitivity and speed. We interpret cellular stimulus transductions as a cell's assertion that it detected something important at that moment in time. In particular, we consider neural APs as fast but noisy detection assertions. We infer that a principal goal of nervous systems is to detect extremely weak signals from noisy sensory spikes under enormous time pressure. We discuss neural computation proposals that address this goal by casting neurons as devices that implement online, analog, probabilistic computations with their membrane potentials. Those proposals imply a measurable relationship between afferent neural spiking statistics and efferent neural membrane electrophysiology.
Neural Computation, vol. 36, no. 10, pp. 1939-2029. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713896
Citations: 0
Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-09-17 DOI: 10.1162/neco_a_01702
Zeyuan Wang; Luis Cruz
Abstract: Spiking neural networks (SNNs) are next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying the plastic parameters of SNNs, including weights and time delays, SNNs can be trained to perform various AI tasks, although in general not at the same level of performance as typical artificial neural networks (ANNs). One possible solution to improve the performance of SNNs is to consider plastic parameters other than just weights and time delays drawn from the inherent complexity of the neural system of the brain, which may help SNNs improve their information processing ability and achieve brainlike functions. Here, we propose reference spikes as a new type of plastic parameters in a supervised learning scheme in SNNs. A neuron receives reference spikes through synapses providing reference information independent of input to help during learning, whose number of spikes and timings are trainable by error backpropagation. Theoretically, reference spikes improve the temporal information processing of SNNs by modulating the integration of incoming spikes at a detailed level. Through comparative computational experiments, we demonstrate using supervised learning that reference spikes improve the memory capacity of SNNs to map input spike patterns to target output spike patterns and increase classification accuracy on the MNIST, Fashion-MNIST, and SHD data sets, where both input and target output are temporally encoded. Our results demonstrate that applying reference spikes improves the performance of SNNs by enhancing their temporal information processing ability.
Neural Computation, vol. 36, no. 10, pp. 2136-2169.
Citations: 0
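
As a rough illustration of the idea (not the paper's trainable, backpropagated version), the sketch below feeds fixed "reference" spikes into a leaky integrate-and-fire neuron through a separate synapse and compares output spike times with and without them. The neuron model, weights, and spike times are assumptions; in the paper the number and timing of reference spikes are learned.

```python
import numpy as np

dt, T = 1.0, 100                 # ms
tau, v_th, v_reset = 20.0, 1.0, 0.0

input_times = [10, 30, 55, 80]   # ms, incoming data spikes
reference_times = [28, 53]       # ms, assumed reference spikes
w_in, w_ref = 0.6, 0.5           # synaptic weights

def run(ref_times):
    """Simulate a leaky integrate-and-fire neuron and return its spike times."""
    v, out = 0.0, []
    for step in range(T):
        t = step * dt
        v += dt * (-v / tau)                       # leak
        v += w_in * (t in input_times)             # data synapse
        v += w_ref * (t in ref_times)              # reference synapse
        if v >= v_th:
            out.append(t)
            v = v_reset
    return out

print("output spikes without reference spikes:", run([]))
print("output spikes with reference spikes:   ", run(reference_times))
```

In this toy setting the data spikes alone never reach threshold, while well-timed reference spikes shift the membrane potential enough to produce output spikes at specific times, which is the modulation-of-integration effect the abstract describes.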
Inference on the Macroscopic Dynamics of Spiking Neurons
IF 2.7 | CAS Q4 | Computer Science
Neural Computation Pub Date: 2024-09-17 DOI: 10.1162/neco_a_01701
Nina Baldy; Martin Breyton; Marmaduke M. Woodman; Viktor K. Jirsa; Meysam Hashemi
Abstract: The process of inference on networks of spiking neurons is essential to decipher the underlying mechanisms of brain computation and function. In this study, we conduct inference on parameters and dynamics of a mean-field approximation, simplifying the interactions of neurons. Estimating parameters of this class of generative model allows one to predict the system's dynamics and responses under changing inputs and, indeed, changing parameters. We first assume a set of known state-space equations and address the problem of inferring the lumped parameters from observed time series. Crucially, we consider this problem in the setting of bistability, random fluctuations in system dynamics, and partial observations, in which some states are hidden. To identify the most efficient estimation or inversion scheme for this particular system identification problem, we benchmark against state-of-the-art optimization and Bayesian estimation algorithms, highlighting their strengths and weaknesses. Additionally, we explore how well the statistical relationships between parameters are maintained across different scales. We found that deep neural density estimators outperform other algorithms in the inversion scheme, despite potentially resulting in overestimated uncertainty and correlation between parameters. Nevertheless, this issue can be improved by incorporating time-delay embedding. We then eschew the mean-field approximation and employ deep neural ODEs on spiking neurons, illustrating prediction of system dynamics and vector fields from microscopic states. Overall, this study affords an opportunity to predict brain dynamics and responses to various perturbations or pharmacological interventions using deep neural networks.
Neural Computation, vol. 36, no. 10, pp. 2030-2072. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713873
Citations: 0
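
One concrete ingredient mentioned in the abstract, time-delay embedding of the observed time series, is easy to sketch. The toy signal below stands in for the observed mean-field variable (the model itself is not simulated), and the embedding dimension and lag are assumptions; the embedded vectors would then serve as features for the density estimator or optimizer.

```python
import numpy as np

def delay_embed(x, dim=3, lag=5):
    """Stack lagged copies of a scalar time series into state vectors.

    Row t is (x[t], x[t - lag], ..., x[t - (dim - 1) * lag]), a standard
    delay embedding used to expose hidden-state information carried by
    the history of a partially observed signal.
    """
    start = (dim - 1) * lag
    return np.column_stack([x[start - k * lag : len(x) - k * lag]
                            for k in range(dim)])

rng = np.random.default_rng(0)

# Toy partially observed trajectory with two slow rhythms plus noise.
t = np.arange(0, 200, 0.1)
x = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t) + 0.05 * rng.normal(size=t.size)

E = delay_embed(x, dim=3, lag=20)
print("embedded feature matrix shape:", E.shape)   # (n_samples, dim)
```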