Neural Computation: Latest Articles

Rapid Memory Encoding in a Spiking Hippocampus Circuit Model.
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-06-17 DOI: 10.1162/neco_a_01762
Jiashuo Wang, Mengwen Yuan, Jiangrong Shen, Qingao Chai, Huajin Tang
{"title":"Rapid Memory Encoding in a Spiking Hippocampus Circuit Model.","authors":"Jiashuo Wang, Mengwen Yuan, Jiangrong Shen, Qingao Chai, Huajin Tang","doi":"10.1162/neco_a_01762","DOIUrl":"10.1162/neco_a_01762","url":null,"abstract":"<p><p>Memory is a complex process in the brain that involves the encoding, consolidation, and retrieval of previously experienced stimuli. The brain is capable of rapidly forming memories of sensory input. However, applying the memory system to real-world data poses challenges in practical implementation. This article demonstrates that through the integration of sparse spike pattern encoding scheme population tempotron, and various spike-timing-dependent plasticity (STDP) learning rules, supported by bounded weights and biological mechanisms, it is possible to rapidly form stable neural assemblies of external sensory inputs in a spiking neural circuit model inspired by the hippocampal structure. The model employs neural ensemble module and competitive learning strategies that mimic the pattern separation mechanism of the hippocampal dentate gyrus (DG) area to achieve nonoverlapping sparse coding. It also uses population tempotron and NMDA-(N-methyl-D-aspartate)mediated STDP to construct associative and episodic memories, analogous to the CA3 and CA1 regions. These memories are represented by strongly connected neural assemblies formed within just a few trials. Overall, this model offers a robust computational framework to accommodate rapid memory throughout the brain-wide memory process.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1320-1352"},"PeriodicalIF":2.7,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
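
The abstract above combines a sparse encoding scheme, a population tempotron, and STDP rules with bounded weights. Purely to illustrate the last of these ingredients, a generic pair-based STDP update with hard weight bounds is sketched below; the parameter values and names are assumptions for illustration, not the paper's hippocampal circuit model.

import numpy as np

# Generic pair-based STDP with hard weight bounds (illustrative parameters).
A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)
w_min, w_max = 0.0, 1.0            # bounded weights, as mentioned in the abstract

def stdp_update(w, dt):
    """Update a weight for one pre/post spike pair.
    dt = t_post - t_pre (ms): positive (pre before post) potentiates,
    negative (post before pre) depresses."""
    if dt > 0:
        w += A_plus * np.exp(-dt / tau_plus)
    else:
        w -= A_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

print(stdp_update(0.5, dt=5.0))    # causal pairing: weight increases
print(stdp_update(0.5, dt=-5.0))   # anti-causal pairing: weight decreases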
Decision Threshold Learning in the Basal Ganglia for Multiple Alternatives.
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-06-17 DOI: 10.1162/neco_a_01760
Thom Griffith, Sophie-Anne Baker, Nathan F Lepora
{"title":"Decision Threshold Learning in the Basal Ganglia for Multiple Alternatives.","authors":"Thom Griffith, Sophie-Anne Baker, Nathan F Lepora","doi":"10.1162/neco_a_01760","DOIUrl":"10.1162/neco_a_01760","url":null,"abstract":"<p><p>In recent years, researchers have integrated the historically separate, reinforcement learning (RL), and evidence-accumulation-to-bound approaches to decision modeling. A particular outcome of these efforts has been the RL-DDM, a model that combines value learning through reinforcement with a diffusion decision model (DDM). While the RL-DDM is a conceptually elegant extension of the original DDM, it faces a similar problem to the DDM in that it does not scale well to decisions with more than two options. Furthermore, in its current form, the RL-DDM lacks flexibility when it comes to adapting to rapid, context-cued changes in the reward environment. The question of how to best extend combined RL and DDM models so they can handle multiple choices remains open. Moreover, it is currently unclear how these algorithmic solutions should map to neurophysical processes in the brain, particularly in relation to so-called go/no-go-type models of decision making in the basal ganglia. Here, we propose a solution that addresses these issues by combining a previously proposed decision model based on the multichoice sequential probability ratio test (MSPRT), with a dual-pathway model of decision threshold learning in the basal ganglia region of the brain. Our model learns decision thresholds to optimize the trade-off between time cost and the cost of errors and so efficiently allocates the amount of time for decision deliberation. In addition, the model is context dependent and hence flexible to changes to the speed-accuracy trade-off (SAT) in the environment. Furthermore, the model reproduces the magnitude effect, a phenomenon seen experimentally in value-based decisions and is agnostic to the types of evidence and so can be used on perceptual decisions, value-based decisions, and other types of modeled evidence. The broader significance of the model is that it contributes to the active research area of how learning systems interact by linking the previously separate models of RL-DDM to dopaminergic models of motivation and risk taking in the basal ganglia, as well as scaling to multiple alternatives.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1256-1287"},"PeriodicalIF":2.7,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
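
For readers unfamiliar with the MSPRT referenced above, the sketch below is a bare-bones multihypothesis sequential probability ratio test on Gaussian evidence: per-alternative log likelihoods are accumulated and sampling stops once any posterior crosses a fixed threshold. The threshold learning, dual basal ganglia pathways, and context dependence described in the abstract are not modeled here, and all names and parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def msprt(means, true_idx, sigma=1.0, threshold=0.99, max_steps=10_000):
    """Accumulate Gaussian log likelihoods for each alternative and stop
    when the posterior of any alternative exceeds the threshold."""
    means = np.asarray(means, dtype=float)
    log_lik = np.zeros(len(means))
    for t in range(1, max_steps + 1):
        x = rng.normal(means[true_idx], sigma)          # one noisy evidence sample
        log_lik += -(x - means) ** 2 / (2 * sigma**2)   # per-hypothesis log likelihood
        post = np.exp(log_lik - log_lik.max())
        post /= post.sum()                              # posteriors under flat priors
        if post.max() >= threshold:
            return int(post.argmax()), t                # (choice, decision time)
    return int(post.argmax()), max_steps

print(msprt(means=[0.0, 0.3, 0.6, 0.9], true_idx=2))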
A Survey on Artificial Neural Networks in Human-Robot Interaction.
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-06-17 DOI: 10.1162/neco_a_01764
Aleksandra Świetlicka
{"title":"A Survey on Artificial Neural Networks in Human-Robot Interaction.","authors":"Aleksandra Świetlicka","doi":"10.1162/neco_a_01764","DOIUrl":"10.1162/neco_a_01764","url":null,"abstract":"<p><p>Artificial neural networks (ANNs) have shown great potential in enhancing human-robot interaction (HRI). ANNs are computational models inspired by the structure and function of biological neural networks in the brain, which can learn from examples and generalize to new situations. ANNs can be used to enable robots to interact with humans in a more natural and intuitive way by allowing them to recognize human gestures and expressions, understand natural language, and adapt to the environment. ANNs can also be used to improve robot autonomy, allowing robots to learn from their interactions with humans and to make more informed decisions. However, there are also challenges to using ANNs in HRI, including the need for large amounts of training data, issues with explainability, and the potential for bias. This review explores the current state of research on ANNs in HRI, highlighting both the opportunities and challenges of this approach and discussing potential directions for future research. The AI contribution involves applying ANNs to various aspects of HRI, while the application in engineering involves using ANNs to develop more interactive and intuitive robotic systems.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1193-1255"},"PeriodicalIF":2.7,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Closed-Loop Multistep Planning.
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-06-17 DOI: 10.1162/neco_a_01761
Giulia Lafratta, Bernd Porr, Christopher Chandler, Alice Miller
{"title":"Closed-Loop Multistep Planning.","authors":"Giulia Lafratta, Bernd Porr, Christopher Chandler, Alice Miller","doi":"10.1162/neco_a_01761","DOIUrl":"10.1162/neco_a_01761","url":null,"abstract":"<p><p>Living organisms interact with their surroundings in a closed-loop fashion, where sensory inputs dictate the initiation and termination of behaviors. Even simple animals are able to develop and execute complex plans, which has not yet been replicated in robotics using pure closed-loop input control. We propose a solution to this problem by defining a set of discrete and temporary closed-loop controllers, called \"Tasks,\" each representing a closed-loop behavior. We further introduce a supervisory module that has an innate understanding of physics and causality, through which it can simulate the execution of Task sequences over time and store the results in a model of the environment. On the basis of this model, plans can be made by chaining temporary closed-loop controllers. Our proposed framework was implemented for a robot and tested in two scenarios as proof of concept.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1288-1319"},"PeriodicalIF":2.7,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
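
One way to read the "Task" idea in the abstract above: each Task is a temporary closed-loop controller that acts on sensory input until its termination condition holds, and a plan is a chained sequence of Tasks. The Task interface and the toy one-dimensional environment below are illustrative assumptions, not the authors' robot implementation.

class Task:
    """A temporary closed-loop controller: act on sensory input until done."""
    def __init__(self, name, act, done):
        self.name = name
        self.act = act        # maps sensory reading -> motor command
        self.done = done      # termination predicate on sensory reading

    def run(self, env):
        while not self.done(env.sense()):
            env.step(self.act(env.sense()))

class ToyEnv:
    """Toy 1-D world: the agent senses its position and moves by dx."""
    def __init__(self):
        self.x = 0.0
    def sense(self):
        return self.x
    def step(self, dx):
        self.x += dx

approach = Task("approach", act=lambda x: 0.1, done=lambda x: x >= 1.0)
retreat = Task("retreat", act=lambda x: -0.1, done=lambda x: x <= 0.5)

env = ToyEnv()
for task in [approach, retreat]:   # a "plan" as a chained sequence of Tasks
    task.run(env)
print(round(env.x, 2))             # -> 0.5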
Excitation-Inhibition Balance Controls Synchronization in a Simple Model of Coupled Phase Oscillators.
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-06-17 DOI: 10.1162/neco_a_01763
Satoshi Kuroki, Kenji Mizuseki
{"title":"Excitation-Inhibition Balance Controls Synchronization in a Simple Model of Coupled Phase Oscillators.","authors":"Satoshi Kuroki, Kenji Mizuseki","doi":"10.1162/neco_a_01763","DOIUrl":"10.1162/neco_a_01763","url":null,"abstract":"<p><p>Collective neuronal activity in the brain synchronizes during rest and desynchronizes during active behaviors, influencing cognitive processes such as memory consolidation, knowledge abstraction, and creative thinking. These states involve significant modulation of inhibition, which alters the excitation-inhibition (EI) balance of synaptic inputs. However, the influence of the EI balance on collective neuronal oscillation remains only partially understood. In this study, we introduce the EI-Kuramoto model, a modified version of the Kuramoto model, in which oscillators are categorized into excitatory and inhibitory groups with four distinct interaction types: excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory, and inhibitory-inhibitory. Numerical simulations identify three dynamic states-synchronized, bistable, and desynchronized-that can be controlled by adjusting the strength of the four interaction types. Theoretical analysis further demonstrates that the balance among these interactions plays a critical role in determining the dynamic states. This study provides valuable insights into the role of EI balance in synchronizing coupled oscillators and neurons.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1353-1372"},"PeriodicalIF":2.7,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
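
One plausible way to write down a Kuramoto model whose oscillators are split into excitatory and inhibitory groups with four interaction strengths is sketched below. The sign convention (inhibitory couplings entering with a negative sign), the 1/N normalization, and all parameter values are assumptions for illustration and may differ from the paper's EI-Kuramoto definition.

import numpy as np

rng = np.random.default_rng(1)

def ei_kuramoto(K_ee, K_ei, K_ie, K_ii, n_e=80, n_i=20, steps=5_000, dt=0.02):
    """Euler-integrate phase oscillators split into E and I groups and
    return the Kuramoto order parameter r (r near 1 = synchronized)."""
    n = n_e + n_i
    omega = rng.normal(0.0, 0.5, n)                 # natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    K = np.empty((n, n))                            # K[i, j]: effect of oscillator j on i
    K[:n_e, :n_e] = K_ee / n
    K[:n_e, n_e:] = -K_ei / n                       # inhibition enters negatively
    K[n_e:, :n_e] = K_ie / n
    K[n_e:, n_e:] = -K_ii / n
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]
        theta += dt * (omega + np.sum(K * np.sin(phase_diff), axis=1))
    return abs(np.mean(np.exp(1j * theta)))

print(ei_kuramoto(K_ee=4.0, K_ei=1.0, K_ie=4.0, K_ii=1.0))  # excitation-dominated regime
print(ei_kuramoto(K_ee=1.0, K_ei=4.0, K_ie=1.0, K_ii=4.0))  # inhibition-dominated regime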
Reformulation of RBM to Unify Linear and Nonlinear Dimensionality Reduction
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-04-17 DOI: 10.1162/neco_a_01751
Jiangsheng You;Chun-Yen Liu
{"title":"Reformulation of RBM to Unify Linear and Nonlinear Dimensionality Reduction","authors":"Jiangsheng You;Chun-Yen Liu","doi":"10.1162/neco_a_01751","DOIUrl":"10.1162/neco_a_01751","url":null,"abstract":"A restricted Boltzmann machine (RBM) is a two-layer neural network with shared weights and has been extensively studied for dimensionality reduction, data representation, and recommendation systems in the literature. The traditional RBM requires a probabilistic interpretation of the values on both layers and a Markov chain Monte Carlo (MCMC) procedure to generate samples during the training. The contrastive divergence (CD) is efficient to train the RBM, but its convergence has not been proved mathematically. In this letter, we investigate the RBM by using a maximum a posteriori (MAP) estimate and the expectation–maximization (EM) algorithm. We show that the CD algorithm without MCMC is convergent for the conditional likelihood object function. Another key contribution in this letter is the reformulation of the RBM into a deterministic model. Within the reformulated RBM, the CD algorithm without MCMC approximates the gradient descent (GD) method. This reformulated RBM can take the continuous scalar and vector variables on the nodes with flexibility in choosing the activation functions. Numerical experiments show its capability in both linear and nonlinear dimensionality reduction, and for the nonlinear dimensionality reduction, the reformulated RBM can outperform principal component analysis (PCA) by choosing the proper activation functions. Finally, we demonstrate its application to vector-valued nodes for the CIFAR-10 data set (color images) and the multivariate sequence data, which cannot be configured naturally with the traditional RBM. This work not only provides theoretical insights regarding the traditional RBM but also unifies the linear and nonlinear dimensionality reduction for scalar and vector variables.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 5","pages":"1034-1055"},"PeriodicalIF":2.7,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
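
As background for the letter above, here is a sketch of the classical sampling-based CD-1 update for a Bernoulli-Bernoulli RBM, i.e., the traditional training procedure whose convergence properties are at issue. The MAP/EM analysis and the deterministic, MCMC-free reformulation contributed by the letter are not reproduced; layer sizes, learning rate, and data are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.05):
    """One CD-1 update for a Bernoulli-Bernoulli RBM.
    v0: (batch, n_visible); W: (n_visible, n_hidden); b, c: visible/hidden biases."""
    ph0 = sigmoid(v0 @ W + c)                        # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # one Gibbs sample of the hidden layer
    pv1 = sigmoid(h0 @ W.T + b)                      # reconstruction P(v = 1 | h0)
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)   # positive minus negative phase
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

n_v, n_h = 6, 3
W = 0.01 * rng.standard_normal((n_v, n_h))
b, c = np.zeros(n_v), np.zeros(n_h)
data = (rng.random((32, n_v)) < 0.5).astype(float)
for _ in range(100):
    W, b, c = cd1_step(data, W, b, c)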
A Generalized Time Rescaling Theorem for Temporal Point Processes
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-04-17 DOI: 10.1162/neco_a_01745
Xi Zhang;Akshay Aravamudan;Georgios C. Anagnostopoulos
{"title":"A Generalized Time Rescaling Theorem for Temporal Point Processes","authors":"Xi Zhang;Akshay Aravamudan;Georgios C. Anagnostopoulos","doi":"10.1162/neco_a_01745","DOIUrl":"10.1162/neco_a_01745","url":null,"abstract":"Temporal point processes are essential for modeling event dynamics in fields such as neuroscience and social media. The time rescaling theorem is commonly used to assess model fit by transforming a point process into a homogeneous Poisson process. However, this approach requires that the process be nonterminating and that complete (hence, unbounded) realizations are observed—conditions that are often unmet in practice. This article introduces a generalized time-rescaling theorem to address these limitations and, as such, facilitates a more widely applicable evaluation framework for point process models in diverse real-world scenarios.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 5","pages":"871-885"},"PeriodicalIF":2.7,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979820","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
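
For context, the classical time-rescaling result that the article generalizes can be stated and checked as follows: map each event time t_k through the cumulative intensity Λ(t) = ∫₀ᵗ λ(s) ds; if the model intensity is correct, the rescaled intervals z_k = Λ(t_k) − Λ(t_{k−1}) are unit-rate exponential, so u_k = 1 − exp(−z_k) should be uniform on (0, 1) and can be checked with a Kolmogorov-Smirnov test. The sketch below implements only this classical check, assuming a complete, nonterminating realization and a uniform time grid.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def ks_time_rescaling(event_times, intensity, t_grid):
    """Classical time-rescaling goodness-of-fit check.
    intensity: model conditional intensity evaluated on the uniform t_grid."""
    dt = t_grid[1] - t_grid[0]
    Lambda_grid = np.cumsum(intensity) * dt      # cumulative intensity
    Lambda = np.interp(event_times, t_grid, Lambda_grid)
    z = np.diff(Lambda, prepend=0.0)             # rescaled inter-event intervals
    u = 1.0 - np.exp(-z)                         # ~ Uniform(0, 1) under the true model
    return stats.kstest(u, "uniform")

# Example: a homogeneous Poisson process tested against its true rate.
rate, T = 2.0, 500.0
events = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
t_grid = np.linspace(0.0, T, 5001)
print(ks_time_rescaling(events, np.full_like(t_grid, rate), t_grid))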
Adding Space to Random Networks of Spiking Neurons: A Method Based on Scaling the Network Size
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-04-17 DOI: 10.1162/neco_a_01747
Cecilia Romaro;Jose Roberto Castilho Piqueira;A. C. Roque
{"title":"Adding Space to Random Networks of Spiking Neurons: A Method Based on Scaling the Network Size","authors":"Cecilia Romaro;Jose Roberto Castilho Piqueira;A. C. Roque","doi":"10.1162/neco_a_01747","DOIUrl":"10.1162/neco_a_01747","url":null,"abstract":"Many spiking neural network models are based on random graphs that do not include topological and structural properties featured in real brain networks. To turn these models into spatial networks that describe the topographic arrangement of connections is a challenging task because one has to deal with neurons at the spatial network boundary. Addition of space may generate spurious network behavior like oscillations introduced by periodic boundary conditions or unbalanced neuronal spiking due to lack or excess of connections. Here, we introduce a boundary solution method for networks with added spatial extension that prevents the occurrence of spurious spiking behavior. The method is based on a recently proposed technique for scaling the network size that preserves first- and second-order statistics.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 5","pages":"957-986"},"PeriodicalIF":2.7,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
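
The abstract refers to a scaling technique that preserves first- and second-order statistics. Purely for orientation, one generic recipe of that kind is sketched below: when the in-degree is reduced by a factor k, keep the variance of the summed synaptic input fixed by scaling weights as J/√k and restore the mean with a compensating DC input. The exact technique and boundary treatment used in the paper are not reproduced, and the formulas assume the simple mean and variance expressions stated in the comments.

import numpy as np

def downscale(J, K, nu, tau, k):
    """Scale in-degree K by factor k (< 1) while preserving the mean and
    variance of the summed synaptic input, assuming mean ~ K*J*nu*tau and
    variance ~ K*J**2*nu*tau (J: weight, nu: presynaptic rate, tau: time constant).
    Returns the scaled weight and the compensating DC input."""
    J_scaled = J / np.sqrt(k)                      # keeps the input variance fixed
    I_dc = (1.0 - np.sqrt(k)) * K * J * nu * tau   # restores the input mean
    return J_scaled, I_dc

# Example: reduce the in-degree to 10% of the original network.
print(downscale(J=0.1, K=1000, nu=8.0, tau=0.01, k=0.1))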
Elucidating the Theoretical Underpinnings of Surrogate Gradient Learning in Spiking Neural Networks
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-04-17 DOI: 10.1162/neco_a_01752
Julia Gygax;Friedemann Zenke
{"title":"Elucidating the Theoretical Underpinnings of Surrogate Gradient Learning in Spiking Neural Networks","authors":"Julia Gygax;Friedemann Zenke","doi":"10.1162/neco_a_01752","DOIUrl":"10.1162/neco_a_01752","url":null,"abstract":"Training spiking neural networks to approximate universal functions is essential for studying information processing in the brain and for neuromorphic computing. Yet the binary nature of spikes poses a challenge for direct gradient-based training. Surrogate gradients have been empirically successful in circumventing this problem, but their theoretical foundation remains elusive. Here, we investigate the relation of surrogate gradients to two theoretically well-founded approaches. On the one hand, we consider smoothed probabilistic models, which, due to the lack of support for automatic differentiation, are impractical for training multilayer spiking neural networks but provide derivatives equivalent to surrogate gradients for single neurons. On the other hand, we investigate stochastic automatic differentiation, which is compatible with discrete randomness but has not yet been used to train spiking neural networks. We find that the latter gives surrogate gradients a theoretical basis in stochastic spiking neural networks, where the surrogate derivative matches the derivative of the neuronal escape noise function. This finding supports the effectiveness of surrogate gradients in practice and suggests their suitability for stochastic spiking neural networks. However, surrogate gradients are generally not gradients of a surrogate loss despite their relation to stochastic automatic differentiation. Nevertheless, we empirically confirm the effectiveness of surrogate gradients in stochastic multilayer spiking neural networks and discuss their relation to deterministic networks as a special case. Our work gives theoretical support to surrogate gradients and the choice of a suitable surrogate derivative in stochastic spiking neural networks.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 5","pages":"886-925"},"PeriodicalIF":2.7,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10979826","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
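
As a concrete reminder of what a surrogate gradient is, the sketch below defines a spike nonlinearity whose forward pass is a hard threshold and whose backward pass uses a fast-sigmoid surrogate derivative, a common choice in practice. The escape-noise functions and the stochastic automatic differentiation analysis discussed in the abstract are not reproduced; beta and the surrogate shape are illustrative assumptions.

import torch

class SpikeFastSigmoid(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate derivative
    1 / (beta * |u| + 1)^2 in the backward pass."""
    beta = 10.0

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        surrogate = 1.0 / (SpikeFastSigmoid.beta * u.abs() + 1.0) ** 2
        return grad_output * surrogate

spike = SpikeFastSigmoid.apply
u = torch.randn(5, requires_grad=True)   # membrane potentials minus threshold
loss = spike(u).sum()
loss.backward()
print(u.grad)                            # nonzero despite the hard threshold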
Distributed Synaptic Connection Strength Changes Dynamics in a Population Firing Rate Model in Response to Continuous External Stimuli
IF 2.7 · Q4 · Computer Science
Neural Computation Pub Date: 2025-04-17 DOI: 10.1162/neco_a_01749
Masato Sugino;Mai Tanaka;Kenta Shimba;Kiyoshi Kotani;Yasuhiko Jimbo
{"title":"Distributed Synaptic Connection Strength Changes Dynamics in a Population Firing Rate Model in Response to Continuous External Stimuli","authors":"Masato Sugino;Mai Tanaka;Kenta Shimba;Kiyoshi Kotani;Yasuhiko Jimbo","doi":"10.1162/neco_a_01749","DOIUrl":"10.1162/neco_a_01749","url":null,"abstract":"Neural network complexity allows for diverse neuronal population dynamics and realizes higherorder brain functions such as cognition and memory. Complexity is enhanced through chemical synapses with exponentially decaying conductance and greater variation in the neuronal connection strength due to synaptic plasticity. However, in the macroscopic neuronal population model, synaptic connections are often described by spike connections, and connection strengths within the population are assumed to be uniform. Thus, the effects of synaptic connections variation on network synchronization remain unclear. Based on recent advances in mean field theory for the quadratic integrate-and-fire neuronal network model, we introduce synaptic conductance and variation of connection strength into the excitatory and inhibitory neuronal population model and derive the macroscopic firing rate equations for faithful modeling. We then introduce a heuristic switching rule of the dynamic system with respect to the mean membrane potentials to avoid divergences in the computation caused by variations in the neuronal connection strength. We show that the switching rule agrees with the numerical computation of the microscopic level model. In the derived model, variations in synaptic conductance and connection strength strongly alter the stability of the solutions to the equations, which is related to the mechanism of synchronous firing. When we apply physiologically plausible values from layer 4 of the mammalian primary visual cortex to the derived model, we observe event-related desynchronization at the alpha and beta frequencies and event-related synchronization at the gamma frequency over a wide range of balanced external currents. Our results show that the introduction of complex synaptic connections and physiologically valid numerical values into the low-dimensional mean field equations reproduces dynamic changes such as eventrelated (de)synchronization, and provides a unique mathematical insight into the relationship between synaptic strength variation and oscillatory mechanism.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 5","pages":"987-1009"},"PeriodicalIF":2.7,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
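
For orientation, the baseline low-dimensional mean-field (firing rate) equations for a population of quadratic integrate-and-fire neurons with Lorentzian-distributed excitabilities, which this line of work builds on (Montbrió, Pazó, & Roxin, 2015), read, up to notation,

\begin{aligned}
\tau \dot{r} &= \frac{\Delta}{\pi \tau} + 2 r v, \\
\tau \dot{v} &= v^{2} + \bar{\eta} + J \tau r - (\pi \tau r)^{2},
\end{aligned}

where r is the population firing rate, v the mean membrane potential, τ the membrane time constant, η̄ and Δ the center and half-width of the excitability distribution, and J a uniform synaptic weight. The article above extends this kind of description with exponentially decaying synaptic conductances and distributed connection strengths, plus the switching rule mentioned in the abstract; those extensions are not reproduced here.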