Neural Computation: Latest Articles

Human Eyes–Inspired Recurrent Neural Networks Are More Robust Against Adversarial Noises
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-08-19 DOI: 10.1162/neco_a_01688
Minkyu Choi;Yizhen Zhang;Kuan Han;Xiaokai Wang;Zhongming Liu
{"title":"Human Eyes–Inspired Recurrent Neural Networks Are More Robust Against Adversarial Noises","authors":"Minkyu Choi;Yizhen Zhang;Kuan Han;Xiaokai Wang;Zhongming Liu","doi":"10.1162/neco_a_01688","DOIUrl":"10.1162/neco_a_01688","url":null,"abstract":"Humans actively observe the visual surroundings by focusing on salient objects and ignoring trivial details. However, computer vision models based on convolutional neural networks (CNN) often analyze visual input all at once through a single feedforward pass. In this study, we designed a dual-stream vision model inspired by the human brain. This model features retina-like input layers and includes two streams: one determining the next point of focus (the fixation), while the other interprets the visuals surrounding the fixation. Trained on image recognition, this model examines an image through a sequence of fixations, each time focusing on different parts, thereby progressively building a representation of the image. We evaluated this model against various benchmarks in terms of object recognition, gaze behavior, and adversarial robustness. Our findings suggest that the model can attend and gaze in ways similar to humans without being explicitly trained to mimic human attention and that the model can enhance robustness against adversarial attacks due to its retinal sampling and recurrent processing. In particular, the model can correct its perceptual errors by taking more glances, setting itself apart from all feedforward-only models. In conclusion, the interactions of retinal sampling, eye movement, and recurrent dynamics are important to human-like visual exploration and inference.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 9","pages":"1713-1743"},"PeriodicalIF":2.7,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
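The retina-like input layer lends itself to a compact illustration: sample the image at high resolution around the fixation point and at progressively coarser resolution in the periphery. The sketch below is a hypothetical reconstruction of that idea in plain NumPy, not the authors' code; the patch size, number of scales, and averaging-based downsampling are all assumptions. A recurrent controller would consume these glimpses and emit the next fixation; only the sampling stage is sketched.

```python
import numpy as np

def foveated_glimpse(image, fix_r, fix_c, patch=16, scales=3):
    """Sample concentric patches around a fixation point.

    Each successive patch covers four times the area of the previous
    one but is average-pooled down to the same patch x patch
    resolution, mimicking the loss of acuity away from the fovea.
    Returns an array of shape (scales, patch, patch).
    """
    H, W = image.shape
    glimpse = []
    for s in range(scales):
        half = (patch << s) // 2          # patch radius doubles per scale
        r0, r1 = max(0, fix_r - half), min(H, fix_r + half)
        c0, c1 = max(0, fix_c - half), min(W, fix_c + half)
        crop = np.zeros((2 * half, 2 * half))   # zero-pad at image borders
        crop[r0 - (fix_r - half):r1 - (fix_r - half),
             c0 - (fix_c - half):c1 - (fix_c - half)] = image[r0:r1, c0:c1]
        k = 2 ** s                        # average-pool back to patch x patch
        pooled = crop.reshape(patch, k, patch, k).mean(axis=(1, 3))
        glimpse.append(pooled)
    return np.stack(glimpse)

img = np.random.rand(64, 64)
print(foveated_glimpse(img, 20, 40).shape)  # (3, 16, 16)
```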
Extended Poisson Gaussian-Process Latent Variable Model for Unsupervised Neural Decoding
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01685
Della Daiyi Luo;Bapun Giri;Kamran Diba;Caleb Kemere
{"title":"Extended Poisson Gaussian-Process Latent Variable Model for Unsupervised Neural Decoding","authors":"Della Daiyi Luo;Bapun Giri;Kamran Diba;Caleb Kemere","doi":"10.1162/neco_a_01685","DOIUrl":"10.1162/neco_a_01685","url":null,"abstract":"Dimension reduction on neural activity paves a way for unsupervised neural decoding by dissociating the measurement of internal neural pattern reactivation from the measurement of external variable tuning. With assumptions only on the smoothness of latent dynamics and of internal tuning curves, the Poisson gaussian-process latent variable model (P-GPLVM; Wu et al., 2017) is a powerful tool to discover the low-dimensional latent structure for high-dimensional spike trains. However, when given novel neural data, the original model lacks a method to infer their latent trajectories in the learned latent space, limiting its ability for estimating the neural reactivation. Here, we extend the P-GPLVM to enable the latent variable inference of new data constrained by previously learned smoothness and mapping information. We also describe a principled approach for the constrained latent variable inference for temporally compressed patterns of activity, such as those found in population burst events during hippocampal sharp-wave ripples, as well as metrics for assessing the validity of neural pattern reactivation and inferring the encoded experience. Applying these approaches to hippocampal ensemble recordings during active maze exploration, we replicate the result that P-GPLVM learns a latent space encoding the animal’s position. We further demonstrate that this latent space can differentiate one maze context from another. By inferring the latent variables of new neural data during running, certain neural patterns are observed to reactivate, in accordance with the similarity of experiences encoded by its nearby neural trajectories in the training data manifold. Finally, reactivation of neural patterns can be estimated for neural activity during population burst events as well, allowing the identification for replay events of versatile behaviors and more general experiences. Thus, our extension of the P-GPLVM framework for unsupervised analysis of neural activity can be used to answer critical questions related to scientific discovery.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1449-1475"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141728313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
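The model's two ingredients, a Gaussian-process prior enforcing smooth latent trajectories and a Poisson likelihood of binned spike counts, can be written down in a few lines. This is a minimal sketch of those building blocks only, not the authors' inference procedure; the RBF kernel, bin width, and Gaussian-bump tuning curves are assumptions.

```python
import numpy as np

def rbf_kernel(t, length=0.5, var=1.0):
    """Squared-exponential kernel enforcing smooth latent trajectories."""
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def poisson_loglik(spikes, log_rates, dt=0.02):
    """Log p(spike counts | rates) under independent Poisson bins.

    spikes, log_rates: arrays of shape (T, n_neurons).
    """
    rates = np.exp(log_rates) * dt
    return np.sum(spikes * np.log(rates + 1e-12) - rates)

# Toy check: a smooth 1-D latent sampled from the GP prior, mapped
# through fixed tuning curves to firing rates, then to spike counts.
rng = np.random.default_rng(0)
t = np.linspace(0, 2, 100)
K = rbf_kernel(t) + 1e-6 * np.eye(100)
x = rng.multivariate_normal(np.zeros(100), K)         # latent trajectory
centers = np.linspace(-2, 2, 5)                       # 5 neurons' tuning
log_rates = 3.0 - (x[:, None] - centers[None, :])**2  # Gaussian bumps
spikes = rng.poisson(np.exp(log_rates) * 0.02)
print(poisson_loglik(spikes, log_rates))
```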
Energy Complexity of Convolutional Neural Networks
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01676
Jiří Šíma;Petra Vidnerová;Vojtěch Mrázek
{"title":"Energy Complexity of Convolutional Neural Networks","authors":"Jiří Šíma;Petra Vidnerová;Vojtěch Mrázek","doi":"10.1162/neco_a_01676","DOIUrl":"10.1162/neco_a_01676","url":null,"abstract":"The energy efficiency of hardware implementations of convolutional neural networks (CNNs) is critical to their widespread deployment in low-power mobile devices. Recently, a number of methods have been proposed for providing energy-optimal mappings of CNNs onto diverse hardware accelerators. Their estimated energy consumption is related to specific implementation details and hardware parameters, which does not allow for machine-independent exploration of CNN energy measures. In this letter, we introduce a simplified theoretical energy complexity model for CNNs, based on only a two-level memory hierarchy that captures asymptotically all important sources of energy consumption for different CNN hardware implementations. In this model, we derive a simple energy lower bound and calculate the energy complexity of evaluating a CNN layer for two common data flows, providing corresponding upper bounds. According to statistical tests, the theoretical energy upper and lower bounds we present fit asymptotically very well with the real energy consumption of CNN implementations on the Simba and Eyeriss hardware platforms, estimated by the Timeloop/Accelergy program, which validates the proposed energy complexity model for CNNs.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1601-1625"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
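The spirit of a two-level memory model can be conveyed with back-of-the-envelope counting: charge every operand that crosses the DRAM/buffer boundary, plus a per-MAC compute cost. The function below is an illustrative lower-bound-style estimate under stated assumptions (each input, weight, and output transferred exactly once; 'same' padding; placeholder unit costs), not the paper's exact model.

```python
def conv_layer_data_movement(H, W, C_in, C_out, K, e_dram=1.0, e_mac=0.01):
    """Crude energy estimate for one conv layer under a two-level
    memory hierarchy: each input, weight, and output crosses the
    DRAM/buffer boundary at least once (an optimistic lower bound),
    plus a per-MAC compute cost.

    The unit costs e_dram and e_mac are placeholders, not calibrated
    to any real hardware.
    """
    inputs  = H * W * C_in
    weights = K * K * C_in * C_out
    outputs = H * W * C_out                 # 'same' padding assumed
    macs    = H * W * C_out * K * K * C_in
    return e_dram * (inputs + weights + outputs) + e_mac * macs

# A 3x3 conv on a 56x56 feature map, 64 -> 128 channels:
print(conv_layer_data_movement(56, 56, 64, 128, 3))
```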
Trade-Offs Between Energy and Depth of Neural Networks
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01683
Kei Uchizawa;Haruki Abe
{"title":"Trade-Offs Between Energy and Depth of Neural Networks","authors":"Kei Uchizawa;Haruki Abe","doi":"10.1162/neco_a_01683","DOIUrl":"10.1162/neco_a_01683","url":null,"abstract":"We present an investigation on threshold circuits and other discretized neural networks in terms of the following four computational resources—size (the number of gates), depth (the number of layers), weight (weight resolution), and energy—where the energy is a complexity measure inspired by sparse coding and is defined as the maximum number of gates outputting nonzero values, taken over all the input assignments. As our main result, we prove that if a threshold circuit C of size s, depth d, energy e, and weight w computes a Boolean function f (i.e., a classification task) of n variables, it holds that log( rk (f))≤ed(logs+logw+logn) regardless of the algorithm employed by C to compute f, where rk (f) is a parameter solely determined by a scale of f and defined as the maximum rank of a communication matrix with regard to f taken over all the possible partitions of the n input variables. For example, given a Boolean function CD n(ξ) = ⋁i=1n/2ξi∧ξn/2+i, we can prove that n/2≤ed( log s+logw+logn) holds for any circuit C computing CD n. While its left-hand side is linear in n, its right-hand side is bounded by the product of the logarithmic factors of s,w,n and the linear factors of d,e. If we view the logarithmic terms as having a negligible impact on the bound, our result implies a trade-off between depth and energy: n/2 needs to be smaller than the product of e and d. For other neural network models, such as discretized ReLU circuits and discretized sigmoid circuits, we also prove that a similar trade-off holds. Thus, our results indicate that increasing depth linearly enhances the capability of neural networks to acquire sparse representations when there are hardware constraints on the number of neurons and weight resolution.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1541-1567"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141728316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
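The bound is easy to evaluate numerically. Assuming base-2 logarithms, rearranging n/2 ≤ e·d·(log s + log w + log n) gives a floor on the energy-depth product e·d for any circuit computing CD_n; the parameter values below are arbitrary examples.

```python
import math

def min_energy_depth_product(n, s, w):
    """Lower bound on e*d implied by n/2 <= e*d*(log s + log w + log n)
    for the pattern-matching function CD_n (logarithms base 2 assumed)."""
    return (n / 2) / (math.log2(s) + math.log2(w) + math.log2(n))

# A circuit of 10^6 gates with 16-bit weights on 4096 inputs:
print(min_energy_depth_product(4096, 10**6, 2**16))  # e*d must exceed ~42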
Promoting the Shift From Pixel-Level Correlations to Object Semantics Learning by Rethinking Computer Vision Benchmark Data Sets
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01677
Maria Osório;Andreas Wichert
{"title":"Promoting the Shift From Pixel-Level Correlations to Object Semantics Learning by Rethinking Computer Vision Benchmark Data Sets","authors":"Maria Osório;Andreas Wichert","doi":"10.1162/neco_a_01677","DOIUrl":"10.1162/neco_a_01677","url":null,"abstract":"In computer vision research, convolutional neural networks (CNNs) have demonstrated remarkable capabilities at extracting patterns from raw pixel data, achieving state-of-the-art recognition accuracy. However, they significantly differ from human visual perception, prioritizing pixel-level correlations and statistical patterns, often overlooking object semantics. To explore this difference, we propose an approach that isolates core visual features crucial for human perception and object recognition: color, texture, and shape. In experiments on three benchmarks—Fruits 360, CIFAR-10, and Fashion MNIST—each visual feature is individually input into a neural network. Results reveal data set–dependent variations in classification accuracy, highlighting that deep learning models tend to learn pixel-level correlations instead of fundamental visual features. To validate this observation, we used various combinations of concatenated visual features as input for a neural network on the CIFAR-10 data set. CNNs excel at learning statistical patterns in images, achieving exceptional performance when training and test data share similar distributions. To substantiate this point, we trained a CNN on CIFAR-10 data set and evaluated its performance on the “dog” class from CIFAR-10 and on an equivalent number of examples from the Stanford Dogs data set. The CNN poor performance on Stanford Dogs images underlines the disparity between deep learning and human visual perception, highlighting the need for models that learn object semantics. Specialized benchmark data sets with controlled variations hold promise for aligning learned representations with human cognition in computer vision research.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1626-1642"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
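Feature isolation of this kind can be prototyped with elementary image operations. The sketch below is one hypothetical way to split an image into crude color, texture, and shape channels for separate classifier inputs; the specific operators (spatial color averaging, gradient energy, thresholded edges) are assumptions, not the paper's pipeline.

```python
import numpy as np

def isolate_features(img):
    """Split an RGB image (H, W, 3), values in [0, 1], into crude
    color / texture / shape channels."""
    gray = img.mean(axis=2)
    # Color: spatial detail averaged away, per-channel mean everywhere.
    color = np.ones_like(img) * img.mean(axis=(0, 1))
    # Texture: local gradient energy of the grayscale image.
    gy, gx = np.gradient(gray)
    texture = np.sqrt(gx**2 + gy**2)
    # Shape: binarized edges, discarding color and fine texture.
    shape = (texture > texture.mean() + texture.std()).astype(float)
    return color, texture, shape

img = np.random.rand(32, 32, 3)
color, texture, shape = isolate_features(img)
print(color.shape, texture.shape, shape.shape)
```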
A General, Noise-Driven Mechanism for the 1/f-Like Behavior of Neural Field Spectra
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01682
Mark A. Kramer;Catherine J. Chu
{"title":"A General, Noise-Driven Mechanism for the 1/f-Like Behavior of Neural Field Spectra","authors":"Mark A. Kramer;Catherine J. Chu","doi":"10.1162/neco_a_01682","DOIUrl":"10.1162/neco_a_01682","url":null,"abstract":"Consistent observations across recording modalities, experiments, and neural systems find neural field spectra with 1/f-like scaling, eliciting many alternative theories to explain this universal phenomenon. We show that a general dynamical system with stochastic drive and minimal assumptions generates 1/f-like spectra consistent with the range of values observed in vivo without requiring a specific biological mechanism or collective critical behavior.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1643-1668"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141728312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
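The simplest instance of a noise-driven system with a 1/f-like spectrum is an Ornstein-Uhlenbeck process: its power spectrum is Lorentzian, flat below the cutoff 1/(2πτ) and falling like 1/f² above it, with intermediate apparent slopes near the knee. The simulation below illustrates this; it is a minimal example in the same spirit, not the paper's general model, and all parameter values are arbitrary.

```python
import numpy as np

# Ornstein-Uhlenbeck process: dv = -v/tau dt + sigma dW (Euler-Maruyama).
rng = np.random.default_rng(1)
dt, tau, sigma, n = 1e-3, 0.05, 1.0, 2**18
v = np.zeros(n)
noise = rng.standard_normal(n) * np.sqrt(dt)
for i in range(1, n):
    v[i] = v[i-1] - v[i-1] / tau * dt + sigma * noise[i]

# Periodogram: flat below f_c = 1/(2*pi*tau) ~ 3 Hz, ~1/f^2 above it.
freqs = np.fft.rfftfreq(n, dt)
psd = np.abs(np.fft.rfft(v))**2 * dt / n
band = (freqs > 10) & (freqs < 100)
slope = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)[0]
print(f"log-log slope in the 10-100 Hz band: {slope:.2f}")  # near -2
```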
Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01680
Bastian Pietras
{"title":"Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks","authors":"Bastian Pietras","doi":"10.1162/neco_a_01680","DOIUrl":"10.1162/neco_a_01680","url":null,"abstract":"Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the θ-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses are contradictory, and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse coupling in networks of QIF and θ-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism at the heart of emergent collective behavior, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission in spiking neuron networks.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1476-1540"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141728315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
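A bare-bones version of such a network is easy to simulate: globally coupled QIF neurons whose spikes each inject a finite-width pulse into a shared synaptic variable. The sketch below uses an exponential pulse, so the synaptic effect outlasts the spike itself (i.e., it is skewed to the phase after the spike); the pulse shape, Lorentzian drive, and all parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Globally coupled quadratic integrate-and-fire (QIF) neurons whose
# spikes each trigger a finite-width exponential synaptic pulse.
rng = np.random.default_rng(2)
N, dt, T = 500, 1e-4, 2.0
v_peak, J, tau_s = 100.0, 15.0, 5e-3
eta = 1.0 + 0.5 * rng.standard_cauchy(N)   # heterogeneous drive
v = rng.uniform(-2.0, 2.0, N)
s = 0.0                                    # shared synaptic activation
rates = []
for _ in range(int(T / dt)):
    v += dt * (v**2 + eta + J * s)         # QIF dynamics with coupling
    spiked = v >= v_peak
    v[spiked] = -v_peak                    # reset (peak approximates +inf)
    # Exponentially decaying pulse of width tau_s per spike:
    s += -s / tau_s * dt + spiked.sum() / (N * tau_s)
    rates.append(spiked.mean() / dt)
print(f"mean population rate: {np.mean(rates):.1f} Hz")
```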
Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-07-19 DOI: 10.1162/neco_a_01681
Vicky Zhu;Robert Rosenbaum
{"title":"Learning Fixed Points of Recurrent Neural Networks by Reparameterizing the Network Model","authors":"Vicky Zhu;Robert Rosenbaum","doi":"10.1162/neco_a_01681","DOIUrl":"10.1162/neco_a_01681","url":null,"abstract":"In computational neuroscience, recurrent neural networks are widely used to model neural activity and learning. In many studies, fixed points of recurrent neural networks are used to model neural responses to static or slowly changing stimuli, such as visual cortical responses to static visual stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. In parallel, training fixed points is a central topic in the study of deep equilibrium models in machine learning. A natural approach is to use gradient descent on the Euclidean space of weights. We show that this approach can lead to poor learning performance due in part to singularities that arise in the loss surface. We use a reparameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We demonstrate that these learning rules avoid singularities and learn more effectively than standard gradient descent. The new learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should be expected to follow the negative Euclidean gradient of synaptic weights.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 8","pages":"1568-1600"},"PeriodicalIF":2.7,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141728314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
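The baseline setup the paper starts from can be written compactly: relax the network to a fixed point of x = tanh(Wx + b + u), score the fixed point with a loss, and descend the Euclidean gradient of the weights. The sketch below does exactly this, with a finite-difference gradient for transparency; it is the naive baseline whose pathologies motivate the paper's reparameterized rules, which are not reproduced here. All names and parameter values are assumptions.

```python
import numpy as np

def relax_to_fixed_point(W, b, u, n_iter=200):
    """Iterate x <- tanh(W @ x + b + u) until (approximately) fixed."""
    x = np.zeros(W.shape[0])
    for _ in range(n_iter):
        x = np.tanh(W @ x + b + u)
    return x

rng = np.random.default_rng(3)
n = 10
W = 0.1 * rng.standard_normal((n, n))
b = np.zeros(n)
u = rng.standard_normal(n)           # static input
target = rng.uniform(-0.5, 0.5, n)   # desired fixed-point activity

def loss(W):
    x = relax_to_fixed_point(W, b, u)
    return 0.5 * np.sum((x - target) ** 2)

# Plain Euclidean gradient descent on the fixed-point loss,
# with a numerical (finite-difference) gradient.
eps, lr = 1e-5, 0.2
for step in range(30):
    base = loss(W)
    grad = np.zeros_like(W)
    for i in range(n):
        for j in range(n):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (loss(Wp) - base) / eps
    W -= lr * grad
print(f"final loss: {loss(W):.4f}")
```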
A Mean Field to Capture Asynchronous Irregular Dynamics of Conductance-Based Networks of Adaptive Quadratic Integrate-and-Fire Neuron Models
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-06-07 DOI: 10.1162/neco_a_01670
Christoffer G. Alexandersen;Chloé Duprat;Aitakin Ezzati;Pierre Houzelstein;Ambre Ledoux;Yuhong Liu;Sandra Saghir;Alain Destexhe;Federico Tesler;Damien Depannemaecker
{"title":"A Mean Field to Capture Asynchronous Irregular Dynamics of Conductance-Based Networks of Adaptive Quadratic Integrate-and-Fire Neuron Models","authors":"Christoffer G. Alexandersen;Chloé Duprat;Aitakin Ezzati;Pierre Houzelstein;Ambre Ledoux;Yuhong Liu;Sandra Saghir;Alain Destexhe;Federico Tesler;Damien Depannemaecker","doi":"10.1162/neco_a_01670","DOIUrl":"10.1162/neco_a_01670","url":null,"abstract":"Mean-field models are a class of models used in computational neuroscience to study the behavior of large populations of neurons. These models are based on the idea of representing the activity of a large number of neurons as the average behavior of mean-field variables. This abstraction allows the study of large-scale neural dynamics in a computationally efficient and mathematically tractable manner. One of these methods, based on a semianalytical approach, has previously been applied to different types of single-neuron models, but never to models based on a quadratic form. In this work, we adapted this method to quadratic integrate-and-fire neuron models with adaptation and conductance-based synaptic interactions. We validated the mean-field model by comparing it to the spiking network model. This mean-field model should be useful to model large-scale activity based on quadratic neurons interacting with conductance-based synapses.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 7","pages":"1433-1448"},"PeriodicalIF":2.7,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
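For orientation, the best-known exact mean field for QIF networks, the current-based, nonadaptive case of Montbrió, Pazó, and Roxin (2015), reduces the population to two ODEs for the firing rate r and mean voltage v. The sketch below integrates that classic system; the conductance-based, adaptive extension developed in the paper is not reproduced here, and the parameter values are arbitrary.

```python
import numpy as np

# Exact mean field for current-based QIF networks with Lorentzian
# heterogeneity (Montbrio, Pazo & Roxin, 2015):
#   tau dr/dt = Delta/(pi*tau) + 2*r*v
#   tau dv/dt = v^2 + eta_bar + J*tau*r - (pi*tau*r)^2
tau, delta, eta_bar, J = 1.0, 1.0, -5.0, 15.0
dt, T = 1e-3, 40.0
r, v = 0.0, -2.0
trace = []
for _ in range(int(T / dt)):
    dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
    dv = (v**2 + eta_bar + J * tau * r - (np.pi * tau * r)**2) / tau
    r, v = r + dt * dr, v + dt * dv
    trace.append(r)
print(f"late-time firing rate: {np.mean(trace[-1000:]):.3f}")
```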
Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes
IF 2.7 | CAS Tier 4 | Computer Science
Neural Computation Pub Date: 2024-06-07 DOI: 10.1162/neco_a_01668
Sören Christensen;Jan Kallsen
{"title":"Is Learning in Biological Neural Networks Based on Stochastic Gradient Descent? An Analysis Using Stochastic Processes","authors":"Sören Christensen;Jan Kallsen","doi":"10.1162/neco_a_01668","DOIUrl":"10.1162/neco_a_01668","url":null,"abstract":"In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this note, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 7","pages":"1424-1432"},"PeriodicalIF":2.7,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140805851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
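The note's claim, that many local stochastic updates aggregate into a gradient step, can be illustrated with a weight-perturbation scheme: each update uses only a random perturbation and the resulting loss change (purely local information), yet the average of many such updates approximates the negative gradient. This is an illustrative estimator chosen for the demonstration, not necessarily the paper's exact stochastic model.

```python
import numpy as np

# Many noisy, purely local updates whose *average* is a gradient step:
# perturb the weights and reinforce by the resulting loss change.
rng = np.random.default_rng(4)
w = np.array([2.0, -1.0])
loss = lambda w: np.sum((w - np.array([0.5, 0.5]))**2)

sigma, lr, k = 0.1, 0.05, 2000
delta_w = np.zeros_like(w)
for _ in range(k):  # k local updates per "learning opportunity"
    xi = rng.standard_normal(2) * sigma
    delta_w += -lr * (loss(w + xi) - loss(w)) / sigma**2 * xi
delta_w /= k

true_grad = 2 * (w - np.array([0.5, 0.5]))
print("averaged local update:", delta_w)
print("gradient step:        ", -lr * true_grad)
```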