Neural Computation — Latest Articles

Multilevel Data Representation for Training Deep Helmholtz Machines
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-04-17. DOI: 10.1162/neco_a_01748
Jose Miguel Ramos; Luis Sa-Couto; Andreas Wichert
Abstract: A vast majority of the current research in the field of machine learning is done using algorithms with strong arguments pointing to their biological implausibility, such as backpropagation, deviating the field’s focus from understanding its original organic inspiration to a compulsive search for optimal performance. Yet there have been a few proposed models that respect most of the biological constraints present in the human brain and are valid candidates for mimicking some of its properties and mechanisms. In this letter, we focus on guiding the learning of a biologically plausible generative model called the Helmholtz machine in complex search spaces using a heuristic based on the human image perception mechanism. We hypothesize that this model’s learning algorithm is not fit for deep networks due to its Hebbian-like local update rule, rendering it incapable of taking full advantage of the compositional properties that multilayer networks provide. We propose to overcome this problem by providing the network’s hidden layers with visual cues at different resolutions using multilevel data representation. The results on several image data sets showed that the model was able to not only obtain better overall quality but also a wider diversity in the generated images, corroborating our intuition that using our proposed heuristic allows the model to take more advantage of the network’s depth growth. More important, they show the unexplored possibilities underlying brain-inspired models and techniques.
Volume 37(5), pp. 1010–1033.
Citations: 0
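The abstract describes feeding the network's hidden layers multiresolution views of the input. A minimal sketch of one way to build such a multilevel representation, assuming simple 2×2 average pooling; the function names and the pooling choice are illustrative, not the authors' implementation:

```python
import numpy as np

def downsample(img: np.ndarray) -> np.ndarray:
    """Halve resolution by 2x2 average pooling (assumes even dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multilevel_representation(img: np.ndarray, n_levels: int) -> list:
    """Pyramid of progressively coarser views, finest first.

    Level k could be presented to hidden layer k of a Helmholtz machine,
    so deeper layers receive coarser, more global structure.
    """
    levels = [img]
    for _ in range(n_levels - 1):
        levels.append(downsample(levels[-1]))
    return levels

# Example: a 28x28 MNIST-like image yields 28x28, 14x14, and 7x7 views.
pyramid = multilevel_representation(np.random.rand(28, 28), n_levels=3)
print([lvl.shape for lvl in pyramid])  # [(28, 28), (14, 14), (7, 7)]
```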
The Leaky Integrate-and-Fire Neuron Is a Change-Point Detector for Compound Poisson Processes
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-04-17. DOI: 10.1162/neco_a_01750
Shivaram Mani; Paul Hurley; André van Schaik; Travis Monk
Abstract: Animal nervous systems can detect changes in their environments within hundredths of a second. They do so by discerning abrupt shifts in sensory neural activity. Many neuroscience studies have employed change-point detection (CPD) algorithms to estimate such abrupt shifts in neural activity. But very few studies have suggested that spiking neurons themselves are online change-point detectors. We show that a leaky integrate-and-fire (LIF) neuron implements an online CPD algorithm for a compound Poisson process. We quantify the CPD performance of an LIF neuron under various regions of its parameter space. We show that CPD can be a recursive algorithm where the output of one algorithm can be input to another. Then we show that a simple feedforward network of LIF neurons can quickly and reliably detect very small changes in input spiking rates. For example, our network detects a 5% change in input rates within 20 ms on average, and false-positive detections are extremely rare. In a rigorous statistical context, we interpret the salient features of the LIF neuron: its membrane potential, synaptic weight, time constant, resting potential, action potentials, and threshold. Our results potentially generalize beyond the LIF neuron model and its associated CPD problem. If spiking neurons perform change-point detection on their inputs, then the electrophysiological properties of their membranes must be related to the spiking statistics of their inputs. We demonstrate one example of this relationship for the LIF neuron and compound Poisson processes and suggest how to test this hypothesis more broadly. Maybe neurons are not noisy devices whose action potentials must be averaged over time or populations. Instead, neurons might implement sophisticated, optimal, and online statistical algorithms on their inputs.
Volume 37(5), pp. 926–956.
Citations: 0
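A minimal sketch of the core idea: an LIF neuron whose membrane potential sits just below threshold under a baseline Poisson input rate fires soon after the rate steps up. All parameters here are illustrative, not the values analyzed in the paper, and a lone neuron produces occasional false alarms; the paper's feedforward network is what suppresses them.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, t_change = 1e-4, 1.0, 0.5       # seconds
r0, r1 = 1000.0, 1500.0                # input rate steps up at t_change
tau, w, theta = 0.1, 0.009, 1.0        # tuned so mean potential sits just below threshold at r0

v = 0.0
detections = []
for i in range(int(T / dt)):
    t = i * dt
    rate = r0 if t < t_change else r1
    v += dt * (-v / tau) + w * rng.poisson(rate * dt)   # leaky integration of input spikes
    if v >= theta:                     # threshold crossing = "change detected"
        detections.append(t)
        v = 0.0                        # reset

false_alarms = [t for t in detections if t < t_change]
hits = [t for t in detections if t >= t_change]
print(f"false alarms before change: {len(false_alarms)}")
if hits:
    print(f"first detection {1000 * (hits[0] - t_change):.1f} ms after the change")
```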
Knowledge as a Breaking of Ergodicity
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01741
Yang He; Vassiliy Lubchenko
Abstract: We construct a thermodynamic potential that can guide training of a generative model defined on a set of binary degrees of freedom. We argue that upon reduction in description, so as to make the generative model computationally manageable, the potential develops multiple minima. This is mirrored by the emergence of multiple minima in the free energy proper of the generative model itself. The variety of training samples that employ N binary degrees of freedom is ordinarily much lower than the size 2^N of the full phase space. The nonrepresented configurations, we argue, should be thought of as comprising a high-temperature phase separated by an extensive energy gap from the configurations composing the training set. Thus, training amounts to sampling a free energy surface in the form of a library of distinct bound states, each of which breaks ergodicity. The ergodicity breaking prevents escape into the near continuum of states comprising the high-temperature phase; thus, it is necessary for proper functionality. It may, however, have the side effect of limiting access to patterns that were underrepresented in the training set. At the same time, the ergodicity breaking within the library complicates both learning and retrieval. As a remedy, one may concurrently employ multiple generative models—up to one model per free energy minimum.
Volume 37(4), pp. 742–792.
Citations: 0
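A quick back-of-envelope check of the abstract's central size argument; the binarized-MNIST-scale numbers are an assumption chosen purely for illustration:

```python
from math import log2

N = 28 * 28                 # binary degrees of freedom (e.g., binarized MNIST)
n_samples = 60_000          # training-set size
print(f"log2 |phase space|   = {N}")
print(f"log2 |training set| ~= {log2(n_samples):.1f}")
# Roughly 2^16 represented configurations out of 2^784: virtually the entire
# phase space belongs to the nonrepresented "high-temperature phase."
```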
Active Inference and Intentional Behavior
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01738
Karl J. Friston; Tommaso Salvatori; Takuya Isomura; Alexander Tschantz; Alex Kiefer; Tim Verbelen; Magnus Koudahl; Aswin Paul; Thomas Parr; Adeel Razi; Brett J. Kagan; Christopher L. Buckley; Maxwell J. D. Ramstead
Abstract: Recent advances in theoretical biology suggest that key definitions of basal cognition and sentient behavior may arise as emergent properties of in vitro cell cultures and neuronal networks. Such neuronal networks reorganize activity to demonstrate structured behaviors when embodied in structured information landscapes. In this article, we characterize this kind of self-organization through the lens of the free energy principle, that is, as self-evidencing. We do this by first discussing the definitions of reactive and sentient behavior in the setting of active inference, which describes the behavior of agents that model the consequences of their actions. We then introduce a formal account of intentional behavior that describes agents as driven by a preferred end point or goal in latent state-spaces. We then investigate these forms of (reactive, sentient, and intentional) behavior using simulations. First, we simulate the in vitro experiments, in which neuronal cultures modulated activity to improve gameplay in a simplified version of Pong by implementing nested, free energy minimizing processes. The simulations are then used to deconstruct the ensuing predictive behavior, leading to the distinction between merely reactive, sentient, and intentional behavior with the latter formalized in terms of inductive inference. This distinction is further studied using simple machine learning benchmarks (navigation in a grid world and the Tower of Hanoi problem) that show how quickly and efficiently adaptive behavior emerges under an inductive form of active inference.
Volume 37(4), pp. 666–700.
Citations: 0
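A drastically simplified, one-step caricature of goal-directed ("intentional") action selection in a grid world: the agent scores each action by the negative log preference of the state it predicts to reach, a degenerate special case of expected free energy when transitions are known and deterministic. This toy is a stand-in to make the idea concrete; the grid size, preference prior, and greedy one-step horizon are all assumptions, and the paper's inductive active inference is far richer.

```python
import numpy as np

SIZE = 5
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
goal = (4, 4)

# Preference prior C over states, graded so it peaks on the goal.
dist = np.array([[abs(r - goal[0]) + abs(c - goal[1]) for c in range(SIZE)]
                 for r in range(SIZE)])
C = np.exp(-dist)
C /= C.sum()

def step(state, move):
    r = min(max(state[0] + move[0], 0), SIZE - 1)
    c = min(max(state[1] + move[1], 0), SIZE - 1)
    return (r, c)

def efe(state, move):
    # With known, deterministic transitions the risk (KL) term of expected
    # free energy collapses to the negative log preference of the predicted
    # state, and the ambiguity term vanishes.
    return -np.log(C[step(state, move)])

state, trajectory = (0, 0), [(0, 0)]
for _ in range(20):                     # safety bound on episode length
    if state == goal:
        break
    best = min(ACTIONS, key=lambda a: efe(state, ACTIONS[a]))
    state = step(state, ACTIONS[best])
    trajectory.append(state)
print(trajectory)                       # greedy descent of -log C to the goal
```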
Learning in Wilson-Cowan Model for Metapopulation
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01744
Raffaele Marino; Lorenzo Buffoni; Lorenzo Chicchi; Francesca Di Patti; Diego Febbe; Lorenzo Giambagli; Duccio Fanelli
Abstract: The Wilson-Cowan model for metapopulation, a neural mass network model, treats different subcortical regions of the brain as connected nodes, with connections representing various types of structural, functional, or effective neuronal connectivity between these regions. Each region comprises interacting populations of excitatory and inhibitory cells, consistent with the standard Wilson-Cowan model. In this article, we show how to incorporate stable attractors into such a metapopulation model’s dynamics. By doing so, we transform the neural mass network model into a biologically inspired learning algorithm capable of solving different classification tasks. We test it on MNIST and Fashion MNIST in combination with convolutional neural networks, as well as on CIFAR-10 and TF-FLOWERS, and in combination with a transformer architecture (BERT) on IMDB, consistently achieving high classification accuracy.
Volume 37(4), pp. 701–741.
Citations: 0
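The node dynamics underlying this model are the standard Wilson-Cowan equations, with nodes coupled through their excitatory populations. A minimal simulation of such a metapopulation, using random connectivity and textbook parameter values for illustration; the paper's trained attractor construction is not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
n_nodes = 8
W = rng.uniform(0, 0.5, (n_nodes, n_nodes))   # inter-node E->E coupling
np.fill_diagonal(W, 0.0)

# Standard within-node Wilson-Cowan couplings.
w_ee, w_ei, w_ie, w_ii = 16.0, 12.0, 15.0, 3.0
tau_e, tau_i, dt = 1.0, 2.0, 0.01

E = rng.uniform(0.1, 0.2, n_nodes)            # excitatory activity per node
I = rng.uniform(0.1, 0.2, n_nodes)            # inhibitory activity per node
ext = 1.0                                     # external drive (e.g., a stimulus)

for _ in range(5000):
    net_input = W @ E                         # long-range excitation between regions
    dE = (-E + sigmoid(w_ee * E - w_ei * I + net_input + ext)) / tau_e
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
    E += dt * dE
    I += dt * dI

print("final E activity per node:", np.round(E, 3))
```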
Nearly Optimal Learning Using Sparse Deep ReLU Networks in Regularized Empirical Risk Minimization With Lipschitz Loss
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01742
Ke Huang; Mingming Liu; Shujie Ma
Abstract: We propose a sparse deep ReLU network (SDRN) estimator of the regression function obtained from regularized empirical risk minimization with a Lipschitz loss function. Our framework can be applied to a variety of regression and classification problems. We establish novel nonasymptotic excess risk bounds for our SDRN estimator when the regression function belongs to a Sobolev space with mixed derivatives. We obtain a new, nearly optimal, risk rate in the sense that the SDRN estimator can achieve nearly the same optimal minimax convergence rate as one-dimensional nonparametric regression with the dimension involved in a logarithm term only when the feature dimension is fixed. The estimator has a slightly slower rate when the dimension grows with the sample size. We show that the depth of the SDRN estimator grows with the sample size in logarithmic order, and the total number of nodes and weights grows in polynomial order of the sample size to have the nearly optimal risk rate. The proposed SDRN can go deeper with fewer parameters to well estimate the regression and overcome the overfitting problem encountered by conventional feedforward neural networks.
Volume 37(4), pp. 815–870.
Citations: 0
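A sketch of the general recipe the abstract names: regularized empirical risk minimization with a Lipschitz loss (Huber) plus an L1 penalty that pushes a deep ReLU network toward sparsity. The architecture, hyperparameters, and synthetic data are illustrative; the paper's specific SDRN construction and its depth/width scalings are not reproduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 5)                        # features
y = torch.sin(X.sum(dim=1, keepdim=True))     # smooth regression target

net = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
loss_fn = nn.SmoothL1Loss()                   # Huber: a Lipschitz loss
lam = 1e-4                                    # L1 regularization strength
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(2000):
    opt.zero_grad()
    risk = loss_fn(net(X), y)                 # empirical risk
    l1 = sum(p.abs().sum() for p in net.parameters())
    (risk + lam * l1).backward()              # regularized objective
    opt.step()

n_small = sum((p.abs() < 1e-3).sum().item() for p in net.parameters())
n_total = sum(p.numel() for p in net.parameters())
print(f"final empirical risk {risk.item():.4f}; near-zero weights: {n_small}/{n_total}")
```

Note that gradient descent on an L1 penalty yields near-zero rather than exactly zero weights; a proximal update or post-hoc thresholding would give exact sparsity.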
Context-Sensitive Processing in a Model Neocortical Pyramidal Cell With Two Sites of Input Integration
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01739
Bruce P. Graham; Jim W. Kay; William A. Phillips
Abstract: Neocortical layer 5 thick-tufted pyramidal cells are prone to exhibiting burst firing on receipt of coincident basal and apical dendritic inputs. These inputs carry different information, with basal inputs coming from feedforward sensory pathways and apical inputs coming from diverse sources that provide context in the cortical hierarchy. We explore the information processing possibilities of this burst firing using computer simulations of a noisy compartmental cell model. Simulated data on stochastic burst firing due to brief, simultaneously injected basal and apical currents allow estimation of burst firing probability for different stimulus current amplitudes. Information-theory-based partial information decomposition (PID) is used to quantify the contributions of the apical and basal input streams to the information in the cell output bursting probability. Four different operating regimes are apparent, depending on the relative strengths of the input streams, with output burst probability carrying more or less information that is uniquely contributed by either the basal or apical input, or shared and synergistic information due to the combined streams. We derive and fit transfer functions for these different regimes that describe burst probability over the different ranges of basal and apical input amplitudes. The operating regimes can be classified into distinct modes of information processing, depending on the contribution of apical input to output bursting: apical cooperation, in which both basal and apical inputs are required to generate a burst; apical amplification, in which basal input alone can generate a burst but the burst probability is modulated by apical input; apical drive, in which apical input alone can produce a burst; and apical integration, in which strong apical or basal inputs alone, as well as their combination, can generate bursting. In particular, PID and the transfer function clarify that the apical amplification mode has the features required for contextually modulated information processing.
Volume 37(4), pp. 588–634.
Citations: 0
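A purely illustrative transfer function for the "apical amplification" regime described above: basal input alone can drive bursting, while apical input scales the burst probability multiplicatively. The functional form and constants are assumptions chosen to exhibit the qualitative behavior, not the fitted functions from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def burst_probability(basal, apical, k_b=1.0, theta_b=2.0, k_a=0.8):
    base = sigmoid(k_b * (basal - theta_b))          # basal alone can generate a burst
    modulation = 0.5 + 0.5 * sigmoid(k_a * apical)   # apical input scales it upward
    return base * modulation

for apical in (0.0, 2.0, 4.0):
    probs = [burst_probability(b, apical) for b in (1.0, 2.0, 3.0)]
    print(f"apical={apical}: P(burst) at basal 1, 2, 3 =", np.round(probs, 3))
```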
Enhanced EEG Forecasting: A Probabilistic Deep Learning Approach
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01743
Hanna Pankka; Jaakko Lehtinen; Risto J. Ilmoniemi; Timo Roine
Abstract: Forecasting electroencephalography (EEG) signals, that is, estimating future values of the time series based on the past ones, is essential in many real-time EEG-based applications, such as brain–computer interfaces and closed-loop brain stimulation. As these applications are becoming more and more common, the importance of a good prediction model has increased. Previously, the autoregressive model (AR) has been employed for this task; however, its prediction accuracy tends to fade quickly as multiple steps are predicted. We aim to improve on this by applying probabilistic deep learning to make robust longer-range forecasts. For this, we applied the probabilistic deep neural network model WaveNet to forecast resting-state EEG in theta- (4–7.5 Hz) and alpha-frequency (8–13 Hz) bands and compared it to the AR model. WaveNet reliably predicted EEG signals in both theta and alpha frequencies 150 ms ahead, with mean absolute errors of 1.0 ± 1.1 µV (theta) and 0.9 ± 1.1 µV (alpha), and outperformed the AR model in estimating the signal amplitude and phase. Furthermore, we found that the probabilistic approach offers a way of forecasting even more accurately while effectively discarding uncertain predictions. We demonstrate for the first time that probabilistic deep learning can be used to forecast resting-state EEG time series. In the future, the developed model can enhance the real-time estimation of brain states in brain–computer interfaces and brain stimulation protocols. It may also be useful for answering neuroscientific questions and for diagnostic purposes.
Volume 37(4), pp. 793–814.
Citations: 0
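A sketch of the AR baseline the paper compares against: fit an AR(p) model by least squares, then forecast recursively, which is exactly why its errors compound over a multi-step horizon. The synthetic 10 Hz "alpha-like" signal and the order p=30 are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                     # sampling rate (Hz)
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

p = 30                                        # AR order
# Lag matrix: predict x[n] from x[n-1] ... x[n-p].
rows = np.array([x[i:i + p][::-1] for i in range(x.size - p)])
coeffs, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)

# Recursive 150-sample (150 ms) forecast from the end of the signal:
# each prediction is fed back in as input, so errors accumulate.
history = list(x[-p:])
forecast = []
for _ in range(150):
    nxt = float(np.dot(coeffs, history[::-1][:p]))
    forecast.append(nxt)
    history.append(nxt)
print("first 5 forecast samples:", np.round(forecast[:5], 3))
```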
Spiking Neuron-Astrocyte Networks for Image Recognition
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-18. DOI: 10.1162/neco_a_01740
Jhunlyn Lorenzo; Juan-Antonio Rico-Gallego; Stéphane Binczak; Sabir Jacquir
Abstract: From biological and artificial network perspectives, researchers have started acknowledging astrocytes as computational units mediating neural processes. Here, we propose a novel biologically inspired neuron-astrocyte network model for image recognition, one of the first attempts at implementing astrocytes in spiking neuron networks (SNNs) using a standard data set. The architecture for image recognition has three primary units: the preprocessing unit for converting the image pixels into spiking patterns, the neuron-astrocyte network forming bipartite (neural connections) and tripartite synapses (neural and astrocytic connections), and the classifier unit. In the astrocyte-mediated SNNs, an astrocyte integrates neural signals following the simplified Postnov model. It then modulates the integrate-and-fire (IF) neurons via gliotransmission, thereby strengthening the synaptic connections of the neurons within the astrocytic territory. We develop an architecture derived from a baseline SNN model for unsupervised digit classification. The spiking neuron-astrocyte networks (SNANs) display better network performance with an optimal variance-bias trade-off than SNN alone. We demonstrate that astrocytes promote faster learning, support memory formation and recognition, and provide a simplified network architecture. Our proposed SNAN can serve as a benchmark for future researchers on astrocyte implementation in artificial networks, particularly in neuromorphic systems, for its simplified design.
Volume 37(4), pp. 635–665.
Citations: 0
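A minimal caricature of the mechanism the abstract describes: an astrocyte variable slowly integrates the spiking of the IF neurons in its "territory" and, via gliotransmission, scales their synaptic weights upward. This generic slow-feedback loop is an assumption for illustration; the paper's simplified Postnov astrocyte model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 20, 1e-3, 3000
tau_m, theta = 0.02, 1.0               # IF membrane time constant and threshold
tau_a, k_a, g = 0.5, 0.002, 0.2        # astrocyte time constant, drive, and gain

w = rng.uniform(0.1, 0.2, n)           # baseline input synaptic weights
v = np.zeros(n)                        # membrane potentials
a = 0.0                                # astrocyte activation
total_spikes = 0

for _ in range(steps):
    pre = rng.poisson(600 * dt, n)     # presynaptic Poisson spike counts per bin
    w_eff = w * (1.0 + g * a)          # gliotransmission strengthens synapses
    v += dt * (-v / tau_m) + w_eff * pre
    spikes = v >= theta
    v[spikes] = 0.0                    # reset after a spike
    total_spikes += int(spikes.sum())
    a += dt * (-a / tau_a) + k_a * spikes.sum()  # astrocyte integrates local spiking

print(f"astrocyte activation {a:.2f} -> weight gain x{1 + g * a:.2f}; "
      f"mean rate {total_spikes / (n * steps * dt):.1f} Hz")
```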
Dynamics of Continuous Attractor Neural Networks With Spike Frequency Adaptation
IF 2.7 | CAS Quartile 4 | Computer Science
Neural Computation. Pub Date: 2025-03-14. DOI: 10.1162/neco_a_01757
Yujun Li; Tianhao Chu; Si Wu
Abstract: Attractor neural networks consider that neural information is stored as stationary states of a dynamical system formed by a large number of interconnected neurons. The attractor property empowers a neural system to encode information robustly, but it also incurs the difficulty of rapid update of network states, which can impair information update and search in the brain. To overcome this difficulty, a solution is to include adaptation in the attractor network dynamics, whereby the adaptation serves as a slow negative feedback mechanism to destabilize what are otherwise permanently stable states. In such a way, the neural system can, on one hand, represent information reliably using attractor states, and on the other hand, perform computations wherever rapid state updating is involved. Previous studies have shown that continuous attractor neural networks with adaptation (A-CANNs) exhibit rich dynamical behaviors accounting for various brain functions. In this review, we present a comprehensive view of the rich diverse dynamics of A-CANNs. Moreover, we provide a unified mathematical framework to understand these different dynamical behaviors and briefly discuss their biological implications.
Volume 37(6), pp. 1057–1101.
Citations: 0
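A sketch of the kind of dynamics this review covers: a 1D continuous attractor network on a ring, where a slow adaptation variable v tracks the synaptic input u and feeds back negatively, which can destabilize a static activity bump into a traveling wave. The model form (divisively normalized CANN with adaptation) is standard in this literature, but the specific parameter values here are illustrative; whether the bump stays put or drifts depends on the adaptation strength m relative to the time constants.

```python
import numpy as np

N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
a, k, J0 = 0.5, 0.05, 1.0              # kernel width, global inhibition, coupling

# Recurrent Gaussian-like kernel on the ring (wrapped distance).
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))

tau_u, tau_v, m, dt = 1.0, 5.0, 0.3, 0.05
u = np.exp(-x**2 / (2 * a**2))         # initial bump centered at x = 0
v = np.zeros(N)                        # slow adaptation variable

for _ in range(4000):
    r = np.maximum(u, 0.0) ** 2
    r /= 1.0 + k * dx * r.sum()        # divisive global inhibition
    du = (-u + dx * (J @ r) - v) / tau_u
    dv = (-v + m * u) / tau_v          # adaptation slowly tracks u
    u += dt * du
    v += dt * dv

# With sufficiently strong adaptation the bump drifts away from x = 0.
print("final bump peak position:", round(float(x[np.argmax(u)]), 3))
```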