Neural Computation: Latest Articles

Infinite Horizon Control With Nonlinear Dynamics Models Reproduces Temporal Modulation of Reaching Movements
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-30 Epub Date: 2026-03-08 DOI: 10.1162/NECO.a.1515
Antoine De Comite;Hari Teja Kalidindi;J. Andrew Pruszynski;Frédéric Crevecoeur
Abstract: Movement duration, a fundamental aspect of motor control, is often viewed as a preprogrammed parameter requiring dedicated selection mechanisms. An alternative view posits that movement duration emerges from the control policy itself. Here, we demonstrate, using infinite horizon optimal feedback control (IHOFC) and nonlinear limb dynamics, that this alternative hypothesis successfully captures diverse aspects of human reaching behavior, including trade-offs between movement duration and task parameters. Specifically, we reproduced the modulation of movement duration with varying reach distances and accuracy (Fitts's law) in the presence of nonlinear dynamics, and extended the infinite horizon framework to include the effect of rewards and biomechanical costs. Furthermore, our model also featured a temporal evolution of feedback responses to perturbations that resembles experimental observations and naturally accounted for motor decisions observed when participants select one among multiple goals in dynamic environments. Together, these developments show that in many cases, movement duration may not need to be specified a priori, but instead could result from task-dependent control policies. This framework validates a candidate explanation for varied movement durations, which invites a reconsideration of the nature and strength of evidence for the finite horizon formulation.
Neural Computation, 38(5): 823-844.
Citations: 0
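The speed-accuracy trade-off the abstract refers to (Fitts's law) can be stated in one line. The sketch below uses the classical formula with made-up illustrative constants, not the paper's optimal-control model:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Classical Fitts's law: MT = a + b * log2(2D / W).
    The constants a, b are illustrative, not fitted values from the paper."""
    return a + b * math.log2(2.0 * distance / width)

# Longer reaches and tighter accuracy demands both lengthen movement time.
mt_baseline = fitts_movement_time(distance=0.10, width=0.04)
mt_farther = fitts_movement_time(distance=0.30, width=0.04)
mt_narrower = fitts_movement_time(distance=0.10, width=0.01)
```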
Echoes of the Past: A Unified Perspective on Fading Memory and Echo States
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-30 Epub Date: 2026-03-08 DOI: 10.1162/NECO.a.1510
Juan-Pablo Ortega;Florian Rossmannek
Abstract: Recurrent neural networks (RNNs) have become increasingly popular in information processing tasks involving time series and temporal data. A fundamental property of RNNs is their ability to create reliable input/output responses, often linked to how the network handles its memory of the information it processed. Various notions have been proposed to conceptualize the behavior of memory in RNNs, including steady states, echo states, state forgetting, input forgetting, and fading memory. Although these notions are often used interchangeably, their precise relationships remain unclear. This work aims to unify these notions in a common language, derive new implications and equivalences between them, and provide alternative proofs to some existing results. By clarifying the relationships between these concepts, this research contributes to a deeper understanding of RNNs and their temporal information processing capabilities.
Neural Computation, 38(5): 765-782.
Citations: 0
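The echo state property discussed in the abstract can be seen in a few lines: a contracting reservoir driven by the same input sequence forgets its initial state. The sketch below enforces an operator-norm bound, which is a sufficient (not necessary) condition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random reservoir scaled so its operator norm is below 1; together
# with the 1-Lipschitz tanh this guarantees a contraction (a sufficient,
# not necessary, condition for the echo state property).
n = 50
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)
w_in = rng.standard_normal(n)

def run(x0, inputs):
    """Drive the reservoir from initial state x0 with a scalar input sequence."""
    x = x0.copy()
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
    return x

inputs = rng.standard_normal(200)
xa = run(rng.standard_normal(n), inputs)
xb = run(rng.standard_normal(n), inputs)
gap = np.linalg.norm(xa - xb)  # tiny: the initial condition has faded
```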
Similarity Matching Networks: Hebbian Learning and Convergence Over Multiple Timescales
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-30 Epub Date: 2026-03-08 DOI: 10.1162/NECO.a.1509
Veronica Centorrino;Francesco Bullo;Giovanni Russo
Abstract: A recent breakthrough in biologically plausible normative frameworks for dimensionality reduction is based on the similarity matching cost function and the low-rank matrix approximation problem. Despite clear biological interpretation, successful application in several domains, and experimental validation, a formal complete convergence analysis remains elusive. Building on this framework, we consider and analyze a continuous-time neural network, the similarity matching network, for principal subspace projection. Derived from a min-max-min objective, this biologically plausible network consists of three coupled dynamics evolving at different timescales: neural dynamics, lateral synaptic dynamics, and feedforward synaptic dynamics at the fast, intermediate, and slow timescales, respectively. The feedforward and lateral synaptic dynamics consist of Hebbian and anti-Hebbian learning rules, respectively. By leveraging a multilevel optimization framework, we prove convergence of the dynamics in the offline setting. Specifically, at the first level (fast timescale), we show strong convexity of the cost function and global exponential convergence of the corresponding gradient-flow dynamics. At the second level (intermediate timescale), we prove strong concavity of the cost function and exponential convergence of the corresponding gradient-flow dynamics within the space of positive definite matrices. At the third and final level (slow timescale), we study a nonconvex and nonsmooth cost function, provide explicit expressions for its global minima, and prove almost sure convergence of the corresponding gradient-flow dynamics to the global minima. These results rely on two empirically motivated conjectures that are supported by thorough numerical experiments. Finally, we validate the effectiveness of our approach via a numerical example.
Neural Computation, 38(5): 725-764.
Citations: 0
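The similarity matching objective leads to an online network with Hebbian feedforward and anti-Hebbian lateral updates. A minimal discrete-time sketch of that family of algorithms (a simplified single-step variant for illustration, not the paper's three-timescale continuous-time analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data in R^3 whose principal 2D subspace is span(e1, e2):
# independent components with standard deviations 3, 2, and 0.5.
X = np.array([3.0, 2.0, 0.5]) * rng.standard_normal((6000, 3))

k, d = 2, 3
W = rng.standard_normal((k, d)) / np.sqrt(d)  # feedforward (Hebbian) weights
M = np.eye(k)                                 # lateral (anti-Hebbian) weights
eta = 0.01

for x in X:
    y = np.linalg.solve(M, W @ x)        # fast neural dynamics at equilibrium
    W += eta * (np.outer(y, x) - W)      # Hebbian feedforward update
    M += 2 * eta * (np.outer(y, y) - M)  # lateral update (faster timescale)

# The neural filters F = M^{-1} W should span the principal subspace,
# i.e. contain almost none of the low-variance e3 direction.
F = np.linalg.solve(M, W)
_, _, Vt = np.linalg.svd(F)
leakage = np.abs(Vt[:2, 2]).max()  # e3 content of the learned row space
```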
Cognitive Control Strategies Derive From Dimension Reliability
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-30 DOI: 10.1162/NECO.a.1513
William H Alexander
Abstract: To explain behavioral effects, models of cognitive control frequently rely on task information that the modeler provides. Hard-wired information can include labeling task dimensions as being relevant or irrelevant, defining which task stimuli belong to which task dimensions, or proposing a specific strategy by which control is adjusted during task performance. Although models incorporating hard-wired information of this nature are frequently successful at accounting for observed behavior, their ability to do so often depends on tailoring this information to specific tasks, usually performed in a laboratory setting. Outside of the laboratory, individuals are not usually provided explicit information about how to behave; it thus remains an open question as to how individuals identify, update, and switch task strategies in the real world. Here, we present a new model of cognitive control, learned attention for control (LAC), that not only captures a broad range of control effects but does so using a minimal amount of modeler-supplied information. In a series of simulations, we demonstrate how the LAC model adopts distinct control strategies based on recent trial history and adapts to changing behavioral contexts. The model's ability to do so derives from an ongoing evaluation of how well task stimuli independently predict correct behavior, and the results of this evaluation are used to shift attention among information sources. These results suggest that the reliability of information can serve as a general principle for understanding cognitive control.
Neural Computation, pp. 681-724.
Citations: 0
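As a hedged illustration of the reliability principle described in the abstract (a hypothetical sketch, not the LAC model itself), one can track how well each stimulus dimension predicts the correct response and shift attention toward the more reliable dimension:

```python
import math
import random

random.seed(0)

# Two stimulus dimensions: dimension 0 predicts the correct response on
# 90% of trials, dimension 1 is uninformative (50%). A running estimate
# of each dimension's reliability drives attention through a softmax.
reliability = [0.5, 0.5]
alpha = 0.05  # step size of the running reliability estimate

for _ in range(2000):
    predicted_correctly = [random.random() < 0.9, random.random() < 0.5]
    for dim in range(2):
        reliability[dim] += alpha * (float(predicted_correctly[dim]) - reliability[dim])

temperature = 10.0
z = [math.exp(temperature * r) for r in reliability]
attention = [zi / sum(z) for zi in z]  # attention concentrates on dimension 0
```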
Signal-Dependent Planning Noise Reduces Task Interference by Assisting in the Formation of Stable Motor Primitives in a Neural Network Model of Muscle Coordination Learning
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-30 Epub Date: 2026-03-08 DOI: 10.1162/NECO.a.1512
Daniel W. Feng;David J. Reinkensmeyer;Juan C. Perez-Ibarra
Abstract: In human motor coordination, learning to coactivate multiple muscles at once to achieve distinct target combinations of forces or tasks remains a fundamental area of study. Task interference, where training on one task degrades performance on previously learned tasks, can slow motor learning. However, the neural mechanisms that reduce interference are not fully understood. We hypothesized that the structure of planning noise, specifically its signal-dependent nature, significantly shapes learning dynamics and limits interference within motor learning systems that rely on variability for exploration. To test this hypothesis, we developed a three-layer neural network model of muscle coordination informed by key neuroanatomical and neurophysiological principles and simulated learning for producing various combinations of muscle forces. Synaptic weights were stochastically altered from trial to trial with either fixed-variance planning noise (FVPN), where each connection's variance was fixed during learning, or signal-dependent planning noise (SDPN), where noise variance depended on the neuron population activity. Weights were reinforced when they reduced output error relative to target forces. An execution noise term, applied to the motor output, modeled peripheral motor variability. However, the learning rule was not informed about how much of the output corresponded to peripheral or central variability. Our results showed that SDPN improved both the rate and accuracy of multitask learning by reducing task interference compared to FVPN across network sizes, training schedules, and execution noise levels. SDPN achieved this by concentrating neural plasticity within the neuron populations engaged by the current task rather than modifying the entire network. This signal-dependent plasticity allowed multiple motor primitives to form, stabilize, and be reused for new tasks. The model replicated the well-documented benefit of interleaved versus blocked training in motor learning. As a computational proof of concept, this work suggests that SDPN can benefit multitask motor training by facilitating the formation of motor primitives.
Neural Computation, 38(5): 783-822.
Citations: 0
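The FVPN/SDPN distinction can be sketched directly: under SDPN the perturbation applied to each connection scales with presynaptic activity, so weights of disengaged units are spared (an illustrative sketch, not the paper's full three-layer model):

```python
import numpy as np

rng = np.random.default_rng(2)

def perturb_weights(W, presyn_activity, mode, scale=0.1):
    """Trial-to-trial exploration noise on a weight matrix.
    'fixed': every connection gets the same noise variance (FVPN).
    'signal': noise std scales with presynaptic activity (SDPN), so
    exploration concentrates on units engaged by the current task."""
    noise = scale * rng.standard_normal(W.shape)
    if mode == "signal":
        noise *= presyn_activity[None, :]
    return W + noise

W = np.zeros((3, 4))
activity = np.array([1.0, 0.0, 0.0, 1.0])  # only presynaptic units 0 and 3 active
W_sdpn = perturb_weights(W, activity, "signal")
# Connections from the inactive units 1 and 2 are left untouched.
```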
Comparing Dynamical Models Through Diffeomorphic Vector Field Alignment
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-23 DOI: 10.1162/NECO.a.1526
Ruiqi Chen, Giacomo Vedovati, Todd Braver, ShiNung Ching
Abstract: Dynamical systems models such as recurrent neural networks (RNNs) are increasingly popular in theoretical neuroscience as a vehicle for hypothesis generation and data analysis. Evaluating the dynamics in such models is key to understanding their learned generative mechanisms. However, such evaluation is impeded by two major challenges: (1) comparison of learned dynamics across models is difficult because a priori there is no enforced equivalence of their coordinate systems, and (2) identification of mechanistically important low-dimensional motifs (e.g., limit sets) is analytically intractable in high-dimensional nonlinear models such as RNNs. Here, we propose a comprehensive framework to address these two issues, termed diffeomorphic vector field alignment for learned models (DFORM). DFORM learns a nonlinear coordinate transformation between the state spaces of two dynamical systems, which aligns their trajectories in a maximally one-to-one manner. In so doing, DFORM enables an assessment of whether a set of models exhibits topological equivalence, that is, their dynamics are mechanistically similar despite differences in their coordinate systems. A by-product of this methodology is a means to locate dynamical motifs on low-dimensional manifolds embedded within higher-dimensional systems. We verified DFORM's ability to identify linear and nonlinear coordinate transformations using canonical topologically equivalent systems, RNNs, and systems related by nonlinear flows. DFORM was also shown to provide a quantification of similarity between topologically distinct systems. We then demonstrated that DFORM can locate important dynamical motifs including invariant manifolds and saddle limit sets within high-dimensional models. Finally, using a set of RNN models trained on human functional magnetic resonance imaging recordings, we illustrated that DFORM can identify limit cycles from high-dimensional data-driven models, which agreed well with prior numerical analysis.
Neural Computation, pp. 1-56.
Citations: 0
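The notion of equivalence DFORM tests can be illustrated in the simplest case, a linear change of coordinates: an alignment map T must push one vector field onto the other, g(Tx) = T f(x) for every state x. A sketch with linear toy systems (not DFORM's learned diffeomorphism):

```python
import numpy as np

# Two linear vector fields related by the coordinate change y = T x.
# Setting B = T A T^{-1} makes the systems smoothly equivalent, so the
# pushforward identity g(T x) = T f(x) holds exactly.
A = np.array([[-1.0, -2.0], [2.0, -1.0]])  # stable spiral
T = np.array([[2.0, 1.0], [0.0, 1.0]])     # invertible coordinate change
B = T @ A @ np.linalg.inv(T)

def f(x):
    return A @ x

def g(y):
    return B @ y

x = np.array([0.7, -1.3])
mismatch = np.linalg.norm(g(T @ x) - T @ f(x))  # ~0 for a true alignment
```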
Domain Adaptation With Additional Features via Label-Aware and Graph-Based Fused Gromov-Wasserstein Optimal Transport
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-23 DOI: 10.1162/NECO.a.1525
Toshimitsu Aritake, Hideitsu Hino
Abstract: In many domain adaptation tasks, the source and target domains share an identical feature space, so the domain gap arises only from the distributional shift. In practice, however, new-target-only features (e.g., sensors added after training) often become available at test time, violating the shared feature space assumption and invalidating most existing methods. We address this setting with Label-Aware and Graph-Based Fused Gromov-Wasserstein Optimal Transport (LAGB-FGW), focusing on a transductive domain adaptation scenario, in which the entire unlabeled target data set is available during training, and predictions are jointly inferred for all target samples. LAGB-FGW (1) embeds label discrepancy directly into the source metric, (2) constructs a K-NN graph on the full target feature space to capture structure introduced by the additional features, and (3) jointly solves standard Optimal Transport (OT) and Gromov-Wasserstein OT, thereby transferring labels using both the common and the additional features. We validate LAGB-FGW on four synthetic benchmarks and the HAR70+ human-activity data set, and LAGB-FGW consistently outperforms all baselines, highlighting the advantage of combining source label information with graph-based structural cues when additional target features are available only at test time.
Neural Computation, pp. 1-30.
Citations: 0
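A generic entropic OT solver already suffices to transfer labels when classes are well separated; LAGB-FGW couples a solver of this kind with label-aware costs and a Gromov-Wasserstein term. A minimal Sinkhorn sketch (the standard algorithm, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(3)

def sinkhorn(C, a, b, reg=0.05, iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.
    Returns a coupling P with (approximate) marginals a and b."""
    K = np.exp(-C / reg)
    v = np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Two well-separated source classes and unlabeled targets near each class.
Xs = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
ys = np.array([0] * 20 + [1] * 20)
Xt = np.vstack([rng.normal(0.2, 0.1, (15, 2)), rng.normal(4.8, 0.1, (15, 2))])

C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
P = sinkhorn(C, np.full(40, 1 / 40), np.full(30, 1 / 30))
# Each target inherits the label of the source sending it the most mass.
yt = ys[P.argmax(axis=0)]
```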
Autonomous Learning With High-Dimensional Computing Architecture Similar to Von Neumann's
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-23 DOI: 10.1162/NECO.a.1523
Pentti Kanerva
Abstract: We model human and animal learning by computing with high-dimensional vectors (e.g., D = 10,000). The architecture resembles traditional (von Neumann) computing with numbers, but the instructions refer to vectors and operate on them in superposition. The architecture includes a high-capacity memory for vectors, counterpart of the random-access memory (RAM) for numbers. The model's ability to learn from data reminds us of deep learning, but with an architecture closer to biology. The architecture agrees with an idea from psychology that human memory and learning involve a short-term working memory and a long-term data store. Neuroscience provides us with a model of the long-term memory, namely, the cortex of the cerebellum. With roots in psychology, biology, and traditional computing, a theory of computing with vectors can help us understand how brains compute. Application to learning by robots seems inevitable, but there is likely to be more, including language. Ultimately we want to compute with no more material and energy than brains use. To that end, we need a mathematical theory that agrees with psychology and biology and is suitable for nano-technology. We also need to exercise the theory in large-scale experiments. The analogy with traditional computing suggests that the architecture be programmable in terms of variables, values, and data structures, the very things that have made traditional computing ubiquitous and that seem worth learning from and emulating.
Neural Computation, pp. 1-19.
Citations: 0
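The core vector operations of such an architecture fit in a few lines. The sketch below encodes and queries a tiny record with bipolar vectors, a standard vector-symbolic illustration (not the paper's full architecture):

```python
import numpy as np

rng = np.random.default_rng(4)
D = 10_000  # high-dimensional bipolar vectors, as in the article

def rand_vec():
    return rng.choice([-1, 1], size=D)

def bind(a, b):    # elementwise product: associates a variable with a value
    return a * b

def bundle(*vs):   # majority vote: superposes several bound pairs
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):     # normalized dot product, ~0 for unrelated vectors
    return (a @ b) / D

# Encode the record {color: red, shape: square} as a single vector.
color, shape, red, square, blue = (rand_vec() for _ in range(5))
record = bundle(bind(color, red), bind(shape, square))
# Unbinding with the 'color' key recovers a vector close to 'red'
# and unrelated to 'blue'.
decoded = bind(record, color)
```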
Global Stability of a Hebbian/Anti-Hebbian Network for Principal Subspace Learning
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-23 DOI: 10.1162/NECO.a.1524
David Lipshutz, Robert J Lipshutz
Abstract: Biological neural networks self-organize according to local synaptic modifications to produce stable computations. How modifications at the synaptic level give rise to such computations at the network level remains an open question. Pehlevan et al. (2015) proposed a model of a self-organizing neural network with Hebbian and anti-Hebbian synaptic updates that implements an algorithm for principal subspace analysis; however, global stability of the nonlinear synaptic dynamics has not been established. Here, for the case that the feedforward and recurrent weights evolve at the same timescale, we prove global stability of the continuum limit of the synaptic dynamics and show that the dynamics evolve in two phases. In the first phase, the synaptic weights converge to an invariant manifold where the neural filters are orthonormal. In the second phase, the synaptic dynamics follow the gradient flow of a nonconvex potential function whose minima correspond to neural filters that span the principal subspace of the input data.
Neural Computation, pp. 1-24.
Citations: 0
Graphon Signal Processing for Spiking and Biological Neural Networks
IF 2.1, CAS Tier 4, Computer Science
Neural Computation Pub Date : 2026-04-23 DOI: 10.1162/NECO.a.1522
Takuma Sumi, Georgi S Medvedev
Abstract: Graph signal processing (GSP) extends classical signal processing to signals defined on graphs, enabling filtering, spectral analysis, and sampling of data generated by networks of various kinds. Graphon signal processing (GnSP) develops this framework further by employing the theory of graphons. Graphons are measurable functions on the unit square that represent graphs and limits of convergent graph sequences. The use of graphons provides stability of GSP methods to stochastic variability in network data and improves computational efficiency for very large networks. We use GnSP to address the stimulus identification problem (SIP) in computational and biological neural networks. The SIP is an inverse problem that aims to infer the unknown stimulus s from the observed network output f. We first validate the approach in spiking neural network simulations and then analyze calcium imaging recordings. Graphon-based spectral projections yield trial-invariant, low-dimensional embeddings that improve stimulus classification over principal component analysis and discrete GSP baselines. The embeddings remain stable under variations in network stochasticity, providing robustness to different network sizes and noise levels. To the best of our knowledge, this is the first application of GnSP to biological neural networks, opening new avenues for graphon-based analysis in neuroscience.
Neural Computation, pp. 1-26.
Citations: 0
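A graphon can be sampled directly: draw a latent coordinate per node and flip each edge with probability W(u_i, u_j). A minimal sketch with the separable graphon W(x, y) = xy (illustrative of the graphon-to-graph link, not the paper's GnSP pipeline):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_graph(W, n):
    """Sample an n-node undirected graph from graphon W: [0,1]^2 -> [0,1].
    Node i gets a latent coordinate u_i ~ Uniform[0,1]; edge (i, j)
    appears independently with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)          # keep one triangle, drop self-loops
    return A + A.T

W = lambda x, y: x * y  # a simple separable graphon
A = sample_graph(W, 400)
density = A.sum() / (400 * 399)
# Expected edge density is E[u]^2 = 1/4 for this graphon.
```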