On the Physical Underpinnings of the Unusual Effectiveness of Probabilistic and Neural Computation

S. Tiwari, D. Querlioz
{"title":"论概率和神经计算异常有效性的物理基础","authors":"S. Tiwari, D. Querlioz","doi":"10.1109/ICRC.2017.8123680","DOIUrl":null,"url":null,"abstract":"Probabilistic and neural approaches, through their incorporation of nonlinearities and compression of states, enable a broader sampling of the phase space. For a broad set of complex questions that are encountered in conventional computation, this approach is very effective. In these patterns-oriented tasks a fluctuation in the size of data is akin to a thermal fluctuation. A thermodynamic view naturally applies to this computational style to information processing and from this reasoning one may estimate a variety of interesting consequences for computing: (a) efficiencies in energy, (b) complexity of tasks that can be tackled, (c) inaccuracies in inferences, and (d) limitations arising in the incompleteness of inputs and models. We employ toy model examples to reflect on these important themes to establish the following: (.)A dissipation minimum can be predicted predicated on the averaged information being discarded under constraints of minimization of energy and maximization of information preservation and entropy. Analogous to the $k_{B}T \\ln 2$ for the randomization of a bit, under biological constraints, the $\\sim \\! -70 \\; mV$ base and $\\sim \\! 40 \\;mV$ peak spike potential are then a natural consequence in a biological neural environment. Non-biological, that is, physical implementations can be analyzed by a similar approach for noisy and variability-prone thermodynamic setting. (.) In drawing inference, the resorting to Occam's razor as a statistical equivalent to the choice of simplest and least number of axioms in developing of a theory conflicts with Mencken's rule--for every complex problem, there is an answer that is clear, simple and wrong--as a reflection of dimensionality reduction. (.) Between these two factors, it is possible to make a measure of the error bound predicated on the averaged information being discarded and being filled in, and (.) This lets one predict the upper limits of information processing rate under constraints. These observations point to what may be achievable using neural and probabilistic computation through their physical implementation as reflected in the thermodynamics of the implementation of a statistical information mechanic engine that avoids computation via deterministic linear algebra.","PeriodicalId":125114,"journal":{"name":"2017 IEEE International Conference on Rebooting Computing (ICRC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Physical Underpinnings of the Unusual Effectiveness of Probabilistic and Neural Computation\",\"authors\":\"S. Tiwari, D. Querlioz\",\"doi\":\"10.1109/ICRC.2017.8123680\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Probabilistic and neural approaches, through their incorporation of nonlinearities and compression of states, enable a broader sampling of the phase space. For a broad set of complex questions that are encountered in conventional computation, this approach is very effective. In these patterns-oriented tasks a fluctuation in the size of data is akin to a thermal fluctuation. 
A thermodynamic view naturally applies to this computational style to information processing and from this reasoning one may estimate a variety of interesting consequences for computing: (a) efficiencies in energy, (b) complexity of tasks that can be tackled, (c) inaccuracies in inferences, and (d) limitations arising in the incompleteness of inputs and models. We employ toy model examples to reflect on these important themes to establish the following: (.)A dissipation minimum can be predicted predicated on the averaged information being discarded under constraints of minimization of energy and maximization of information preservation and entropy. Analogous to the $k_{B}T \\\\ln 2$ for the randomization of a bit, under biological constraints, the $\\\\sim \\\\! -70 \\\\; mV$ base and $\\\\sim \\\\! 40 \\\\;mV$ peak spike potential are then a natural consequence in a biological neural environment. Non-biological, that is, physical implementations can be analyzed by a similar approach for noisy and variability-prone thermodynamic setting. (.) In drawing inference, the resorting to Occam's razor as a statistical equivalent to the choice of simplest and least number of axioms in developing of a theory conflicts with Mencken's rule--for every complex problem, there is an answer that is clear, simple and wrong--as a reflection of dimensionality reduction. (.) Between these two factors, it is possible to make a measure of the error bound predicated on the averaged information being discarded and being filled in, and (.) This lets one predict the upper limits of information processing rate under constraints. These observations point to what may be achievable using neural and probabilistic computation through their physical implementation as reflected in the thermodynamics of the implementation of a statistical information mechanic engine that avoids computation via deterministic linear algebra.\",\"PeriodicalId\":125114,\"journal\":{\"name\":\"2017 IEEE International Conference on Rebooting Computing (ICRC)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Conference on Rebooting Computing (ICRC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRC.2017.8123680\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Rebooting Computing (ICRC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRC.2017.8123680","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Probabilistic and neural approaches, through their incorporation of nonlinearities and compression of states, enable a broader sampling of the phase space. For a broad set of complex questions encountered in conventional computation, this approach is very effective. In these pattern-oriented tasks, a fluctuation in the size of the data is akin to a thermal fluctuation. A thermodynamic view therefore applies naturally to this style of information processing, and from this reasoning one may estimate a variety of interesting consequences for computing: (a) efficiencies in energy, (b) the complexity of tasks that can be tackled, (c) inaccuracies in inference, and (d) limitations arising from the incompleteness of inputs and models. We employ toy-model examples to reflect on these themes and establish the following. (1) A dissipation minimum can be predicted, predicated on the average information being discarded, under the constraints of energy minimization and maximization of information preservation and entropy. Analogous to $k_{B}T \ln 2$ for the randomization of a bit, under biological constraints the $\sim -70\ \mathrm{mV}$ resting potential and $\sim 40\ \mathrm{mV}$ spike peak are then a natural consequence of the biological neural environment. Non-biological, that is, physical, implementations can be analyzed by a similar approach for noisy and variability-prone thermodynamic settings. (2) In drawing inference, resorting to Occam's razor, the statistical equivalent of choosing the simplest and fewest axioms when developing a theory, conflicts with Mencken's rule (for every complex problem there is an answer that is clear, simple, and wrong) as a reflection of dimensionality reduction. (3) Between these two factors, one can bound the error, predicated on the average information being discarded and being filled in, and (4) this lets one predict the upper limits of the information-processing rate under constraints. These observations point to what may be achievable with neural and probabilistic computation through their physical implementation, as reflected in the thermodynamics of a statistical information-mechanics engine that avoids computation via deterministic linear algebra.
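As a rough numerical sketch of the Landauer analogy above (the temperature and constants used here are standard physical values, not figures taken from the paper): at a physiological temperature of $T \approx 310\ \mathrm{K}$, $k_{B}T \approx 26.7\ \mathrm{meV}$, so the minimum dissipation for randomizing a bit is $k_{B}T \ln 2 \approx 18.5\ \mathrm{meV} \approx 3 \times 10^{-21}\ \mathrm{J}$. The excursion from the $\sim -70\ \mathrm{mV}$ resting potential to the $\sim 40\ \mathrm{mV}$ spike peak is $\sim 110\ \mathrm{mV}$, i.e. $\sim 110\ \mathrm{meV}$ per elementary charge, or roughly $4\, k_{B}T$; the spike signal thus sits only a few thermal units above the noise floor, which is the sense in which these potentials appear as a natural thermodynamic consequence.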