Explanatory models in neuroscience, Part 2: Functional intelligibility and the contravariance principle

Rosa Cao, Daniel Yamins
{"title":"神经科学中的解释模型,第 2 部分:功能可理解性和差异原则","authors":"Rosa Cao ,&nbsp;Daniel Yamins","doi":"10.1016/j.cogsys.2023.101200","DOIUrl":null,"url":null,"abstract":"<div><p>Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility, and how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally “top-down”, as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain <em>why</em> brain systems are as they are — because when a challenging ecologically-relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. By combining two familiar modes of explanation — one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints, these models have the potential to illuminate brain function.</p></div>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explanatory models in neuroscience, Part 2: Functional intelligibility and the contravariance principle\",\"authors\":\"Rosa Cao ,&nbsp;Daniel Yamins\",\"doi\":\"10.1016/j.cogsys.2023.101200\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility, and how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally “top-down”, as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain <em>why</em> brain systems are as they are — because when a challenging ecologically-relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. 
By combining two familiar modes of explanation — one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints, these models have the potential to illuminate brain function.</p></div>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2023-12-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041723001341\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041723001341","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Cited by: 0

Abstract


Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility, and how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally “top-down”, as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain why brain systems are as they are — because when a challenging ecologically-relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. By combining two familiar modes of explanation — one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints, these models have the potential to illuminate brain function.
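
The abstract's key claim, that a shared task objective constrains the mechanisms of any system optimized for it, can be made concrete in miniature. The sketch below is our illustration, not anything from the paper: the toy XOR task and all names in it are assumptions. Nothing about the solution is specified bottom-up; the network is shaped only by gradient descent on the task loss, yet the task already rules out whole classes of mechanism (for XOR, any purely linear one).

```python
# Minimal sketch (not from the paper) of "goal-driven" model construction:
# a tiny network is optimized for a fixed task, and the task objective,
# not any bottom-up specification, determines the mechanism it ends up with.
import numpy as np

rng = np.random.default_rng(0)

# Toy "ecologically relevant" task: XOR, which no linear mechanism can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Hypothetical architecture: 2 -> 4 -> 1 with tanh hidden units.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Cross-entropy loss: the "top-down" constraint on the mechanism.
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Backward pass (hand-derived gradients for this small model).
    dlogits = (p - y) / len(X)          # d loss / d pre-sigmoid activation
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T * (1 - h ** 2)  # backprop through tanh
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # In-place gradient descent updates.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad

print(f"final loss: {loss:.4f}")
print("predictions:", p.round(2).ravel())  # typically ~[0, 1, 1, 0]
```

This is the contravariance point in microcosm: the more demanding the task, the fewer mechanisms can satisfy it, so optimizing for the task alone already fixes qualitative features of the outcome, here a nonlinear hidden layer.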
