Can reinforcement learning explain the development of causal inference in multisensory integration?

Thomas H. Weisswange, C. Rothkopf, Tobias Rodemann, J. Triesch
{"title":"Can reinforcement learning explain the development of causal inference in multisensory integration?","authors":"Thomas H. Weisswange, C. Rothkopf, Tobias Rodemann, J. Triesch","doi":"10.1109/DEVLRN.2009.5175531","DOIUrl":null,"url":null,"abstract":"Bayesian inference techniques have been used to understand the performance of human subjects on a large number of sensory tasks. Particularly, it has been shown that humans integrate sensory inputs from multiple cues in an optimal way in many conditions. Recently it has also been proposed that causal inference [1] can well describe the way humans select the most plausible model for a given input. It is still unclear how those problems are solved in the brain. Also, considering that infants do not yet behave as ideal observers [2]–[4], it is interesting to ask how the related abilities can develop. We present a reinforcement learning approach to this problem. An orienting task is used in which we reward the model for a correct movement to the origin of noisy audio visual signals. We show that the model learns to do cue-integration and model selection, in this case inferring the number of objects. Its behaviour also includes differences in reliability between the two modalities. 
All of that comes without any prior knowledge by simple interaction with the environment.","PeriodicalId":192225,"journal":{"name":"2009 IEEE 8th International Conference on Development and Learning","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE 8th International Conference on Development and Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2009.5175531","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Bayesian inference techniques have been used to understand the performance of human subjects on a large number of sensory tasks. In particular, it has been shown that humans integrate sensory inputs from multiple cues in an optimal way under many conditions. Recently it has also been proposed that causal inference [1] describes well how humans select the most plausible model for a given input. It is still unclear how these problems are solved in the brain. Moreover, considering that infants do not yet behave as ideal observers [2]–[4], it is interesting to ask how the related abilities develop. We present a reinforcement learning approach to this problem. An orienting task is used in which we reward the model for a correct movement to the origin of noisy audio-visual signals. We show that the model learns to perform cue integration and model selection, in this case inferring the number of objects. Its behaviour also accounts for differences in reliability between the two modalities. All of this is achieved without any prior knowledge, through simple interaction with the environment.
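The core idea of the paper can be illustrated with a minimal sketch. The following is not the authors' implementation; it is a toy tabular Q-learning agent on a one-dimensional position grid, in which the grid size, noise levels, and learning parameters are all arbitrary assumptions. A single audio-visual source emits a low-noise visual cue and a high-noise auditory cue; the agent is rewarded only for orienting to the true source position. With no prior knowledge of the noise statistics, the reward signal alone pushes the learned policy toward reliability-weighted cue combination:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10            # number of discrete positions (assumption: the paper uses its own discretization)
SIGMA_V = 0.5     # visual noise std. dev. (vision is the more reliable cue here)
SIGMA_A = 1.5     # auditory noise std. dev.
ALPHA, EPS = 0.1, 0.2

# Q-table indexed by (visual cue, auditory cue) -> value of each orienting action
Q = np.zeros((N, N, N))

def noisy(pos, sigma):
    """Discretized noisy observation of a true source position."""
    return int(np.clip(round(pos + rng.normal(0.0, sigma)), 0, N - 1))

for episode in range(100_000):
    true_pos = rng.integers(N)                # one audio-visual object
    v = noisy(true_pos, SIGMA_V)              # visual cue
    a = noisy(true_pos, SIGMA_A)              # auditory cue
    if rng.random() < EPS:
        action = int(rng.integers(N))         # explore
    else:
        action = int(np.argmax(Q[v, a]))      # orient greedily
    reward = 1.0 if action == true_pos else 0.0
    # One-step (bandit-style) update: Q approaches P(source at action | cues)
    Q[v, a, action] += ALPHA * (reward - Q[v, a, action])

# When the cues disagree, the learned orientation should lie closer to the
# reliable visual cue -- reliability-weighted combination emerges from reward alone.
print("cues v=4, a=6 -> orient to", int(np.argmax(Q[4, 6])))
```

For conflicting cues the greedy action ends up near the visual estimate, since rewards almost never arrive at positions consistent only with the noisy auditory cue. Extending this sketch to the paper's model-selection result would mean sometimes drawing two independent sources, so the agent must also implicitly infer the number of objects before orienting.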