Human-like Learning in Temporally Structured Environments

Matt Jones, Tyler R. Scott, Michael C. Mozer
{"title":"在时间结构环境中进行类人学习","authors":"Matt Jones, Tyler R. Scott, Michael C. Mozer","doi":"10.1609/aaaiss.v3i1.31273","DOIUrl":null,"url":null,"abstract":"Natural environments have correlations at a wide range of timescales. Human cognition is tuned to this temporal structure, as seen by power laws of learning and memory, and by spacing effects whereby the intervals between repeated training data affect how long knowledge is retained. Machine learning is instead dominated by batch iid training or else relatively simple nonstationarity assumptions such as random walks or discrete task sequences.\n\nThe main contributions of our work are:\n(1) We develop a Bayesian model formalizing the brain's inductive bias for temporal structure\nand show our model accounts for key features of human learning and memory.\n(2) We translate the model into a new gradient-based optimization technique for neural networks that endows them with human-like temporal inductive bias and improves their performance in realistic nonstationary tasks.\n\nOur technical approach is founded on Bayesian inference over 1/f noise, a statistical signature of many natural environments with long-range, power law correlations. We derive a new closed-form solution to this problem by treating the state of the environment as a sum of processes on different timescales and applying an extended Kalman filter to learn all timescales jointly. \n\nWe then derive a variational approximation of this model for training neural networks, which can be used as a drop-in replacement for standard optimizers in arbitrary architectures. Our optimizer decomposes each weight in the network as a sum of subweights with different learning and decay rates and tracks their joint uncertainty. Thus knowledge becomes distributed across timescales, enabling rapid adaptation to task changes while retaining long-term knowledge and avoiding catastrophic interference. Simulations show improved performance in environments with realistic multiscale nonstationarity.\n\nFinally, we present simulations showing our model gives essentially parameter-free fits of learning, forgetting, and spacing effects in human data. We then explore the analogue of human spacing effects in a deep net trained in a structured environment where tasks recur at different rates and compare the model's behavioral properties to those of people.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Human-like Learning in Temporally Structured Environments\",\"authors\":\"Matt Jones, Tyler R. Scott, Michael C. Mozer\",\"doi\":\"10.1609/aaaiss.v3i1.31273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Natural environments have correlations at a wide range of timescales. Human cognition is tuned to this temporal structure, as seen by power laws of learning and memory, and by spacing effects whereby the intervals between repeated training data affect how long knowledge is retained. 
Machine learning is instead dominated by batch iid training or else relatively simple nonstationarity assumptions such as random walks or discrete task sequences.\\n\\nThe main contributions of our work are:\\n(1) We develop a Bayesian model formalizing the brain's inductive bias for temporal structure\\nand show our model accounts for key features of human learning and memory.\\n(2) We translate the model into a new gradient-based optimization technique for neural networks that endows them with human-like temporal inductive bias and improves their performance in realistic nonstationary tasks.\\n\\nOur technical approach is founded on Bayesian inference over 1/f noise, a statistical signature of many natural environments with long-range, power law correlations. We derive a new closed-form solution to this problem by treating the state of the environment as a sum of processes on different timescales and applying an extended Kalman filter to learn all timescales jointly. \\n\\nWe then derive a variational approximation of this model for training neural networks, which can be used as a drop-in replacement for standard optimizers in arbitrary architectures. Our optimizer decomposes each weight in the network as a sum of subweights with different learning and decay rates and tracks their joint uncertainty. Thus knowledge becomes distributed across timescales, enabling rapid adaptation to task changes while retaining long-term knowledge and avoiding catastrophic interference. Simulations show improved performance in environments with realistic multiscale nonstationarity.\\n\\nFinally, we present simulations showing our model gives essentially parameter-free fits of learning, forgetting, and spacing effects in human data. We then explore the analogue of human spacing effects in a deep net trained in a structured environment where tasks recur at different rates and compare the model's behavioral properties to those of people.\",\"PeriodicalId\":516827,\"journal\":{\"name\":\"Proceedings of the AAAI Symposium Series\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the AAAI Symposium Series\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1609/aaaiss.v3i1.31273\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31273","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Natural environments have correlations at a wide range of timescales. Human cognition is tuned to this temporal structure, as seen by power laws of learning and memory, and by spacing effects whereby the intervals between repeated training data affect how long knowledge is retained. Machine learning is instead dominated by batch i.i.d. training or else relatively simple nonstationarity assumptions such as random walks or discrete task sequences.

The main contributions of our work are: (1) We develop a Bayesian model formalizing the brain's inductive bias for temporal structure and show our model accounts for key features of human learning and memory. (2) We translate the model into a new gradient-based optimization technique for neural networks that endows them with human-like temporal inductive bias and improves their performance in realistic nonstationary tasks.

Our technical approach is founded on Bayesian inference over 1/f noise, a statistical signature of many natural environments with long-range, power-law correlations. We derive a new closed-form solution to this problem by treating the state of the environment as a sum of processes on different timescales and applying an extended Kalman filter to learn all timescales jointly.

We then derive a variational approximation of this model for training neural networks, which can be used as a drop-in replacement for standard optimizers in arbitrary architectures. Our optimizer decomposes each weight in the network as a sum of subweights with different learning and decay rates and tracks their joint uncertainty. Thus knowledge becomes distributed across timescales, enabling rapid adaptation to task changes while retaining long-term knowledge and avoiding catastrophic interference. Simulations show improved performance in environments with realistic multiscale nonstationarity.

Finally, we present simulations showing our model gives essentially parameter-free fits of learning, forgetting, and spacing effects in human data. We then explore the analogue of human spacing effects in a deep net trained in a structured environment where tasks recur at different rates and compare the model's behavioral properties to those of people.
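To make the environment model described above concrete, the following is a minimal illustrative sketch (not the authors' code) of the idea that a 1/f-like signal can be written as a sum of processes on different timescales: a few AR(1) components with geometrically spaced time constants and equal stationary variances, whose sum has an approximately 1/f power spectrum over the covered range. The number of scales, the spacing factor, and the function name `sample_multiscale_state` are assumptions chosen for illustration.

```python
import numpy as np

def sample_multiscale_state(n_steps, n_scales=6, spacing=4.0, seed=None):
    """Illustrative sketch (not the paper's implementation): draw an
    environment-state trajectory as a sum of AR(1) (discrete OU)
    processes with geometrically spaced time constants. With equal
    stationary variances per component, the sum has an approximately
    1/f power spectrum over the range of included timescales."""
    rng = np.random.default_rng(seed)
    taus = spacing ** np.arange(n_scales)        # time constants 1, 4, 16, ...
    alphas = np.exp(-1.0 / taus)                 # per-step retention of each AR(1)
    sigmas = np.sqrt(1.0 - alphas ** 2)          # innovation scale -> unit variance
    x = np.zeros(n_scales)                       # latent component states
    trajectory = np.empty(n_steps)
    for t in range(n_steps):
        x = alphas * x + sigmas * rng.standard_normal(n_scales)
        trajectory[t] = x.sum()                  # observed state = sum over timescales
    return trajectory
```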
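The optimizer idea, decomposing each weight into subweights with different learning and decay rates, can likewise be caricatured as below. This is a hedged sketch rather than the authors' method: it assumes a simple geometric schedule of rates and a plain decay-and-step update, and it omits the joint uncertainty tracking (the extended-Kalman-filter / variational machinery) that the abstract describes. The class name `MultiTimescaleOptimizer` and all hyperparameters are illustrative.

```python
import numpy as np

class MultiTimescaleOptimizer:
    """Illustrative sketch: a parameter is stored as a sum of K subweights.
    Fast subweights have large learning and decay rates (adapt quickly,
    forget quickly); slow subweights have small rates (retain long-term
    knowledge). The actual model also tracks joint uncertainty, omitted here."""

    def __init__(self, shape, n_scales=4, base_lr=0.1, base_decay=0.1, spacing=0.25):
        # Geometrically spaced rates: scale k learns and decays `spacing`
        # times slower than scale k-1 (an assumed schedule).
        self.lrs = base_lr * spacing ** np.arange(n_scales)
        self.decays = base_decay * spacing ** np.arange(n_scales)
        self.sub = np.zeros((n_scales,) + tuple(shape))   # one copy per timescale

    @property
    def weight(self):
        # The effective weight used by the network is the sum of subweights.
        return self.sub.sum(axis=0)

    def step(self, grad):
        # Each subweight decays toward zero at its own rate and takes a
        # gradient step at its own learning rate.
        for k in range(len(self.lrs)):
            self.sub[k] = (1.0 - self.decays[k]) * self.sub[k] - self.lrs[k] * grad
        return self.weight
```

In use, a network would read `opt.weight` in the forward pass and call `opt.step(grad)` on each update. After a task switch, the fast subweights re-adapt quickly while the slow subweights preserve earlier knowledge, which is the qualitative behavior the abstract attributes to distributing knowledge across timescales.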