Decomposed Inductive Procedure Learning: Learning Academic Tasks with Human-Like Data Efficiency

Daniel Weitekamp
{"title":"分解归纳式程序学习:以类似人类的数据效率学习学术任务","authors":"Daniel Weitekamp","doi":"10.1609/aaaiss.v3i1.31289","DOIUrl":null,"url":null,"abstract":"Human brains have many differently functioning regions which play specialized roles in learning. By contrast, methods for training artificial neural networks, such as reinforcement-learning, typically learn exclusively via a single mechanism: gradient descent. This raises the question: might human learners’ advantage in learning efficiency over deep-learning be attributed to the interplay between multiple specialized mechanisms of learning? In this work we review a series of simulated learner systems which have been built with the aim of modeling human student’s inductive learning as they practice STEM procedural tasks. By comparison to modern deep-learning based methods which train on thousands to millions of examples to acquire passing performance capabilities, these simulated learners match human performance curves---achieving passing levels of performance within about a dozen practice opportunities. We investigate this impressive learning efficiency via an ablation analysis. Beginning with end-to-end reinforcement learning (1-mechanism), we decompose learning systems incrementally to construct the 3-mechanism inductive learning characteristic of prior simulated learners such as Sierra, SimStudent and the Apprentice Learner Architecture. Our analysis shows that learning decomposition plays a significant role in achieving data-efficient learning on par with human learners---a greater role even than simple distinctions between symbolic/subsymbolic learning. Finally we highlight how this breakdown in learning mechanisms can flexibly incorporate diverse forms of natural language and interface grounded instruction, and discuss opportunities for using these flexible learning capabilities in interactive task learning systems that learn directly from a user’s natural instruction.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Decomposed Inductive Procedure Learning: Learning Academic Tasks with Human-Like Data Efficiency\",\"authors\":\"Daniel Weitekamp\",\"doi\":\"10.1609/aaaiss.v3i1.31289\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human brains have many differently functioning regions which play specialized roles in learning. By contrast, methods for training artificial neural networks, such as reinforcement-learning, typically learn exclusively via a single mechanism: gradient descent. This raises the question: might human learners’ advantage in learning efficiency over deep-learning be attributed to the interplay between multiple specialized mechanisms of learning? In this work we review a series of simulated learner systems which have been built with the aim of modeling human student’s inductive learning as they practice STEM procedural tasks. By comparison to modern deep-learning based methods which train on thousands to millions of examples to acquire passing performance capabilities, these simulated learners match human performance curves---achieving passing levels of performance within about a dozen practice opportunities. We investigate this impressive learning efficiency via an ablation analysis. 
Beginning with end-to-end reinforcement learning (1-mechanism), we decompose learning systems incrementally to construct the 3-mechanism inductive learning characteristic of prior simulated learners such as Sierra, SimStudent and the Apprentice Learner Architecture. Our analysis shows that learning decomposition plays a significant role in achieving data-efficient learning on par with human learners---a greater role even than simple distinctions between symbolic/subsymbolic learning. Finally we highlight how this breakdown in learning mechanisms can flexibly incorporate diverse forms of natural language and interface grounded instruction, and discuss opportunities for using these flexible learning capabilities in interactive task learning systems that learn directly from a user’s natural instruction.\",\"PeriodicalId\":516827,\"journal\":{\"name\":\"Proceedings of the AAAI Symposium Series\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the AAAI Symposium Series\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1609/aaaiss.v3i1.31289\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31289","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Human brains have many differently functioning regions which play specialized roles in learning. By contrast, methods for training artificial neural networks, such as reinforcement learning, typically learn exclusively via a single mechanism: gradient descent. This raises the question: might human learners' advantage in learning efficiency over deep learning be attributed to the interplay between multiple specialized mechanisms of learning? In this work we review a series of simulated learner systems built with the aim of modeling human students' inductive learning as they practice STEM procedural tasks. Compared to modern deep-learning-based methods, which train on thousands to millions of examples to acquire passing performance, these simulated learners match human performance curves, achieving passing levels of performance within about a dozen practice opportunities. We investigate this impressive learning efficiency via an ablation analysis. Beginning with end-to-end reinforcement learning (1-mechanism), we decompose learning systems incrementally to construct the 3-mechanism inductive learning characteristic of prior simulated learners such as Sierra, SimStudent, and the Apprentice Learner Architecture. Our analysis shows that learning decomposition plays a significant role in achieving data-efficient learning on par with human learners, a greater role even than simple distinctions between symbolic and subsymbolic learning. Finally, we highlight how this breakdown of learning mechanisms can flexibly incorporate diverse forms of natural-language and interface-grounded instruction, and discuss opportunities for using these flexible learning capabilities in interactive task-learning systems that learn directly from a user's natural instruction.
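To make the abstract's central idea concrete, here is a minimal, hypothetical Python sketch of the kind of three-mechanism decomposition it describes, in the spirit of SimStudent and the Apprentice Learner Architecture: a "how" search that explains a demonstrated value as an application of a known operator, a "where" generalization over the interface fields a skill draws its arguments from, and a "when" learner that induces preconditions from correctness feedback. All function names, field names, and the single-step operator search are illustrative assumptions, not the implementation of the reviewed systems.

# Hypothetical sketch (not the reviewed systems' code): decomposing skill
# learning into three specialized mechanisms, roughly analogous to the
# how/where/when learning described for SimStudent and the Apprentice
# Learner Architecture. All names here are illustrative assumptions.

from itertools import product

# "How" mechanism: explain a demonstrated value as a single application of a
# known operator to values visible in the interface state.
OPERATORS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def how_search(state, demonstrated_value):
    """Return an (operator, argument-fields) explanation of the demonstration."""
    for op_name, op in OPERATORS.items():
        for (f1, v1), (f2, v2) in product(state.items(), repeat=2):
            if op(v1, v2) == demonstrated_value:
                return {"op": op_name, "args": (f1, f2)}
    return None

# "Where" mechanism: generalize which interface fields the skill's arguments
# come from, here via a crude intersection over the demonstrations seen so far.
def where_generalize(explanations):
    shared = set(explanations[0]["args"])
    for ex in explanations[1:]:
        shared &= set(ex["args"])
    return shared

# "When" mechanism: induce preconditions from correctness feedback, here as the
# conjunction of features shared by all positive examples (specific-to-general).
class WhenLearner:
    def __init__(self):
        self.hypothesis = None  # conjunction of features required to fire
        self.negatives = []     # negative examples (stored, unused in this sketch)

    def update(self, features, correct):
        fs = frozenset(features)
        if correct:
            self.hypothesis = fs if self.hypothesis is None else self.hypothesis & fs
        else:
            self.negatives.append(fs)

    def predict(self, features):
        return self.hypothesis is not None and self.hypothesis <= frozenset(features)

# Two demonstrations of a column-addition step, each explained, then generalized.
exp1 = how_search({"top": 7, "bottom": 5}, demonstrated_value=12)
exp2 = how_search({"top": 8, "bottom": 6}, demonstrated_value=14)
print("how:", exp1, exp2)                         # both explained by "add"
print("where:", where_generalize([exp1, exp2]))   # fields shared across demos

when = WhenLearner()
when.update({"answer_empty", "column_unprocessed"}, correct=True)
when.update({"answer_empty", "column_unprocessed", "is_last_column"}, correct=True)
when.update({"answer_empty"}, correct=False)
print("when:", when.predict({"answer_empty", "column_unprocessed"}))  # True

In the ablation the abstract describes, collapsing such mechanisms back into a single end-to-end learner is what degrades data efficiency; the sketch only illustrates the division of labor, not the induction algorithms the reviewed papers actually use.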