Reinforcement learning at the interface of artificial intelligence and cognitive science

Impact Factor: 2.8 | CAS Tier 3 (Medicine) | JCR Q2 (Neuroscience)
Tursun Alkam, Ebrahim Tarshizi, Andrew H. Van Benschoten
{"title":"Reinforcement learning at the interface of artificial intelligence and cognitive science","authors":"Tursun Alkam,&nbsp;Ebrahim Tarshizi,&nbsp;Andrew H. Van Benschoten","doi":"10.1016/j.neuroscience.2025.09.004","DOIUrl":null,"url":null,"abstract":"<div><div>Reinforcement learning (RL) is a computational framework that models how agents learn from trial and error to make sequential decisions. Rooted in behavioural psychology, RL has become central to artificial intelligence and is increasingly applied in healthcare to personalize treatment strategies, optimize clinical workflows, guide robotic surgery, and adapt neurorehabilitation. These same properties, learning from outcomes in dynamic and uncertain environments, make RL a powerful lens for modelling human cognition. This review introduces RL to neuroscientists, clinicians, and psychologists, aiming to bridge artificial intelligence and brain science through accessible terminology and clinical analogies. We first outline foundational RL concepts and explain key algorithms such as temporal-difference learning, Q-learning, and policy gradient methods. We then connect RL mechanisms to neurobiological processes, including dopaminergic reward prediction errors, hippocampal replay, and frontostriatal loops, which support learning, planning, and habit formation. RL’s incorporation into cognitive architectures such as ACT-R, SOAR, and CLARION further demonstrates its utility in modelling attention, memory, decision-making, and language. Beyond these foundations, we critically examine RL’s capacity to explain human behaviour, from developmental changes to cognitive biases, and discuss emerging applications of deep RL in simulating complex cognitive tasks. Importantly, we argue that RL should be viewed not only as a modelling tool but as a unifying framework that highlights limitations in current methods and points toward new directions. Our perspective emphasizes hybrid symbolic–subsymbolic models, multi-agent RL for social cognition, and adaptive healthcare applications, offering a roadmap for interdisciplinary research that integrates computation, neuroscience, and clinical practice.</div></div>","PeriodicalId":19142,"journal":{"name":"Neuroscience","volume":"585 ","pages":"Pages 289-312"},"PeriodicalIF":2.8000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306452225009182","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Reinforcement learning (RL) is a computational framework that models how agents learn from trial and error to make sequential decisions. Rooted in behavioural psychology, RL has become central to artificial intelligence and is increasingly applied in healthcare to personalize treatment strategies, optimize clinical workflows, guide robotic surgery, and adapt neurorehabilitation. These same properties, learning from outcomes in dynamic and uncertain environments, make RL a powerful lens for modelling human cognition. This review introduces RL to neuroscientists, clinicians, and psychologists, aiming to bridge artificial intelligence and brain science through accessible terminology and clinical analogies. We first outline foundational RL concepts and explain key algorithms such as temporal-difference learning, Q-learning, and policy gradient methods. We then connect RL mechanisms to neurobiological processes, including dopaminergic reward prediction errors, hippocampal replay, and frontostriatal loops, which support learning, planning, and habit formation. RL’s incorporation into cognitive architectures such as ACT-R, SOAR, and CLARION further demonstrates its utility in modelling attention, memory, decision-making, and language. Beyond these foundations, we critically examine RL’s capacity to explain human behaviour, from developmental changes to cognitive biases, and discuss emerging applications of deep RL in simulating complex cognitive tasks. Importantly, we argue that RL should be viewed not only as a modelling tool but as a unifying framework that highlights limitations in current methods and points toward new directions. Our perspective emphasizes hybrid symbolic–subsymbolic models, multi-agent RL for social cognition, and adaptive healthcare applications, offering a roadmap for interdisciplinary research that integrates computation, neuroscience, and clinical practice.
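To make the algorithmic terms above concrete, here is a minimal, illustrative sketch of tabular Q-learning, a temporal-difference method, in Python. It is not code from the reviewed paper: the environment object `env`, its `reset`/`step` interface, and the counts `n_states` and `n_actions` are assumed placeholders. The `td_error` quantity is the computational analogue of the dopaminergic reward prediction error discussed in the review.

```python
# Minimal sketch of tabular Q-learning (a temporal-difference method).
# Assumes a hypothetical environment `env` with discrete states/actions,
# env.reset() -> state, and env.step(a) -> (next_state, reward, done).
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))  # action-value table
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection (explore vs. exploit)
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(Q[s].argmax())
            s_next, r, done = env.step(a)
            # Temporal-difference error: the mismatch between received and
            # predicted value, analogous to the dopaminergic reward
            # prediction error described in the review.
            target = r + gamma * (0.0 if done else Q[s_next].max())
            td_error = target - Q[s, a]
            Q[s, a] += alpha * td_error  # incremental value update
            s = s_next
    return Q
```

As a usage note, the learned table `Q` defines a greedy policy (pick the action with the highest value in each state); deep RL methods mentioned in the abstract replace this table with a neural network approximator.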


Source journal: Neuroscience (Medicine - Neuroscience)
CiteScore: 6.20
Self-citation rate: 0.00%
Publication volume: 394
Review time: 52 days
Journal description: Neuroscience publishes papers describing the results of original research on any aspect of the scientific study of the nervous system. Any paper, however short, will be considered for publication provided that it reports significant, new and carefully confirmed findings with full experimental details.