Tursun Alkam, Ebrahim Tarshizi, Andrew H. Van Benschoten
Reinforcement learning at the interface of artificial intelligence and cognitive science

DOI: 10.1016/j.neuroscience.2025.09.004
Journal: Neuroscience, Volume 585, Pages 289-312 (JCR Q2, Neurosciences)
Published: 2025-09-09 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0306452225009182
Citations: 0
Abstract
Reinforcement learning (RL) is a computational framework that models how agents learn from trial and error to make sequential decisions. Rooted in behavioural psychology, RL has become central to artificial intelligence and is increasingly applied in healthcare to personalize treatment strategies, optimize clinical workflows, guide robotic surgery, and adapt neurorehabilitation. These same properties, learning from outcomes in dynamic and uncertain environments, make RL a powerful lens for modelling human cognition. This review introduces RL to neuroscientists, clinicians, and psychologists, aiming to bridge artificial intelligence and brain science through accessible terminology and clinical analogies. We first outline foundational RL concepts and explain key algorithms such as temporal-difference learning, Q-learning, and policy gradient methods. We then connect RL mechanisms to neurobiological processes, including dopaminergic reward prediction errors, hippocampal replay, and frontostriatal loops, which support learning, planning, and habit formation. RL’s incorporation into cognitive architectures such as ACT-R, SOAR, and CLARION further demonstrates its utility in modelling attention, memory, decision-making, and language. Beyond these foundations, we critically examine RL’s capacity to explain human behaviour, from developmental changes to cognitive biases, and discuss emerging applications of deep RL in simulating complex cognitive tasks. Importantly, we argue that RL should be viewed not only as a modelling tool but as a unifying framework that highlights limitations in current methods and points toward new directions. Our perspective emphasizes hybrid symbolic–subsymbolic models, multi-agent RL for social cognition, and adaptive healthcare applications, offering a roadmap for interdisciplinary research that integrates computation, neuroscience, and clinical practice.
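The algorithms named above (temporal-difference learning, Q-learning) and the "reward prediction error" the review links to dopaminergic signalling can be made concrete with a minimal sketch. The toy chain environment, constants, and variable names below are illustrative assumptions, not code from the paper: an agent on a 5-state chain learns, by tabular Q-learning, that moving right leads to reward, and the TD error computed at each step is exactly the quantity the review identifies with the dopaminergic prediction-error signal.

```python
import random

random.seed(0)

# Minimal tabular Q-learning on a hypothetical 5-state chain MDP.
# States 0..4; actions: 0 = left, 1 = right; reward 1.0 only on reaching state 4.
N_STATES = 5
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

def step(state, action):
    """Deterministic chain dynamics: move left or right; terminal reward at the end."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Temporal-difference (TD) error: the "reward prediction error"
        # the review maps onto phasic dopamine signalling.
        td_error = reward + GAMMA * max(Q[next_state]) * (not done) - Q[state][action]
        Q[state][action] += ALPHA * td_error
        state = next_state

# The learned greedy policy should move right (toward the reward) in every state.
policy = [0 if q[0] > q[1] else 1 for q in Q]
print(policy)  # → [1, 1, 1, 1, 1]
```

Early in training the TD error is large on rewarded transitions and shrinks as predictions improve, mirroring the shift of dopaminergic responses from rewards to reward-predicting cues described in the neuroscience literature the review draws on.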
Journal description:
Neuroscience publishes papers describing the results of original research on any aspect of the scientific study of the nervous system. Any paper, however short, will be considered for publication provided that it reports significant, new and carefully confirmed findings with full experimental details.