Reinforcement learning in artificial intelligence and neurobiology

Tursun Alkam, Andrew H Van Benschoten, Ebrahim Tarshizi
Neuroscience informatics, Volume 5, Issue 3, Article 100220 (published 2025-07-22). DOI: 10.1016/j.neuri.2025.100220. URL: https://www.sciencedirect.com/science/article/pii/S2772528625000354
Citations: 0

Abstract

Reinforcement learning (RL), a computational framework rooted in behavioral psychology, enables agents to learn optimal actions through trial and error. It now powers intelligent systems across domains such as autonomous driving, robotics, and logistics, solving tasks once thought to require human cognition. As RL reshapes artificial intelligence (AI), it raises a critical question in neuroscience: does the brain learn through similar mechanisms? Growing evidence suggests it does.
To bridge this interdisciplinary gap, this review introduces core RL concepts to neuroscientists and clinicians with limited AI exposure. We outline the agent–environment interaction loop and describe key architectures including model-free, model-based, and meta-RL. We then examine how advances in deep RL have generated testable hypotheses about neural computation and behavior. In parallel, we discuss how neurobiological findings, especially the role of dopamine in encoding reward prediction errors, have inspired biologically grounded RL models. Empirical studies reveal neural correlates of RL algorithms in the basal ganglia, prefrontal cortex, and hippocampus, supporting their roles in planning, memory, and decision-making. We also highlight clinical applications, including how RL frameworks are used to model cognitive decline and psychiatric disorders, while acknowledging limitations in scaling RL to biological complexity.
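The dopamine reward-prediction-error signal discussed above is conventionally formalized as the temporal-difference (TD) error; the notation below is the standard textbook form, not an equation taken from this article:

```latex
\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)
```

Here \(r_{t+1}\) is the reward received, \(\gamma \in [0,1)\) discounts future value, and \(V(s)\) is the estimated value of a state. A positive \(\delta_t\) (outcome better than predicted) has been associated with increased phasic dopamine firing, and a negative \(\delta_t\) with a pause in firing.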
Looking ahead, RL offers powerful tools for understanding brain function, guiding brain–machine interfaces, and personalizing psychiatric treatment. The convergence of RL and neuroscience offers a promising interdisciplinary lens for advancing our understanding of learning and decision-making in both artificial agents and the human brain.
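The agent–environment interaction loop and model-free learning that the abstract describes can be sketched as a minimal tabular Q-learning agent on a toy chain environment. This is an illustrative sketch only (the environment, function name, and hyperparameters are our own assumptions, not from the article); the `delta` variable is the reward prediction error that the review links to dopamine signaling:

```python
import random

def train_q_learning(n_states=5, n_episodes=500, alpha=0.1,
                     gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: states 0..n_states-1,
    actions 0 = left, 1 = right. Reaching the rightmost state
    yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-table, all zeros
    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            # Agent half of the loop: epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            # Environment half of the loop: transition and reward
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Reward prediction error (TD error) -- the quantity
            # phasic dopamine activity is hypothesized to encode
            delta = r + gamma * max(q[s_next]) - q[s][a]
            q[s][a] += alpha * delta  # trial-and-error value update
            s = s_next
    return q
```

After training, the greedy policy moves right from every non-terminal state, illustrating how repeated prediction errors alone, with no model of the environment, shape optimal behavior (the "model-free" architecture mentioned above).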