The devilish details affecting TDRL models in dopamine research
Zhewei Zhang, Kauê M Costa, Angela J Langdon, Geoffrey Schoenbaum
Trends in Cognitive Sciences, published 2025-02-26
DOI: 10.1016/j.tics.2025.02.001
Citations: 0
Abstract
Over recent decades, temporal difference reinforcement learning (TDRL) models have successfully explained much dopamine (DA) activity. This success has invited heightened scrutiny of late, with many studies challenging the validity of TDRL models of DA function. Yet, when evaluating the validity of these models, the devil is truly in the details. TDRL is a broad class of algorithms sharing core ideas but differing greatly in implementation and predictions. Thus, it is important to identify the defining aspects of the TDRL framework being tested and to use state spaces and model architectures that capture the known complexity of the behavioral representations and neural systems involved. Here, we discuss several examples that illustrate the importance of these considerations.
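For readers unfamiliar with the core computation shared across TDRL variants, the sketch below shows a minimal tabular TD(0) value update. The state names, learning rate, and discount factor are illustrative assumptions, not details drawn from the article; the temporal difference error (delta) is the quantity classically compared with phasic dopamine responses.

```python
# Minimal tabular TD(0) sketch (illustrative assumptions: state names,
# alpha, and gamma are not taken from the article).
# The TD error (delta) is the signal classically mapped onto phasic DA activity.

def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.95):
    """Apply one temporal-difference update to the value table V and return the TD error."""
    delta = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)  # prediction error
    V[state] = V.get(state, 0.0) + alpha * delta
    return delta

# Example: a cue ('CS') reliably followed by reward ('US') over repeated trials.
V = {}
for trial in range(50):
    td_update(V, 'CS', 0.0, 'US')   # cue transitions to the reward state; no reward yet
    td_update(V, 'US', 1.0, 'end')  # reward is delivered, then the trial ends
print(V)  # the cue's value comes to predict the discounted upcoming reward
```

As the article emphasizes, implementations that share this update rule can still make very different predictions depending on how the state space is defined, which is why the details of the tested framework matter.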
Journal description:
Essential reading for those working directly in the cognitive sciences or in related specialist areas, Trends in Cognitive Sciences provides an instant overview of current thinking for scientists, students and teachers who want to keep up with the latest developments in the cognitive sciences. The journal brings together research in psychology, artificial intelligence, linguistics, philosophy, computer science and neuroscience. Trends in Cognitive Sciences provides a platform for the interaction of these disciplines and the evolution of cognitive science as an independent field of study.