{"title":"人类在不同的学习环境中会适应性地选择不同的计算策略。","authors":"Pieter Verbeke, Tom Verguts","doi":"10.1037/rev0000474","DOIUrl":null,"url":null,"abstract":"<p><p>The Rescorla-Wagner rule remains the most popular tool to describe human behavior in reinforcement learning tasks. Nevertheless, it cannot fit human learning in complex environments. Previous work proposed several hierarchical extensions of this learning rule. However, it remains unclear when a flat (nonhierarchical) versus a hierarchical strategy is adaptive, or when it is implemented by humans. To address this question, current work applies a nested modeling approach to evaluate multiple models in multiple reinforcement learning environments both computationally (which approach performs best) and empirically (which approach fits human data best). We consider 10 empirical data sets (<i>N</i> = 407) divided over three reinforcement learning environments. Our results demonstrate that different environments are best solved with different learning strategies; and that humans adaptively select the learning strategy that allows best performance. Specifically, while flat learning fitted best in less complex stable learning environments, humans employed more hierarchically complex models in more complex environments. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":21016,"journal":{"name":"Psychological review","volume":" ","pages":""},"PeriodicalIF":5.1000,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Humans adaptively select different computational strategies in different learning environments.\",\"authors\":\"Pieter Verbeke, Tom Verguts\",\"doi\":\"10.1037/rev0000474\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The Rescorla-Wagner rule remains the most popular tool to describe human behavior in reinforcement learning tasks. Nevertheless, it cannot fit human learning in complex environments. Previous work proposed several hierarchical extensions of this learning rule. However, it remains unclear when a flat (nonhierarchical) versus a hierarchical strategy is adaptive, or when it is implemented by humans. To address this question, current work applies a nested modeling approach to evaluate multiple models in multiple reinforcement learning environments both computationally (which approach performs best) and empirically (which approach fits human data best). We consider 10 empirical data sets (<i>N</i> = 407) divided over three reinforcement learning environments. Our results demonstrate that different environments are best solved with different learning strategies; and that humans adaptively select the learning strategy that allows best performance. Specifically, while flat learning fitted best in less complex stable learning environments, humans employed more hierarchically complex models in more complex environments. 
(PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>\",\"PeriodicalId\":21016,\"journal\":{\"name\":\"Psychological review\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2024-04-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychological review\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/rev0000474\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/rev0000474","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY","Score":null,"Total":0}
Humans adaptively select different computational strategies in different learning environments.
The Rescorla-Wagner rule remains the most popular tool for describing human behavior in reinforcement learning tasks. Nevertheless, it cannot fit human learning in complex environments. Previous work proposed several hierarchical extensions of this learning rule. However, it remains unclear when a flat (nonhierarchical) versus a hierarchical strategy is adaptive, or when it is implemented by humans. To address this question, the current work applies a nested modeling approach to evaluate multiple models in multiple reinforcement learning environments both computationally (which approach performs best) and empirically (which approach fits human data best). We consider 10 empirical data sets (N = 407) divided across three reinforcement learning environments. Our results demonstrate that different environments are best solved with different learning strategies, and that humans adaptively select the learning strategy that yields the best performance. Specifically, while flat learning fitted best in less complex, stable learning environments, humans employed more hierarchically complex models in more complex environments. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
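For readers unfamiliar with the learning rule at the center of the abstract, below is a minimal sketch of a flat (nonhierarchical) Rescorla-Wagner update, V <- V + alpha * (reward - V), applied to a hypothetical two-armed bandit. The task, the learning rate, the payoff probabilities, and the epsilon-greedy choice rule are illustrative assumptions, not the tasks or models from the paper.

```python
import numpy as np

def rescorla_wagner_update(value, reward, alpha=0.1):
    """One flat Rescorla-Wagner update: V <- V + alpha * (reward - V)."""
    return value + alpha * (reward - value)

# Illustrative run on an assumed two-armed bandit (not the paper's tasks).
rng = np.random.default_rng(0)
values = np.zeros(2)        # one value estimate per option
reward_probs = [0.8, 0.2]   # assumed payoff probabilities
epsilon = 0.1               # assumed exploration rate

for _ in range(100):
    # Epsilon-greedy choice: mostly exploit the higher estimate.
    if rng.random() > epsilon:
        choice = int(np.argmax(values))
    else:
        choice = int(rng.integers(2))
    reward = float(rng.random() < reward_probs[choice])
    values[choice] = rescorla_wagner_update(values[choice], reward)

print(values)  # estimates drift toward the true reward probabilities
```

A hierarchical extension of the kind the paper evaluates would, roughly, maintain several such value tables and additionally learn which table applies in the current context; this sketch implements only the flat baseline.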
Journal Introduction:
Psychological Review publishes articles that make important theoretical contributions to any area of scientific psychology, including systematic evaluation of alternative theories.