Suhas E. Chelian, Jaehyon Paik, P. Pirolli, C. Lebiere, Rajan Bhattacharyya
DOI: 10.1109/DEVLRN.2015.7346127
Published in: 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
Publication date: 2015-12-07
Citation count: 5
Reinforcement learning and instance-based learning approaches to modeling human decision making in a prognostic foraging task
Procedural memory and episodic memory are known to be distinct, and both underlie performance on many tasks. Reinforcement learning (RL) and instance-based learning (IBL) are common approaches to modeling procedural and episodic memory, respectively. In this work, we apply a neural model using RL dynamics and an ACT-R model using IBL productions to the task of modeling human decision making in a prognostic foraging task. The task derives from a geospatial intelligence domain in which agents must choose among information sources to more accurately predict an adversary's actions. Results from both models are compared to human data and suggest that information gain is an important component of decision-making behavior under either memory system; compared to the episodic memory approach, the procedural memory approach provides a small but significant advantage in fitting the human data. Finally, we discuss the interactions of multiple memory systems in complex decision-making tasks.
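To make the contrast between the two modeling approaches concrete, here is a minimal illustrative sketch, not the paper's actual models: a procedural-memory-style RL learner (incremental value updates) and an episodic-memory-style IBL learner (decisions blended from stored outcome instances) each choosing between two hypothetical information sources in a bandit stand-in for the foraging task. The source names and their payoff probabilities are invented for illustration; ACT-R's actual IBL mechanism uses activation-weighted blended retrieval with decay and noise, which this simple instance-averaging only crudely approximates.

```python
import random
from collections import defaultdict

SOURCES = ["imagery", "signals"]               # hypothetical information sources
TRUE_GAIN = {"imagery": 0.7, "signals": 0.4}   # assumed chance each source is informative

def sample_gain(source):
    """Noisy binary reward: 1 if the source yielded useful information, else 0."""
    return 1.0 if random.random() < TRUE_GAIN[source] else 0.0

def rl_agent(trials=500, alpha=0.1, epsilon=0.1):
    """Procedural-memory analogue: epsilon-greedy incremental value learning."""
    q = {s: 0.0 for s in SOURCES}
    for _ in range(trials):
        if random.random() < epsilon:
            s = random.choice(SOURCES)         # explore
        else:
            s = max(q, key=q.get)              # exploit current value estimates
        r = sample_gain(s)
        q[s] += alpha * (r - q[s])             # incremental value update
    return q

def ibl_agent(trials=500, epsilon=0.1):
    """Episodic-memory analogue: store each outcome as an instance and choose
    by averaging stored instances (a crude stand-in for blended retrieval)."""
    instances = defaultdict(list)              # source -> list of observed gains
    for s in SOURCES:
        instances[s].append(sample_gain(s))    # prime memory with one instance each
    for _ in range(trials):
        if random.random() < epsilon:
            s = random.choice(SOURCES)
        else:
            est = {s: sum(v) / len(v) for s, v in instances.items()}
            s = max(est, key=est.get)
        instances[s].append(sample_gain(s))    # every experience becomes an instance
    return {s: sum(v) / len(v) for s, v in instances.items()}

random.seed(0)
print(rl_agent())
print(ibl_agent())
```

Both learners converge toward preferring the higher-gain source; the key structural difference is that the RL agent compresses experience into a single running value per option, while the IBL agent retains individual episodes and derives a choice from them at decision time.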