Trying anyways: How ignoring the errors may help in learning new skills
B. Grzyb, J. Boedecker, M. Asada, A. P. Pobil, Linda B. Smith
2011 IEEE International Conference on Development and Learning (ICDL), 2011-10-10. DOI: 10.1109/DEVLRN.2011.6037333
The traditional view stresses the role of errors in the learning process. Results from our experiment with older infants suggest that ignoring errors during learning can also be beneficial. We propose that a temporary decrease in learning from negative feedback could be an efficient mechanism behind infants learning new skills. Here, we claim that disregarding errors is tightly connected to the sense of control, and results from an extremely high level of self-efficacy (overconfidence). Our preliminary results with a robot simulator serve as a proof of concept for our approach, and suggest a possible new route for constraints balancing exploration and exploitation in intrinsically motivated reinforcement learning.
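To make the proposed mechanism concrete, below is a minimal sketch, not the authors' implementation: a simple bandit-style learner whose update rule damps negative prediction errors while a running self-efficacy estimate is high. The task, the constants (NEG_DISCOUNT, SELF_EFFICACY_THRESHOLD), and the efficacy estimator are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of "temporarily ignoring negative feedback" in a
# value-learning agent. All task and parameter choices are assumptions.
import random

N_ARMS = 5
TRUE_P = [0.1, 0.2, 0.3, 0.4, 0.8]   # success probability of each action
ALPHA = 0.1                           # base learning rate
NEG_DISCOUNT = 0.1                    # damping applied to negative errors
SELF_EFFICACY_THRESHOLD = 0.5         # above this, errors are down-weighted

def run(n_trials=2000, ignore_errors=True, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_ARMS                # action-value estimates
    efficacy = 0.5                    # running estimate of recent success
    successes = 0.0
    for _ in range(n_trials):
        # epsilon-greedy action selection
        if rng.random() < 0.1:
            a = rng.randrange(N_ARMS)
        else:
            a = max(range(N_ARMS), key=lambda i: q[i])
        r = 1.0 if rng.random() < TRUE_P[a] else 0.0
        err = r - q[a]
        # Core idea: while self-efficacy is high, learn less from
        # negative prediction errors instead of weighting errors
        # symmetrically.
        lr = ALPHA
        if ignore_errors and err < 0 and efficacy > SELF_EFFICACY_THRESHOLD:
            lr *= NEG_DISCOUNT
        q[a] += lr * err
        efficacy += 0.05 * (r - efficacy)  # slow-moving success estimate
        successes += r
    return successes / n_trials

print("symmetric learning  :", run(ignore_errors=False))
print("errors down-weighted:", run(ignore_errors=True))
```

Under these assumptions, damping negative errors keeps early failures from suppressing the value of actions that are still being mastered, which is one plausible reading of how overconfidence could sustain practice of a new skill.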