Implicit Concept Formation

Z. Dienes
DOI: 10.4324/9781315791227-3 (https://doi.org/10.4324/9781315791227-3)
Journal: Implicit Learning
Citations: 1

Abstract

This thesis provides a conceptual and empirical analysis of implicit concept formation. A review of concept formation studies highlights the need for improving existing methodology in establishing the claim for implicit concept formation. Eight experiments are reported that address this aim. A review of theoretical issues highlights the need for computational modelling to elucidate the nature of implicit learning. Two chapters address the feasibility of different exemplar and Connectionist models in accounting for how subjects perform on tasks typically employed in the implicit learning literature. The first five experiments use a concept formation task that involves classifying "computer people" as belonging to a particular town or income category. A number of manipulations are made of the underlying rule to be learned and of the cover task given subjects. In all cases, the knowledge underlying classification performance can be elicited both by free recall and by forced choice tasks. The final three experiments employ Reber's (e.g., 1989) grammar learning paradigm. More rigorous methods for eliciting the knowledge underlying classification performance are employed than have been used previously by Reber. The knowledge underlying classification performance is not elicited by free recall, but is elicited by a forced-choice measure. The robustness of the learning in this paradigm is investigated by using a secondary task methodology. Concurrent random number generation interferes with all knowledge measures. A number of parameter-free Connectionist and exemplar models of artificial grammar learning are tested against the experimental data. The importance of different assumptions regarding the coding of features and the learning rule used is investigated by determining the performance of the model with and without each assumption. Only one class of Connectionist model passes all the tests. Further, this class of model can simulate subject performance in a different task domain. The relevance of these empirical and theoretical results for understanding implicit learning is discussed, and suggestions are made for future research.
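In the artificial grammar learning paradigm the abstract refers to, training strings are produced by walking a finite-state grammar. The sketch below illustrates the general idea with a hypothetical Reber-style transition table; the specific grammar, letters, and string lengths used in the thesis are not given in the abstract, so everything here is illustrative only.

```python
import random

# A hypothetical finite-state grammar in the style of Reber's paradigm.
# Each state maps to (emitted letter, next state) transitions; a next
# state of None marks the accepting exit. This table is an illustrative
# example, not the grammar actually used in the thesis.
GRAMMAR = {
    0: [("T", 1), ("V", 2)],
    1: [("P", 1), ("T", 3)],
    2: [("X", 2), ("V", 3)],
    3: [("X", 4), ("S", None)],
    4: [("P", 2), ("S", None)],
}

def generate_string(rng, max_len=12):
    """Walk the grammar from the start state, emitting one letter per
    transition, until the grammar exits or the length cap is reached."""
    state, letters = 0, []
    while state is not None and len(letters) < max_len:
        letter, state = rng.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

rng = random.Random(0)
strings = [generate_string(rng) for _ in range(5)]
print(strings)
```

Grammatical strings from such a generator would serve as training items; test items then mix novel grammatical strings with non-grammatical foils, and classification accuracy above chance is taken as evidence of learning.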