A Systematic Quantitative Review of Divergent Thinking Assessments

Pub Date: 2023-11-07 | DOI: 10.31219/osf.io/eaqbt
Janika Saretzki, Boris Forthmann, Mathias Benedek
{"title":"A Systematic Quantitative Review of Divergent Thinking Assessments","authors":"Janika Saretzki, Boris Forthmann, Mathias Benedek","doi":"10.31219/osf.io/eaqbt","DOIUrl":null,"url":null,"abstract":"Divergent thinking (DT) tasks are among the most established approaches to assess creative potential. Although DT assessments are widely used, there exist many variants on how DT tasks can be administered and scored. We present findings from a preregistered, systematic review of DT assessment methods aiming to determine the prevalence of various DT assessment conditions as well as to identify recent trends in the field. We searched two electronic databases for studies that have investigated creativity with DT. We then screened a total of 2066 publications published between 1957 and 2022 and identified 451 eligible studies within 396 articles. The employed data coding system discerned more than 110 conditions and options that establish the specific administration and scoring of DT tasks. Amongst others, we found that the Alternate Uses Task is the most used DT task, task time is commonly set between two and three minutes, and responses are often scored by human raters. While traditional task instructions emphasized idea fluency, more recent studies often encourage creativity and originality. Trends in scoring include the assessment of response quality (i.e., originality/creativity) and response aggregation methods that account for the confounding effect of fluency (e.g., average or subset scoring such as top- and max-scoring) and generally an increasing instruction-scoring-fit. Overall, numerous studies lacked information regarding the precise procedures for both administration and scoring. In sum, the review identifies established practices and trends but also highlights substantial heterogeneity and underreporting in DT assessments that poses a risk to reproducibility in creativity research.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31219/osf.io/eaqbt","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Divergent thinking (DT) tasks are among the most established approaches to assessing creative potential. Although DT assessments are widely used, there are many variants of how DT tasks can be administered and scored. We present findings from a preregistered, systematic review of DT assessment methods that aimed to determine the prevalence of various DT assessment conditions and to identify recent trends in the field. We searched two electronic databases for studies that investigated creativity with DT, screened a total of 2066 publications published between 1957 and 2022, and identified 451 eligible studies within 396 articles. The data coding system we employed discerned more than 110 conditions and options that define the specific administration and scoring of DT tasks. Among other findings, the Alternate Uses Task is the most used DT task, task time is commonly set between two and three minutes, and responses are most often scored by human raters. Whereas traditional task instructions emphasized idea fluency, more recent studies often encourage creativity and originality. Trends in scoring include the assessment of response quality (i.e., originality/creativity), response aggregation methods that account for the confounding effect of fluency (e.g., average scoring, or subset scoring such as top- and max-scoring), and a generally increasing instruction-scoring fit. Overall, numerous studies lacked information on the precise procedures for both administration and scoring. In sum, the review identifies established practices and trends but also highlights substantial heterogeneity and underreporting in DT assessments, which pose a risk to reproducibility in creativity research.
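To make the aggregation methods mentioned above concrete, here is a minimal Python sketch, not taken from the reviewed article: the ratings, participant names, and the k=2 cutoff are hypothetical, and the rating scale is assumed to be a 1-5 originality rating assigned by human raters. It contrasts fluency with average, max-, and top-k scoring of one participant's responses.

```python
# Illustrative sketch of common DT response aggregation methods.
# Ratings are hypothetical originality scores (assumed 1-5 scale)
# assigned by human raters to a participant's Alternate Uses Task
# responses.

def fluency(ratings):
    """Fluency: the number of responses generated."""
    return len(ratings)

def average_score(ratings):
    """Average scoring: mean quality, unconfounded by fluency."""
    return sum(ratings) / len(ratings)

def max_score(ratings):
    """Max scoring: quality of the single best response."""
    return max(ratings)

def top_k_score(ratings, k=2):
    """Top-k (subset) scoring: mean of the k best responses."""
    best = sorted(ratings, reverse=True)[:k]
    return sum(best) / len(best)

# Hypothetical data: two participants with different fluency.
prolific = [2, 2, 3, 2, 5, 2, 2]   # many ideas, one highly original
selective = [4, 5]                 # few ideas, consistently original

for name, r in [("prolific", prolific), ("selective", selective)]:
    print(name, fluency(r), round(average_score(r), 2),
          max_score(r), top_k_score(r))
```

A simple sum of ratings would favor the prolific participant (18 vs. 9) merely for producing more ideas; average, max-, and top-scoring remove that fluency confound, which is what the review means by aggregation methods that account for the confounding effect of fluency.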