{"title":"A Systematic Quantitative Review of Divergent Thinking Assessments","authors":"Janika Saretzki, Boris Forthmann, Mathias Benedek","doi":"10.31219/osf.io/eaqbt","DOIUrl":null,"url":null,"abstract":"Divergent thinking (DT) tasks are among the most established approaches to assess creative potential. Although DT assessments are widely used, there exist many variants on how DT tasks can be administered and scored. We present findings from a preregistered, systematic review of DT assessment methods aiming to determine the prevalence of various DT assessment conditions as well as to identify recent trends in the field. We searched two electronic databases for studies that have investigated creativity with DT. We then screened a total of 2066 publications published between 1957 and 2022 and identified 451 eligible studies within 396 articles. The employed data coding system discerned more than 110 conditions and options that establish the specific administration and scoring of DT tasks. Amongst others, we found that the Alternate Uses Task is the most used DT task, task time is commonly set between two and three minutes, and responses are often scored by human raters. While traditional task instructions emphasized idea fluency, more recent studies often encourage creativity and originality. Trends in scoring include the assessment of response quality (i.e., originality/creativity) and response aggregation methods that account for the confounding effect of fluency (e.g., average or subset scoring such as top- and max-scoring) and generally an increasing instruction-scoring-fit. Overall, numerous studies lacked information regarding the precise procedures for both administration and scoring. In sum, the review identifies established practices and trends but also highlights substantial heterogeneity and underreporting in DT assessments that poses a risk to reproducibility in creativity research.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31219/osf.io/eaqbt","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Divergent thinking (DT) tasks are among the most established approaches to assessing creative potential. Although DT assessments are widely used, there are many variants of how DT tasks can be administered and scored. We present findings from a preregistered, systematic review of DT assessment methods that aimed to determine the prevalence of various DT assessment conditions and to identify recent trends in the field. We searched two electronic databases for studies that investigated creativity with DT, screened a total of 2066 publications published between 1957 and 2022, and identified 451 eligible studies within 396 articles. The data coding system we employed distinguished more than 110 conditions and options that define the specific administration and scoring of DT tasks. Among other findings, the Alternate Uses Task is the most widely used DT task, task time is commonly set between two and three minutes, and responses are often scored by human raters. While traditional task instructions emphasized idea fluency, more recent studies often encourage creativity and originality. Trends in scoring include the assessment of response quality (i.e., originality/creativity), response aggregation methods that account for the confounding effect of fluency (e.g., average scoring, or subset scoring such as top- and max-scoring), and a generally increasing instruction-scoring fit. Overall, numerous studies lacked information on the precise procedures for both administration and scoring. In sum, the review identifies established practices and trends but also highlights substantial heterogeneity and underreporting in DT assessments, which pose a risk to reproducibility in creativity research.
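To make the fluency confound concrete, here is a minimal sketch (not from the paper; the function names, the 1-5 rating scale, and the example data are illustrative assumptions) of how per-response originality ratings might be aggregated into person-level DT scores using the averaging and subset methods the abstract mentions. A summed score rises mechanically with the number of responses a participant generates, whereas average, top-k, and max scoring do not.

```python
# Illustrative sketch, not the authors' method: aggregating human-rated
# originality scores (assumed 1-5 per response) for one participant.
from statistics import mean

def fluency(ratings: list[float]) -> int:
    """Number of responses generated (the classic fluency score)."""
    return len(ratings)

def sum_score(ratings: list[float]) -> float:
    """Summed originality; confounded with fluency, since every extra
    response raises the total regardless of its quality."""
    return sum(ratings)

def average_score(ratings: list[float]) -> float:
    """Mean originality per response; controls for fluency."""
    return mean(ratings)

def top_k_score(ratings: list[float], k: int = 2) -> float:
    """Subset ('top') scoring: mean of the k best-rated responses."""
    return mean(sorted(ratings, reverse=True)[:k])

def max_score(ratings: list[float]) -> float:
    """Max scoring: the single most original response."""
    return max(ratings)

# Hypothetical ratings for one participant on an Alternate Uses Task.
ratings = [1.0, 3.0, 2.0, 4.5, 2.5]
print(fluency(ratings))        # 5
print(sum_score(ratings))      # 13.0
print(average_score(ratings))  # 2.6
print(top_k_score(ratings))    # 3.75 (mean of 4.5 and 3.0)
print(max_score(ratings))      # 4.5
```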