The reliability and validity of problem generation tests: A meta-analysis with implications for problem finding and creativity
Ahmed M. Abdulla Alabbasi, Mark A. Runco, Selcuk Acar, Haitham Jahrami
International Journal of Educational Research Open, Volume 9 (2025), Article 100472. Published 2025-04-21. DOI: 10.1016/j.ijedro.2025.100472
https://www.sciencedirect.com/science/article/pii/S2666374025000378
Citations: 0
Abstract
Problem finding (PF) is a central part of the creative process, which makes its careful measurement necessary. Paper-and-pencil Problem Generation (PG) tests were developed nearly 40 years ago in an attempt to assess the potential for PF, yet the reliability and validity of these instruments have not been statistically examined with meta-analytic methods. This meta-analysis drew on 19 previous empirical investigations of PG tests to examine internal reliability (k = 43, N = 2029), convergent validity (k = 125, N = 2573), and discriminant validity (k = 26, N = 2145). Analyses showed that the overall random-effects weighted mean reliability was α = 0.816, t(42) = 26.419, p < .001 (95% CI: 0.786, 0.842), indicating good internal consistency. A second analysis produced a random-effects weighted mean validity coefficient, which indicated moderate agreement between PG tests and other creativity measures, such as divergent thinking and creative achievement (r = 0.463, t(124) = 13.994, p < .001; 95% CI: 0.406, 0.518). The mean correlation among the scores produced by PG tests (i.e., fluency, flexibility, and originality) was 0.590, t(25) = 4.714, p < .001 (95% CI: 0.364, 0.750), which indicates a moderate level of discriminant validity among the indices. Data were heterogeneous in all three analyses, but moderator analyses did not explain significant amounts of variation. This is the first study to examine the reliability and validity of PG tests using meta-analytic methods. These analyses showed that PG tests are, in several ways, valid and reliable tools for assessing PF ability.
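To illustrate how a random-effects weighted mean correlation of the kind reported above can be computed, the sketch below pools Fisher-z-transformed correlations using DerSimonian-Laird estimation of the between-study variance. This is a minimal, generic example with made-up study values; it is not the authors' actual procedure, software, or data, and the function name and inputs are hypothetical.

```python
import numpy as np

def random_effects_mean_r(r, n):
    """Random-effects weighted mean correlation via Fisher-z transform and
    DerSimonian-Laird tau^2. A generic sketch, not the study's own code."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                 # Fisher z-transform of each correlation
    v = 1.0 / (n - 3)                 # within-study sampling variance of z
    w = 1.0 / v                       # fixed-effect (inverse-variance) weights
    z_fe = np.sum(w * z) / np.sum(w)  # fixed-effect pooled estimate
    q = np.sum(w * (z - z_fe) ** 2)   # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)  # between-study variance estimate
    w_re = 1.0 / (v + tau2)           # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))  # standard error of the pooled z
    lo, hi = z_re - 1.96 * se, z_re + 1.96 * se
    # Back-transform the pooled estimate and its 95% CI to the r metric
    return np.tanh(z_re), (np.tanh(lo), np.tanh(hi)), q, tau2

# Toy usage: three hypothetical study correlations and sample sizes
mean_r, ci, q, tau2 = random_effects_mean_r([0.40, 0.52, 0.45], [80, 120, 60])
print(mean_r, ci, q, tau2)
```

The Fisher-z transform is used because correlations are bounded and skewed near their limits; pooling in the z metric and back-transforming with tanh yields a better-behaved weighted mean and confidence interval than averaging raw r values directly.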