Tanja Bipp, Serena Wee, Marvin Walczok, Laura Hansal
{"title":"游戏相关评估与传统认知能力测量的关系——meta分析","authors":"Tanja Bipp, Serena Wee, Marvin Walczok, Laura Hansal","doi":"10.3390/jintelligence12120129","DOIUrl":null,"url":null,"abstract":"<p><p>Technological advances have introduced new methods for assessing psychological constructs, moving beyond traditional paper-pencil tests. Game-related assessments (GRAs) offer several advantages for research and practice, though questions about their construct validity persist. This meta-analysis investigated the relationship between indicators derived from computer-based games and traditional cognitive ability measures, examining whether measurement scope (single vs. multiple indicators) or measurement medium of cognitive ability (computer-based vs. paper-pencil) influences this relationship. We identified 52 eligible samples stemming from 44 papers, including data from over 6100 adult participants. The results from three-stage mixed-effects meta-analyses showed an overall observed correlation of <i>r</i> = 0.30 (<i>p</i> < 0.001; corrected <i>r</i> = 0.45) between GRA indicators and traditional cognitive ability measures with substantial heterogeneity in effect sizes. Stronger relationships were found when cognitive ability was measured by multiple indicators, but no differences emerged based on the measurement medium of cognitive ability. Furthermore, GRAs intended to assess cognitive ability did not show stronger relationships with traditional measures of cognitive ability than GRAs not specifically used to measure cognitive ability. Overall, our findings suggest that GRAs are related to traditional cognitive ability measures. 
However, the overall effect size raises questions about whether GRAs and traditional measures capture the same aspects of cognitive ability or if GRAs also measure other constructs beyond cognitive ability.</p>","PeriodicalId":52279,"journal":{"name":"Journal of Intelligence","volume":"12 12","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676581/pdf/","citationCount":"0","resultStr":"{\"title\":\"The Relationship Between Game-Related Assessment and Traditional Measures of Cognitive Ability-A Meta-Analysis.\",\"authors\":\"Tanja Bipp, Serena Wee, Marvin Walczok, Laura Hansal\",\"doi\":\"10.3390/jintelligence12120129\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Technological advances have introduced new methods for assessing psychological constructs, moving beyond traditional paper-pencil tests. Game-related assessments (GRAs) offer several advantages for research and practice, though questions about their construct validity persist. This meta-analysis investigated the relationship between indicators derived from computer-based games and traditional cognitive ability measures, examining whether measurement scope (single vs. multiple indicators) or measurement medium of cognitive ability (computer-based vs. paper-pencil) influences this relationship. We identified 52 eligible samples stemming from 44 papers, including data from over 6100 adult participants. The results from three-stage mixed-effects meta-analyses showed an overall observed correlation of <i>r</i> = 0.30 (<i>p</i> < 0.001; corrected <i>r</i> = 0.45) between GRA indicators and traditional cognitive ability measures with substantial heterogeneity in effect sizes. Stronger relationships were found when cognitive ability was measured by multiple indicators, but no differences emerged based on the measurement medium of cognitive ability. 
Furthermore, GRAs intended to assess cognitive ability did not show stronger relationships with traditional measures of cognitive ability than GRAs not specifically used to measure cognitive ability. Overall, our findings suggest that GRAs are related to traditional cognitive ability measures. However, the overall effect size raises questions about whether GRAs and traditional measures capture the same aspects of cognitive ability or if GRAs also measure other constructs beyond cognitive ability.</p>\",\"PeriodicalId\":52279,\"journal\":{\"name\":\"Journal of Intelligence\",\"volume\":\"12 12\",\"pages\":\"\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-12-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676581/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Intelligence\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.3390/jintelligence12120129\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligence","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3390/jintelligence12120129","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
The Relationship Between Game-Related Assessment and Traditional Measures of Cognitive Ability-A Meta-Analysis.
Technological advances have introduced new methods for assessing psychological constructs, moving beyond traditional paper-pencil tests. Game-related assessments (GRAs) offer several advantages for research and practice, though questions about their construct validity persist. This meta-analysis investigated the relationship between indicators derived from computer-based games and traditional cognitive ability measures, examining whether measurement scope (single vs. multiple indicators) or measurement medium of cognitive ability (computer-based vs. paper-pencil) influences this relationship. We identified 52 eligible samples stemming from 44 papers, including data from over 6100 adult participants. The results from three-stage mixed-effects meta-analyses showed an overall observed correlation of r = 0.30 (p < 0.001; corrected r = 0.45) between GRA indicators and traditional cognitive ability measures with substantial heterogeneity in effect sizes. Stronger relationships were found when cognitive ability was measured by multiple indicators, but no differences emerged based on the measurement medium of cognitive ability. Furthermore, GRAs intended to assess cognitive ability did not show stronger relationships with traditional measures of cognitive ability than GRAs not specifically used to measure cognitive ability. Overall, our findings suggest that GRAs are related to traditional cognitive ability measures. However, the overall effect size raises questions about whether GRAs and traditional measures capture the same aspects of cognitive ability or if GRAs also measure other constructs beyond cognitive ability.
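To illustrate the mechanics behind the reported figures, the sketch below shows two standard steps in a correlation meta-analysis: pooling observed correlations via Fisher's r-to-z transform, and correcting the pooled estimate for measurement unreliability (the classical attenuation formula, one common way an observed r = 0.30 can become a corrected r = 0.45). This is a simplified fixed-effect illustration with made-up study data and assumed reliabilities, not the authors' three-stage mixed-effects model or their actual samples.

```python
import math

def pool_correlations(pairs):
    """Inverse-variance pooling of correlations on the Fisher-z scale.

    pairs: list of (r, n) tuples, one per study.
    Returns the pooled correlation back-transformed to the r metric.
    """
    num = den = 0.0
    for r, n in pairs:
        z = math.atanh(r)   # Fisher r-to-z transform
        w = n - 3           # inverse of the sampling variance of z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform to r

def disattenuate(r, rxx, ryy):
    """Correct an observed correlation for unreliability in both
    measures (classical attenuation formula): r / sqrt(rxx * ryy)."""
    return r / math.sqrt(rxx * ryy)

# Hypothetical example: three studies (NOT data from the meta-analysis)
studies = [(0.25, 120), (0.35, 200), (0.30, 150)]
r_obs = pool_correlations(studies)

# Assumed reliabilities of 0.80 for both the GRA indicator and the
# traditional cognitive ability measure (illustrative values only)
r_corrected = disattenuate(r_obs, 0.80, 0.80)
```

The mixed-effects models used in the paper additionally estimate between-study variance (the heterogeneity the abstract reports), which a fixed-effect pooler like this ignores; packages such as R's metafor implement the full multilevel machinery.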