Analyzing-Evaluating-Creating: Assessing Computational Thinking and Problem Solving in Visual Programming Domains

Ahana Ghosh, Liina Malva, A. Singla
{"title":"Analyzing-Evaluating-Creating: Assessing Computational Thinking and Problem Solving in Visual Programming Domains","authors":"Ahana Ghosh, Liina Malva, A. Singla","doi":"10.1145/3626252.3630778","DOIUrl":null,"url":null,"abstract":"Computational thinking (CT) and problem-solving skills are increasingly integrated into K-8 school curricula worldwide. Consequently, there is a growing need to develop reliable assessments for measuring students' proficiency in these skills. Recent works have proposed tests for assessing these skills across various CT concepts and practices, in particular, based on multi-choice items enabling psychometric validation and usage in large-scale studies. Despite their practical relevance, these tests are limited in how they measure students' computational creativity, a crucial ability when applying CT and problem solving in real-world settings. In our work, we have developed ACE, a novel test focusing on the three higher cognitive levels in Bloom's Taxonomy, i.e., Analyze, Evaluate, and Create. ACE comprises a diverse set of 7x3 multi-choice items spanning these three levels, grounded in elementary block-based visual programming. We evaluate the psychometric properties of ACE through a study conducted with 371 students in grades 3-7 from 10 schools. Based on several psychometric analysis frameworks, our results confirm the reliability and validity of ACE. Our study also shows a positive correlation between students' performance on ACE and performance on Hour of Code: Maze Challenge by Code.org.","PeriodicalId":517851,"journal":{"name":"Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1","volume":"48 22","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3626252.3630778","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Computational thinking (CT) and problem-solving skills are increasingly integrated into K-8 school curricula worldwide. Consequently, there is a growing need to develop reliable assessments for measuring students' proficiency in these skills. Recent works have proposed tests for assessing these skills across various CT concepts and practices, in particular, based on multi-choice items enabling psychometric validation and usage in large-scale studies. Despite their practical relevance, these tests are limited in how they measure students' computational creativity, a crucial ability when applying CT and problem solving in real-world settings. In our work, we have developed ACE, a novel test focusing on the three higher cognitive levels in Bloom's Taxonomy, i.e., Analyze, Evaluate, and Create. ACE comprises a diverse set of 7x3 multi-choice items spanning these three levels, grounded in elementary block-based visual programming. We evaluate the psychometric properties of ACE through a study conducted with 371 students in grades 3-7 from 10 schools. Based on several psychometric analysis frameworks, our results confirm the reliability and validity of ACE. Our study also shows a positive correlation between students' performance on ACE and performance on Hour of Code: Maze Challenge by Code.org.
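The paper reports reliability/validity analyses for ACE and a positive correlation with performance on the Hour of Code: Maze Challenge, but no analysis code is included on this page. The following Python sketch is only an illustration, under assumptions, of how such checks are commonly computed: Cronbach's alpha for internal consistency of the 21 (7x3) items, and a Pearson correlation between total scores on the two assessments. All variable names and the synthetic data are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: not the authors' analysis code.
# Assumes item-level scores are available as a (students x items) 0/1 matrix.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) score matrix."""
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances / total_variance)

# Hypothetical synthetic data: 371 students x 21 ACE items, scored 0/1,
# plus hypothetical Hour of Code scores. Resulting numbers are meaningless;
# they only demonstrate the computation.
rng = np.random.default_rng(0)
ace_items = (rng.random((371, 21)) < 0.6).astype(float)
ace_total = ace_items.sum(axis=1)
hoc_score = 0.4 * ace_total + rng.normal(0, 2, size=371)

print("Cronbach's alpha:", round(cronbach_alpha(ace_items), 3))
r, p = pearsonr(ace_total, hoc_score)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```

With the study's actual item-level responses in place of the synthetic matrix, the same two functions would produce the kind of reliability and correlation statistics the abstract refers to; the specific estimators used in the paper may differ.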