LevelUp – Automatic Assessment of Block-Based Machine Learning Projects for AI Education

Tejal Reddy, Randi Williams, C. Breazeal
{"title":"LevelUp – Automatic Assessment of Block-Based Machine Learning Projects for AI Education","authors":"Tejal Reddy, Randi Williams, C. Breazeal","doi":"10.1109/vl/hcc53370.2022.9833130","DOIUrl":null,"url":null,"abstract":"—Although artificial intelligence (AI) is increasingly involved in everyday technologies, AI literacy amongst the general public remains low. Thus many AI education curricula for people without prior AI experience have emerged, often utilizing graphical programming languages for hands-on projects. However, there are no tools that assist educators in evaluating learners’ AI projects or provide learners with contemporaneous feedback on their work. We developed LevelUp, an automatic code analysis tool to support these educators and learners. LevelUp is built into a block-based programming platform and gives users continuous feedback on their text classification projects. We evaluated the tool with a crossover user study where participants completed two text classification projects, once where they could access LevelUp and once when they could not. To measure the tool’s impact on participants’ understanding of text classification, we used pre-post assessments and graded both of their projects against LevelUp’s rubric. We saw a significant improvement in the quality of participants’ projects after they used the tool. We also used questionnaires to solicit participants’ feedback. Overall, participants said that LevelUp was useful and intuitive. 
Our investigation of this novel automatic assessment tool can inform the design of future code analysis tools for AI education.","PeriodicalId":351709,"journal":{"name":"2022 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/vl/hcc53370.2022.9833130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Although artificial intelligence (AI) is increasingly involved in everyday technologies, AI literacy amongst the general public remains low. Thus, many AI education curricula for people without prior AI experience have emerged, often utilizing graphical programming languages for hands-on projects. However, there are no tools that assist educators in evaluating learners' AI projects or that provide learners with contemporaneous feedback on their work. We developed LevelUp, an automatic code analysis tool to support these educators and learners. LevelUp is built into a block-based programming platform and gives users continuous feedback on their text classification projects. We evaluated the tool with a crossover user study in which participants completed two text classification projects, once with access to LevelUp and once without. To measure the tool's impact on participants' understanding of text classification, we used pre-post assessments and graded both of their projects against LevelUp's rubric. We saw a significant improvement in the quality of participants' projects after they used the tool. We also used questionnaires to solicit participants' feedback. Overall, participants said that LevelUp was useful and intuitive. Our investigation of this novel automatic assessment tool can inform the design of future code analysis tools for AI education.
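The abstract describes rubric-based continuous feedback on learners' text classification projects. As a rough illustration of how such automated checks might work, the sketch below scores a project's training data against a few plausible rubric items (number of classes, examples per class, class balance). The rubric criteria, thresholds, and the `assess_project` function are illustrative assumptions, not LevelUp's actual rubric.

```python
# Hypothetical sketch of rubric-based feedback for a text-classification
# project, in the spirit of the continuous feedback described in the
# abstract. Rubric items and thresholds are illustrative assumptions.

def assess_project(labels_to_examples, min_labels=2, min_examples=5):
    """Return feedback strings for a project, where labels_to_examples
    maps each class label to its list of training texts."""
    feedback = []

    # Rubric item 1: a classifier needs more than one class.
    if len(labels_to_examples) < min_labels:
        feedback.append(
            f"Add more labels: a classifier needs at least {min_labels} classes."
        )

    # Rubric item 2: each class should have enough training examples.
    for label, examples in labels_to_examples.items():
        if len(examples) < min_examples:
            feedback.append(
                f"Label '{label}' has only {len(examples)} examples; "
                f"try adding at least {min_examples}."
            )

    # Rubric item 3: flag heavily unbalanced classes.
    sizes = [len(v) for v in labels_to_examples.values()]
    if sizes and max(sizes) > 3 * max(min(sizes), 1):
        feedback.append(
            "Your classes are unbalanced; add examples to the smaller ones."
        )

    return feedback

# Example: a project with one well-populated and one sparse class.
project = {"happy": ["great day", "so fun"], "sad": ["bad news"] * 6}
for message in assess_project(project):
    print(message)
```

A real tool embedded in a block-based platform would presumably walk the project's block structure as well (e.g., checking whether the classifier is actually tested), but the shape of the feedback loop would be similar: re-run lightweight checks on every edit and surface the messages in the editor.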