Reliability and Validity of an Automated Model for Assessing the Learning of Machine Learning in Middle and High School: Experiences from the "ML for All!" Course

IF 2.1 Q1 EDUCATION & EDUCATIONAL RESEARCH
Marcelo Fernando Rauber, Christiane Gresse von Wangenheim, Pedro Alberto Barbetta, Adriano Ferreti Borgatto, Ramon Mayor Martins, Jean Carlo Rossa Hauck
DOI: 10.15388/infedu.2024.10
Published: 2023-09-27 (Journal Article), Informatics in Education
Citations: 0

Abstract

The insertion of Machine Learning (ML) in everyday life demonstrates the importance of popularizing an understanding of ML already in school. Accompanying this trend arises the need to assess the students’ learning. Yet, so far, few assessments have been proposed, most lacking an evaluation. Therefore, we evaluate the reliability and validity of an automated assessment of the students’ learning of an image classification model created as a learning outcome of the “ML for All!” course. Results based on data collected from 240 students indicate that the assessment can be considered reliable (coefficient Omega = 0.834/Cronbach's alpha α=0.83). We also identified moderate to strong convergent and discriminant validity based on the polychoric correlation matrix. Factor analyses indicate two underlying factors, “Data Management and Model Training” and “Performance Interpretation”, complementing each other. These results can guide the improvement of assessments, as well as the decision on the application of this model in order to support ML education as part of a comprehensive assessment.
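The reliability figures reported in the abstract (coefficient Omega = 0.834, Cronbach's alpha = 0.83) are internal-consistency estimates computed over the rubric's item scores. As an illustration only — the paper's actual rubric items and student data are not reproduced here, so the scores below are hypothetical — Cronbach's alpha can be sketched from a table of per-student item scores:

```python
# Illustrative sketch: Cronbach's alpha for a set of rubric items.
# NOTE: the item scores used with this function are hypothetical;
# the paper's actual rubric and student data are not reproduced here.

def cronbach_alpha(scores):
    """scores: list of rows, one per student; each row holds that
    student's item scores (all rows the same length)."""
    n_items = len(scores[0])

    def sample_var(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item across students.
    item_vars = [sample_var([row[j] for row in scores])
                 for j in range(n_items)]
    # Variance of each student's total score.
    total_var = sample_var([sum(row) for row in scores])

    # alpha = k/(k-1) * (1 - sum of item variances / total variance)
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)
```

For example, four students scored on three items as `[[2, 3, 2], [4, 4, 5], [1, 2, 1], [5, 5, 4]]` yield an alpha of about 0.96; values above roughly 0.7–0.8 are conventionally read as acceptable internal consistency, in line with the α = 0.83 the study reports.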
Source journal: Informatics in Education (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 6.10
Self-citation rate: 3.70%
Articles per year: 20
Review time: 20 weeks
Journal description: INFORMATICS IN EDUCATION publishes original articles about theoretical, experimental and methodological studies in the fields of informatics (computer science) education and educational applications of information technology, ranging from primary to tertiary education. Multidisciplinary research studies that enhance our understanding of how theoretical and technological innovations translate into educational practice are most welcome. We are particularly interested in work at boundaries, both the boundaries of informatics and of education. The topics covered by INFORMATICS IN EDUCATION range across diverse aspects of informatics (computer science) education research, including:
- empirical studies, including composing different approaches to teach various subjects, studying availability of various concepts at a given age, measuring knowledge transfer and skills developed, addressing gender issues, etc.
- statistical research on big data related to informatics (computer science) activities, including e.g. research on assessment, online teaching, competitions, etc.
- educational engineering, focusing mainly on developing high-quality original teaching sequences of different informatics (computer science) topics that offer new, successful ways for knowledge transfer and development of computational thinking
- machine learning of students' behavior, including the use of information technology to observe students in the learning process and discovering clusters of their working
- design and evaluation of educational tools that apply information technology in novel ways.