Reliability and Validity of an Automated Model for Assessing the Learning of Machine Learning in Middle and High School: Experiences from the “ML for All!” course
Marcelo Fernando Rauber, Christiane Gresse von Wangenheim, Pedro Alberto Barbetta, Adriano Ferreti Borgatto, Ramon Mayor Martins, Jean Carlo Rossa Hauck
{"title":"Reliability and Validity of an Automated Model for Assessing the Learning of Machine Learning in Middle and High School: Experiences from the “ML for All!” course","authors":"Marcelo Fernando Rauber, Christiane Gresse von Wangenheim, Pedro Alberto Barbetta, Adriano Ferreti Borgatto, Ramon Mayor Martins, Jean Carlo Rossa Hauck","doi":"10.15388/infedu.2024.10","DOIUrl":null,"url":null,"abstract":"The insertion of Machine Learning (ML) in everyday life demonstrates the importance of popularizing an understanding of ML already in school. Accompanying this trend arises the need to assess the students’ learning. Yet, so far, few assessments have been proposed, most lacking an evaluation. Therefore, we evaluate the reliability and validity of an automated assessment of the students’ learning of an image classification model created as a learning outcome of the “ML for All!” course. Results based on data collected from 240 students indicate that the assessment can be considered reliable (coefficient Omega = 0.834/Cronbach's alpha α=0.83). We also identified moderate to strong convergent and discriminant validity based on the polychoric correlation matrix. Factor analyses indicate two underlying factors “Data Management and Model Training” and “Performance Interpretation”, completing each other. These results can guide the improvement of assessments, as well as the decision on the application of this model in order to support ML education as part of a comprehensive assessment.","PeriodicalId":45270,"journal":{"name":"Informatics in Education","volume":"80 1","pages":"0"},"PeriodicalIF":2.1000,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Informatics in Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15388/infedu.2024.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Abstract
The growing presence of Machine Learning (ML) in everyday life demonstrates the importance of popularizing an understanding of ML already in school. Accompanying this trend arises the need to assess students’ learning. Yet, so far, few assessments have been proposed, and most lack an evaluation of their quality. Therefore, we evaluate the reliability and validity of an automated assessment of students’ learning applied to the image classification model they create as a learning outcome of the “ML for All!” course. Results based on data collected from 240 students indicate that the assessment can be considered reliable (omega coefficient ω = 0.834; Cronbach's alpha α = 0.83). We also identified moderate to strong convergent and discriminant validity based on the polychoric correlation matrix. Factor analyses indicate two underlying factors, “Data Management and Model Training” and “Performance Interpretation”, which complement each other. These results can guide the improvement of assessments, as well as the decision on applying this model to support ML education as part of a comprehensive assessment.
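As a point of reference for the reliability figures quoted above, the sketch below shows how Cronbach's alpha and a single-factor omega coefficient can be computed from a matrix of per-criterion rubric scores. This is a minimal illustration, not the authors' analysis code; the score matrix, number of rubric items, and factor loadings are made-up assumptions.

```python
# Minimal sketch (not the authors' code): Cronbach's alpha and a one-factor
# omega coefficient from a (students x rubric items) score matrix.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: shape (n_students, n_items), one rubric score per item."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def omega_total(loadings: np.ndarray, uniquenesses: np.ndarray) -> float:
    """McDonald's omega for a single-factor model (standardized items)."""
    common = loadings.sum() ** 2
    return common / (common + uniquenesses.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical rubric scores for 240 students on 8 items (scale 0-2),
    # generated from a common latent trait so the items correlate.
    ability = rng.normal(size=(240, 1))
    noise = rng.normal(scale=0.8, size=(240, 8))
    scores = np.digitize(ability + noise, bins=[-0.5, 0.5]).astype(float)
    print(f"alpha = {cronbach_alpha(scores):.3f}")

    # Hypothetical standardized loadings from a one-factor fit.
    loadings = np.array([0.70, 0.60, 0.80, 0.50, 0.65, 0.70, 0.60, 0.55])
    print(f"omega  = {omega_total(loadings, 1 - loadings**2):.3f}")
```

In practice the loadings and uniquenesses would come from a factor analysis of the polychoric correlation matrix of the rubric scores, as done in the paper; the values above are only placeholders to show the formula.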
Journal description:
INFORMATICS IN EDUCATION publishes original articles about theoretical, experimental and methodological studies in the fields of informatics (computer science) education and educational applications of information technology, ranging from primary to tertiary education. Multidisciplinary research studies that enhance our understanding of how theoretical and technological innovations translate into educational practice are most welcome. We are particularly interested in work at boundaries, both the boundaries of informatics and of education. The topics covered by INFORMATICS IN EDUCATION range across diverse aspects of informatics (computer science) education research, including:
- empirical studies, including composing different approaches to teach various subjects, studying the availability of various concepts at a given age, measuring knowledge transfer and skills developed, addressing gender issues, etc.
- statistical research on big data related to informatics (computer science) activities, including e.g. research on assessment, online teaching, competitions, etc.
- educational engineering, focusing mainly on developing high-quality original teaching sequences for different informatics (computer science) topics that offer new, successful ways for knowledge transfer and the development of computational thinking
- machine learning of students' behavior, including the use of information technology to observe students in the learning process and discover clusters of their working
- design and evaluation of educational tools that apply information technology in novel ways.