An Automatic Grading System for a High School-level Computational Thinking Course

Sirazum Munira Tisha, Rufino A. Oregon, Gerald Baumgartner, Fernando Alegre, Juana Moreno

2022 IEEE/ACM 4th International Workshop on Software Engineering Education for the Next Generation (SEENG), May 2022. DOI: 10.1145/3528231.3528357
Automatic grading systems help lessen the load of manual grading. Most existing autograders are based on unit testing, which focuses on the correctness of code but offers limited scope for judging code quality. Moreover, implementing unit tests to evaluate code with graphical output is cumbersome. We propose an autograder that can effectively judge the code quality of the visual-output programs created by students enrolled in a high school-level computational thinking course. We aim to give teachers suggestions on an essential aspect of their grading, namely the level of student competency in using abstraction in their code. A dataset from five different assignments, including open-ended problems, is used to evaluate the effectiveness of our autograder. Our initial experiments show that our method can classify students' submissions even for open-ended problems, where existing autograders fail to do so. Additionally, survey responses from course teachers support the importance of our work.