{"title":"自动评分和反馈使用程序修复入门程序设计课程","authors":"Sagar Parihar, Ziyaan Dadachanji, P. Singh, Rajdeep Das, Amey Karkare, Arnab Bhattacharya","doi":"10.1145/3059009.3059026","DOIUrl":null,"url":null,"abstract":"We present GradeIT, a system that combines the dual objectives of automated grading and program repairing for introductory programming courses (CS1). Syntax errors pose a significant challenge for testcase-based grading as it is difficult to differentiate between a submission that is almost correct and has some minor syntax errors and another submission that is completely off-the-mark. GradeIT also uses program repair to help in grading submissions that do not compile. This enables running testcases on submissions containing minor syntax errors, thereby awarding partial marks for these submissions (which, without repair, do not compile successfully and, hence, do not pass any testcase). Our experiments on 15613 submissions show that GradeIT results are comparable to manual grading by teaching assistants (TAs), and do not suffer from unintentional variability that happens when multiple TAs grade the same assignment. The repairs performed by GradeIT enabled successful compilation of 56% of the submissions having compilation errors, and resulted in an improvement in marks for 11% of these submissions.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"66","resultStr":"{\"title\":\"Automatic Grading and Feedback using Program Repair for Introductory Programming Courses\",\"authors\":\"Sagar Parihar, Ziyaan Dadachanji, P. Singh, Rajdeep Das, Amey Karkare, Arnab Bhattacharya\",\"doi\":\"10.1145/3059009.3059026\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present GradeIT, a system that combines the dual objectives of automated grading and program repairing for introductory programming courses (CS1). Syntax errors pose a significant challenge for testcase-based grading as it is difficult to differentiate between a submission that is almost correct and has some minor syntax errors and another submission that is completely off-the-mark. GradeIT also uses program repair to help in grading submissions that do not compile. This enables running testcases on submissions containing minor syntax errors, thereby awarding partial marks for these submissions (which, without repair, do not compile successfully and, hence, do not pass any testcase). Our experiments on 15613 submissions show that GradeIT results are comparable to manual grading by teaching assistants (TAs), and do not suffer from unintentional variability that happens when multiple TAs grade the same assignment. 
The repairs performed by GradeIT enabled successful compilation of 56% of the submissions having compilation errors, and resulted in an improvement in marks for 11% of these submissions.\",\"PeriodicalId\":174429,\"journal\":{\"name\":\"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education\",\"volume\":\"115 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"66\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3059009.3059026\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3059009.3059026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automatic Grading and Feedback using Program Repair for Introductory Programming Courses
We present GradeIT, a system that combines the dual objectives of automated grading and program repair for introductory programming (CS1) courses. Syntax errors pose a significant challenge for testcase-based grading because it is difficult to differentiate between a submission that is almost correct apart from minor syntax errors and one that is completely off the mark. GradeIT uses program repair to help grade submissions that do not compile. This makes it possible to run testcases on submissions containing minor syntax errors and to award them partial marks; without repair, such submissions fail to compile and therefore pass no testcases. Our experiments on 15,613 submissions show that GradeIT's results are comparable to manual grading by teaching assistants (TAs) and do not suffer from the unintentional variability that arises when multiple TAs grade the same assignment. The repairs performed by GradeIT enabled successful compilation of 56% of the submissions with compilation errors and improved the marks of 11% of these submissions.
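The abstract describes the overall pipeline (attempt to repair a non-compiling submission, then run testcases and award partial marks) without detailing the repair strategies. The following is a minimal, hypothetical Python sketch of that pipeline only; the repair step shown (appending a missing closing brace) is a placeholder and is not the technique GradeIT actually uses, and the run_testcase callable is assumed to be provided by the grading harness.

    import os
    import subprocess
    import tempfile

    def compiles(source: str) -> bool:
        """Return True if the C source passes a syntax check (assumes gcc is on PATH)."""
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(["gcc", "-fsyntax-only", path], capture_output=True)
            return result.returncode == 0
        finally:
            os.remove(path)

    def try_trivial_repair(source: str) -> str:
        """Hypothetical stand-in for the repair step: try appending a closing brace.
        GradeIT's real repairs are more sophisticated; this only illustrates the idea."""
        for candidate in (source, source + "\n}"):
            if compiles(candidate):
                return candidate
        return source

    def grade(source: str, testcases, run_testcase) -> float:
        """Award the fraction of testcases passed, repairing the submission first if needed."""
        if not compiles(source):
            source = try_trivial_repair(source)
            if not compiles(source):
                return 0.0  # unrepairable: no testcase can be run
        passed = sum(1 for tc in testcases if run_testcase(source, tc))
        return passed / len(testcases)

The key point the sketch captures is the one made in the abstract: once even a minimal repair makes a submission compile, testcase-based partial marking becomes possible, whereas without repair the submission scores zero regardless of how close it is to correct.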