Using Deep Learning to Localize Errors in Student Code Submissions
Shion Fujimori, Mohamed Harmanani, Owais Siddiqui, Lisa Zhang
Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 2, March 3, 2022. DOI: 10.1145/3478432.3499048
We explore RNN and CodeBERT deep learning models that highlight errors in student submissions to Python coding problems. We find that a standard automatic metric like AUC does not correspond well to human evaluation, and that the scale of the benefits of transfer learning and pre-training is only apparent under human evaluation.
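The paper itself does not include code. As a minimal, purely illustrative sketch of the kind of setup the abstract describes, the snippet below fine-tunes nothing and simply wires a pre-trained CodeBERT encoder (microsoft/codebert-base from Hugging Face Transformers) to a token-level classification head that scores each token of a student's Python submission as likely erroneous; the model class, the example submission, and all hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes token-level binary labels (1 = part of the error) for training,
# which is omitted here; the head below is untrained.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")

class ErrorLocalizer(nn.Module):
    """Scores each token of a submission with P(token is part of the error)."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # Contextual embeddings for every code token: (batch, seq_len, hidden)
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden).squeeze(-1)  # (batch, seq_len) logits

model = ErrorLocalizer(encoder)

# A hypothetical buggy student submission to a Python coding problem
# (the bug: "total = x" should be "total += x").
code = ("def add_all(lst):\n"
        "    total = 0\n"
        "    for x in lst:\n"
        "        total = x\n"
        "    return total")
enc = tokenizer(code, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"])
    scores = torch.sigmoid(logits)[0]  # per-token error probabilities

# Highlight the tokens the model is most suspicious of.
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, p in sorted(zip(tokens, scores.tolist()), key=lambda t: -t[1])[:5]:
    print(f"{p:.2f}  {tok}")

# The automatic metric the abstract mentions could then be computed from
# ground-truth token labels, e.g.:
#   from sklearn.metrics import roc_auc_score
#   auc = roc_auc_score(true_labels, scores.tolist())
```

The abstract's central finding would sit on top of a setup like this: the same token-level highlights would also be shown to human raters, and the AUC computed above would be compared against their judgments.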