Janani Tharmaseelan, Kalpani Manatunga, Shyam Reyal, D. Kasthurirathna, Tharsika Thurairasa
Title: Revisit of Automated Marking Techniques for Programming Assignments
Published in: 2021 IEEE Global Engineering Education Conference (EDUCON)
Publication date: 2021-04-21
DOI: 10.1109/EDUCON46332.2021.9453889
Citations: 1
Abstract
Due to the popularity of computer science, many students study programming. With large numbers of students enrolling in undergraduate courses, assessing programming submissions has become an increasingly tedious task that imposes a high cognitive load and demands considerable time and effort. Programming assignments usually contain algorithmic implementations written in specific programming languages to assess students’ logical thinking and problem-solving skills. Evaluators use either a test case-driven or a source code analysis approach when evaluating programming assignments. Given that many marking rubrics and evaluation criteria award partial marks for programs that are not syntactically correct, evaluators must also analyze the source code during evaluation. This extra step places an additional burden on evaluators and consumes more time and effort. Hence, this research work studies existing automatic source code analysis mechanisms, specifically the use of deep learning approaches in the domain of automatic assessment. Such knowledge may lead to novel automated marking models that use past student data and deep learning techniques to assess programming assignments automatically, irrespective of the programming language or the algorithm implemented.
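The test case-driven approach mentioned in the abstract can be illustrated with a minimal sketch: the student's submission is executed against predefined input/output pairs and awarded a score proportional to the number of cases passed. The function names, the example assignment, and the test cases below are all hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of test case-driven marking (illustrative, not the
# paper's implementation): run the submission on each test case and
# award a proportional score.

def mark_submission(student_fn, test_cases):
    """Return the fraction of test cases the submission passes."""
    passed = 0
    for args, expected in test_cases:
        try:
            if student_fn(*args) == expected:
                passed += 1
        except Exception:
            # A crashing submission simply fails this test case.
            pass
    return passed / len(test_cases)

# Hypothetical "sum of squares up to n" assignment.
def student_answer(n):
    return sum(i * i for i in range(1, n + 1))

tests = [((1,), 1), ((2,), 5), ((3,), 14), ((4,), 30)]
score = mark_submission(student_answer, tests)  # 1.0 for a correct answer
```

Note that this scheme only awards marks for observable behavior; as the abstract points out, a submission that does not compile or crashes on every case scores zero here, which is exactly why rubrics granting partial credit force evaluators back to manual source code analysis.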