{"title":"Programmable Questions in Edgar","authors":"I. Mekterović, Ljiljana Brkić, V. Krstic","doi":"10.23919/MIPRO57284.2023.10159897","DOIUrl":null,"url":null,"abstract":"Automated programming assessment systems (APAS) are a valuable tool that is growing in popularity, particularly in the field of computer science education. They can provide quick and objective assessment and feedback to the programming assignments – those that receive source code as a response. Most APASs treat code as a black box and employ dynamic analysis to assess code. Dynamic analysis is straightforward, easily implemented, explainable, and works well in most situations. Edgar is a comprehensive, state of the art APAS, that has been used daily and has evolved for the past six years. This paper examines the pipeline used by Edgar to assess programming questions and presents our enhancements to the traditional dynamic analysis programmable templates and scripts. Templates enable customized question texts based on the programmable model, so that each student can receive personalized variation of the question. Personalized questions are a great way to fight potential academic dishonesty. Scripts are plugged into the assessment pipeline after the dynamic analysis and can override the default grade by examining some other aspect of the program. 
We also offer our thoughts on upcoming plans to include generic static analysis as we move closer to a unified assessment pipeline.","PeriodicalId":177983,"journal":{"name":"2023 46th MIPRO ICT and Electronics Convention (MIPRO)","volume":"46 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 46th MIPRO ICT and Electronics Convention (MIPRO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MIPRO57284.2023.10159897","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Automated programming assessment systems (APAS) are valuable tools that are growing in popularity, particularly in computer science education. They provide quick, objective assessment and feedback on programming assignments – those whose answers are submitted as source code. Most APASs treat the code as a black box and employ dynamic analysis to assess it. Dynamic analysis is straightforward, easy to implement, explainable, and works well in most situations. Edgar is a comprehensive, state-of-the-art APAS that has been in daily use and has evolved over the past six years. This paper examines the pipeline Edgar uses to assess programming questions and presents our enhancements to traditional dynamic analysis: programmable templates and scripts. Templates enable customized question texts based on a programmable model, so that each student can receive a personalized variation of the question. Personalized questions are an effective way to combat potential academic dishonesty. Scripts are plugged into the assessment pipeline after the dynamic analysis and can override the default grade by examining some other aspect of the program. We also offer our thoughts on upcoming plans to include generic static analysis as we move toward a unified assessment pipeline.
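The two enhancements described above – seeded question templates and post-assessment grading scripts – can be illustrated with a minimal sketch. This is not Edgar's actual API; the function names, the question text, and the `eval(` check are all hypothetical, chosen only to show the shape of the idea: a deterministic per-student variant generator, and a hook that runs after dynamic analysis and may override the default grade.

```python
import random


def generate_variant(seed):
    """Produce a personalized variation of a templated question.

    The same seed (e.g. derived from a student id) always yields the
    same variant, so grading stays reproducible. All names here are
    illustrative, not Edgar's real template model.
    """
    rng = random.Random(seed)
    n = rng.randint(3, 9)  # the programmable parameter of this question
    return {
        "n": n,
        "text": f"Write a function returning the sum of the first {n} positive integers.",
        "expected": n * (n + 1) // 2,  # ground truth used by the dynamic tests
    }


def post_assessment_script(dynamic_score, source_code):
    """Hypothetical hook plugged in after dynamic analysis.

    It may override the default grade by inspecting another aspect of
    the program; here, a crude textual check for a forbidden construct.
    """
    if "eval(" in source_code:  # illustrative policy: disallow eval
        return 0.0              # override the dynamically computed grade
    return dynamic_score        # otherwise keep the default grade
```

A usage sketch: the assessment pipeline would first grade a submission against `generate_variant(student_seed)["expected"]` via its test harness, then pass that score and the raw source through `post_assessment_script` before recording the final grade.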