Sebastian Serth, Daniel Köhler, Leonard Marschke, Felix Auringer, Konrad Hanff, Jan-Eric Hellenberg, Tobias Kantusch, Maximilian Paß, C. Meinel
{"title":"在mooc环境下提高自动评分执行环境的可扩展性和安全性","authors":"Sebastian Serth, Daniel Köhler, Leonard Marschke, Felix Auringer, Konrad Hanff, Jan-Eric Hellenberg, Tobias Kantusch, Maximilian Paß, C. Meinel","doi":"10.18420/abp2021-1","DOIUrl":null,"url":null,"abstract":": Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback about the correctness of their code. Many approaches with so-called auto-graders exist to grade students’ submissions and provide feedback for them automatically. University classes with hundreds of students or Massive Open Online Courses (MOOCs) with thousands of learners often use these systems. Assessing the submissions usually includes executing the students’ source code and thus implies requirements on the scalability and security of the systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools regarding their usability in a scalable environment required for MOOCs. According to our evaluation, Nomad, in conjunction with Docker, fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs.","PeriodicalId":170086,"journal":{"name":"Workshop Automatische Bewertung von Programmieraufgaben","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs\",\"authors\":\"Sebastian Serth, Daniel Köhler, Leonard Marschke, Felix Auringer, Konrad Hanff, Jan-Eric Hellenberg, Tobias Kantusch, Maximilian Paß, C. 
Meinel\",\"doi\":\"10.18420/abp2021-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\": Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback about the correctness of their code. Many approaches with so-called auto-graders exist to grade students’ submissions and provide feedback for them automatically. University classes with hundreds of students or Massive Open Online Courses (MOOCs) with thousands of learners often use these systems. Assessing the submissions usually includes executing the students’ source code and thus implies requirements on the scalability and security of the systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools regarding their usability in a scalable environment required for MOOCs. According to our evaluation, Nomad, in conjunction with Docker, fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs.\",\"PeriodicalId\":170086,\"journal\":{\"name\":\"Workshop Automatische Bewertung von Programmieraufgaben\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Workshop Automatische Bewertung von Programmieraufgaben\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.18420/abp2021-1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop Automatische Bewertung von 
Programmieraufgaben","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18420/abp2021-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improving the Scalability and Security of Execution Environments for Auto-Graders in the Context of MOOCs
Abstract: Learning a programming language requires learners to write code themselves, execute their programs interactively, and receive feedback about the correctness of their code. Many approaches with so-called auto-graders exist to grade students' submissions and provide feedback to them automatically. University classes with hundreds of students and Massive Open Online Courses (MOOCs) with thousands of learners often use these systems. Assessing the submissions usually includes executing the students' source code and thus imposes requirements on the scalability and security of the systems. In this paper, we evaluate different execution environments and orchestration solutions for auto-graders. We compare the most promising open-source tools regarding their usability in the scalable environment required for MOOCs. According to our evaluation, Nomad, in conjunction with Docker, fulfills most requirements. We derive implications for the productive use of Nomad for an auto-grader in MOOCs.
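The combination the abstract recommends, Nomad orchestrating Docker containers, can be pictured as a batch job that runs each submission in an isolated, resource-limited container. The following is a minimal sketch only; the image name, file path, and resource values are illustrative assumptions, not taken from the paper:

```hcl
# Hypothetical Nomad batch job: execute one student submission
# in a Docker container with no network access and tight limits.
job "grade-submission" {
  datacenters = ["dc1"]
  type        = "batch"

  group "runner" {
    task "execute" {
      driver = "docker"

      config {
        image        = "python:3.10-slim"       # assumed runtime image
        command      = "python"
        args         = ["/submission/main.py"]  # assumed mount path
        network_mode = "none"                   # isolate untrusted code
      }

      resources {
        cpu    = 100 # MHz
        memory = 64  # MB
      }
    }
  }
}
```

A scheduler like Nomad then handles placement and scaling across worker nodes, which is the property that matters at MOOC scale, while Docker provides the per-submission isolation boundary.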