{"title":"学生学习编程的自动提示的现状和下一步","authors":"Daniel Toll, Anna Wingkvist, Morgan Ericsson","doi":"10.1109/FIE44824.2020.9274053","DOIUrl":null,"url":null,"abstract":"The core of this work-in-progress is that the best way to learn how to code is to practice by solving problems. However, if students have trouble with this, they can get frustrated and give up. Automated Tutoring Systems (ATS) aim to provide hints to help them solve the problems they encounter. Many of the existing systems offer general hints, e.g., \"check the conditional statement\" or help the student interpret the compiler or test-case errors. While this can be useful, we think that an ATS should provide interactive and specialized feedback for each program. We snowballed through publications on promising ATS and found that there are several such systems (in 27 publications), but we could also identify many challenges and that our requirements were not met by any existing system. For example, few of them work on general-purpose programming languages, e.g., Java, or scale to realistic problems consisting of multiple methods and classes. From the search, we find ATS based on Automated Program Repair (APR) shows the most promise. However, while program repair has the potential to generate specialized hints to help guide the student to a working state, studies that looked into these have identified further challenges. For example, many APR ATS tools only show the repaired program to the students, who then have to compare and modify their program accordingly. Another issue is that APR generally only modifies a few lines, so if the student solution is far from correct, the repair might fail. This can be solved by partial repair, i.e., the program is repaired so at least one additional test-case passes. While this increases the repair rate, it might make hints more difficult or point the students in a non-obvious or even \"wrong\" direction. The APR can take several minutes, which also makes it unsuitable for interactive ATS. We take a design science approach to define an ATS based on APR that attempts to address the identified challenges. We give a review of the state-of-the-art for the required components, e.g., APR, how to generate hints from differences between two programs. From this, we suggest a three-step roadmap; 1. identify suitable APR-tools, 2. construct an oversized test-suite, and 3. adopt APR to the tutoring context.","PeriodicalId":149828,"journal":{"name":"2020 IEEE Frontiers in Education Conference (FIE)","volume":"26 8","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Current State and Next Steps on Automated Hints for Students Learning to Code\",\"authors\":\"Daniel Toll, Anna Wingkvist, Morgan Ericsson\",\"doi\":\"10.1109/FIE44824.2020.9274053\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The core of this work-in-progress is that the best way to learn how to code is to practice by solving problems. However, if students have trouble with this, they can get frustrated and give up. Automated Tutoring Systems (ATS) aim to provide hints to help them solve the problems they encounter. Many of the existing systems offer general hints, e.g., \\\"check the conditional statement\\\" or help the student interpret the compiler or test-case errors. While this can be useful, we think that an ATS should provide interactive and specialized feedback for each program. 
We snowballed through publications on promising ATS and found that there are several such systems (in 27 publications), but we could also identify many challenges and that our requirements were not met by any existing system. For example, few of them work on general-purpose programming languages, e.g., Java, or scale to realistic problems consisting of multiple methods and classes. From the search, we find ATS based on Automated Program Repair (APR) shows the most promise. However, while program repair has the potential to generate specialized hints to help guide the student to a working state, studies that looked into these have identified further challenges. For example, many APR ATS tools only show the repaired program to the students, who then have to compare and modify their program accordingly. Another issue is that APR generally only modifies a few lines, so if the student solution is far from correct, the repair might fail. This can be solved by partial repair, i.e., the program is repaired so at least one additional test-case passes. While this increases the repair rate, it might make hints more difficult or point the students in a non-obvious or even \\\"wrong\\\" direction. The APR can take several minutes, which also makes it unsuitable for interactive ATS. We take a design science approach to define an ATS based on APR that attempts to address the identified challenges. We give a review of the state-of-the-art for the required components, e.g., APR, how to generate hints from differences between two programs. From this, we suggest a three-step roadmap; 1. identify suitable APR-tools, 2. construct an oversized test-suite, and 3. adopt APR to the tutoring context.\",\"PeriodicalId\":149828,\"journal\":{\"name\":\"2020 IEEE Frontiers in Education Conference (FIE)\",\"volume\":\"26 8\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Frontiers in Education Conference (FIE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FIE44824.2020.9274053\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Frontiers in Education Conference (FIE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FIE44824.2020.9274053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Current State and Next Steps on Automated Hints for Students Learning to Code
This work-in-progress starts from the premise that the best way to learn how to code is to practice by solving problems. However, students who get stuck can become frustrated and give up. Automated Tutoring Systems (ATS) aim to provide hints that help students solve the problems they encounter. Many existing systems offer general hints, e.g., "check the conditional statement", or help the student interpret compiler or test-case errors. While this can be useful, we argue that an ATS should provide interactive and specialized feedback for each program.

We snowballed through publications on promising ATS and found several such systems (in 27 publications), but we also identified many challenges, and no existing system met our requirements. For example, few of them work on general-purpose programming languages, e.g., Java, or scale to realistic problems consisting of multiple methods and classes. From the search, we find that ATS based on Automated Program Repair (APR) show the most promise. However, while program repair has the potential to generate specialized hints that guide the student toward a working state, studies of these systems have identified further challenges. For example, many APR-based ATS tools only show the repaired program to the students, who then have to compare it with their own program and modify it accordingly. Another issue is that APR generally modifies only a few lines, so if the student solution is far from correct, the repair might fail. This can be addressed by partial repair, i.e., repairing the program so that at least one additional test case passes. While this increases the repair rate, it might make hints harder to interpret or point the students in a non-obvious or even "wrong" direction. APR can also take several minutes, which makes it unsuitable for interactive ATS.

We take a design science approach to define an ATS based on APR that attempts to address the identified challenges. We review the state of the art for the required components, e.g., APR and how to generate hints from the differences between two programs. From this, we suggest a three-step roadmap: 1. identify suitable APR tools, 2. construct an oversized test suite, and 3. adapt APR to the tutoring context.
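To make the partial-repair and diff-based hinting ideas concrete, the following is a minimal illustrative sketch in Java (the language the abstract names), not code from the paper. All types and helpers here (TestSuite, Repairer, Edit, PartialRepairHinter) are hypothetical placeholders for real APR components; an actual system would plug in a concrete repair tool and test runner.

```java
import java.util.List;

/**
 * Illustrative sketch of the hint-generation loop described in the abstract:
 * attempt a partial repair of the student's program, i.e., search for a small
 * edit that makes at least one additional test case pass, then derive a hint
 * from the difference between the original and the repaired program instead
 * of revealing the full repair. All types below are hypothetical.
 */
public class PartialRepairHinter {

    /** A candidate edit produced by the repair search, e.g., a changed line. */
    record Edit(int line, String original, String replacement) {}

    interface TestSuite {
        /** Number of test cases the given program passes. */
        int passCount(String program);
    }

    interface Repairer {
        /** Candidate single-edit variants of the program (mutation-based APR). */
        List<Edit> candidateEdits(String program);

        /** Apply an edit, returning the modified program text. */
        String apply(String program, Edit edit);
    }

    private final TestSuite tests;
    private final Repairer repairer;

    PartialRepairHinter(TestSuite tests, Repairer repairer) {
        this.tests = tests;
        this.repairer = repairer;
    }

    /**
     * Partial repair: accept the first edit that increases the pass count,
     * rather than insisting on a fully correct program. Returns a hint that
     * points at the location and nature of the change, or null if no
     * improving edit is found (the repair "fails").
     */
    String hintFor(String studentProgram) {
        int baseline = tests.passCount(studentProgram);
        for (Edit edit : repairer.candidateEdits(studentProgram)) {
            String repaired = repairer.apply(studentProgram, edit);
            if (tests.passCount(repaired) > baseline) {
                // Hint from the diff: name the location, not the solution,
                // so the student still has to reason about the fix.
                return "Look at line " + edit.line()
                        + ": the expression \"" + edit.original()
                        + "\" does not handle all test cases.";
            }
        }
        return null; // student solution too far from correct for a small repair
    }
}
```

The design choice in the sketch mirrors the trade-off the abstract raises: accepting any pass-count improvement raises the repair rate, but because the improving edit need not lie on the path to a correct solution, the resulting hint may point the student in a non-obvious direction.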