{"title":"Improving Assessment of Programming Pattern Knowledge through Code Editing and Revision","authors":"Sara Nurollahian, Anna N. Rafferty, E. Wiese","doi":"10.1109/ICSE-SEET58685.2023.00012","DOIUrl":null,"url":null,"abstract":"How well do code-writing tasks measure students’ knowledge of programming patterns and anti-patterns? How can we assess this knowledge more accurately? To explore these questions, we surveyed 328 intermediate CS students and measured their performance on different types of tasks, including writing code, editing someone else’s code, and, if applicable, revising their own alternatively-structured code. Our tasks targeted returning a Boolean expression and using unique code within an if and else.We found that code writing sometimes under-estimated student knowledge. For tasks targeting returning a Boolean expression, over 55% of students who initially wrote with non-expert structure successfully revised to expert structure when prompted - even though the prompt did not include guidance on how to improve their code. Further, over 25% of students who initially wrote non-expert code could properly edit someone else’s non-expert code to expert structure. These results show that non-expert code is not a reliable indicator of deep misconceptions about the structure of expert code. Finally, although code writing is correlated with code editing, the relationship is weak: a model with code writing as the sole predictor of code editing explains less than 15% of the variance. Model accuracy improves when we include additional predictors that reflect other facets of knowledge, namely the identification of expert code and selection of expert code as more readable than non-expert code. Together, these results indicate that a combination of code writing, revising, editing, and identification tasks can provide a more accurate assessment of student knowledge of programming patterns than code writing alone.","PeriodicalId":68155,"journal":{"name":"软件产业与工程","volume":"34 1","pages":"58-69"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"软件产业与工程","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.1109/ICSE-SEET58685.2023.00012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
How well do code-writing tasks measure students’ knowledge of programming patterns and anti-patterns? How can we assess this knowledge more accurately? To explore these questions, we surveyed 328 intermediate CS students and measured their performance on different types of tasks, including writing code, editing someone else’s code, and, if applicable, revising their own alternatively-structured code. Our tasks targeted returning a Boolean expression and using unique code within an if and else. We found that code writing sometimes underestimated student knowledge. For tasks targeting returning a Boolean expression, over 55% of students who initially wrote with non-expert structure successfully revised to expert structure when prompted, even though the prompt did not include guidance on how to improve their code. Further, over 25% of students who initially wrote non-expert code could properly edit someone else’s non-expert code to expert structure. These results show that non-expert code is not a reliable indicator of deep misconceptions about the structure of expert code. Finally, although code writing is correlated with code editing, the relationship is weak: a model with code writing as the sole predictor of code editing explains less than 15% of the variance. Model accuracy improves when we include additional predictors that reflect other facets of knowledge, namely identification of expert code and selection of expert code as more readable than non-expert code. Together, these results indicate that a combination of code writing, revising, editing, and identification tasks can provide a more accurate assessment of student knowledge of programming patterns than code writing alone.
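To make the two target patterns concrete, here is a minimal sketch in Java (an assumption: the abstract does not state which language the tasks used, and these method names and snippets are hypothetical illustrations, not the study’s actual stimuli). The first pair contrasts non-expert and expert structure for returning a Boolean expression; the second pair contrasts duplicated code across branches with unique code within an if and else.

```java
// Illustrative (hypothetical) examples of the two patterns studied.
public class PatternExamples {

    // Non-expert structure: an if/else that returns Boolean literals.
    static boolean canVote(int age) {
        if (age >= 18) {
            return true;
        } else {
            return false;
        }
    }

    // Expert structure: return the Boolean expression directly.
    static boolean canVoteExpert(int age) {
        return age >= 18;
    }

    // Non-expert structure: shared code duplicated in both branches.
    static void greet(boolean isMember) {
        if (isMember) {
            System.out.println("Welcome back!");
            System.out.println("Enjoy your visit.");
        } else {
            System.out.println("Welcome!");
            System.out.println("Enjoy your visit.");
        }
    }

    // Expert structure: only branch-specific ("unique") code stays
    // inside the if/else; the shared line is hoisted out.
    static void greetExpert(boolean isMember) {
        if (isMember) {
            System.out.println("Welcome back!");
        } else {
            System.out.println("Welcome!");
        }
        System.out.println("Enjoy your visit.");
    }
}
```

In the paper’s terms, revising or editing code like the first version of each pair into the second constitutes moving from non-expert to expert structure.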