Chu-Hsuan Kuo, Malayka Mottarella, Theodros M. Haile, C. Prat
{"title":"Predicting Programming Success: How Intermittent Knowledge Assessments, Individual Psychometrics, and Resting-State EEG Predict Python Programming and Debugging Skills","authors":"Chu-Hsuan Kuo, Malayka Mottarella, Theodros M. Haile, C. Prat","doi":"10.23919/softcom55329.2022.9911411","DOIUrl":null,"url":null,"abstract":"Computer programming requires fluid application of acquired chunks of declarative knowledge to accomplish a defined goal. This raises the question-how strongly do declarative knowledge assessments collected during training predict individual learners' future programming capabilities, and how might neurocognitive measures expand these predictions? The current study explored this by using stepwise regression to determine whether neurocognitive characteristics of individual learners and post-module declarative assessments collected in the Codecademy learning platform explain unique or overlapping variance when predicting real-world coding outcomes. Based on data from 80 participants over 16 one-hour Python training sessions, we found that post-module declarative knowledge assessments explained the most variance in each of our seven learning outcomes: multiple-choice test accuracy, programming accuracy, and debugging accuracy (collected at two time points) plus learning rate. However, neurocognitive measures also contributed unique variance, with total variance explained varying across outcomes. 
Our preliminary results suggest that declarative knowledge and neurocognitive indices combine in different proportions to predict different types of programming outcomes.","PeriodicalId":261625,"journal":{"name":"2022 International Conference on Software, Telecommunications and Computer Networks (SoftCOM)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Software, Telecommunications and Computer Networks (SoftCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/softcom55329.2022.9911411","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Computer programming requires fluid application of acquired chunks of declarative knowledge to accomplish a defined goal. This raises the question: how strongly do declarative knowledge assessments collected during training predict individual learners' future programming capabilities, and how might neurocognitive measures expand these predictions? The current study explored this by using stepwise regression to determine whether neurocognitive characteristics of individual learners and post-module declarative assessments collected on the Codecademy learning platform explain unique or overlapping variance when predicting real-world coding outcomes. Based on data from 80 participants over 16 one-hour Python training sessions, we found that post-module declarative knowledge assessments explained the most variance in each of our seven learning outcomes: multiple-choice test accuracy, programming accuracy, and debugging accuracy (each collected at two time points), plus learning rate. However, neurocognitive measures also contributed unique variance, with the total variance explained varying across outcomes. Our preliminary results suggest that declarative knowledge and neurocognitive indices combine in different proportions to predict different types of programming outcomes.
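The abstract's core method is forward stepwise regression: predictors enter the model one at a time, each chosen to maximize the improvement in fit, which reveals whether a new predictor adds unique variance beyond those already selected. The sketch below illustrates that procedure on synthetic data. It is not the study's analysis; the predictor names (a post-module "declarative" score and an "eeg_alpha" index) and the adjusted-R² entry criterion are illustrative assumptions.

```python
# Minimal forward stepwise regression sketch (not the study's code).
# At each step, the predictor that most improves adjusted R^2 is added;
# selection stops when no remaining predictor improves the fit.
import numpy as np

def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R^2 penalizes model size, so redundant predictors don't enter."""
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns fitted values."""
    Xb = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return Xb @ beta

def forward_stepwise(X, y, names):
    selected, remaining = [], list(range(X.shape[1]))
    best_adj = -np.inf
    while remaining:
        # Score every candidate model that adds one more predictor.
        scores = [(adjusted_r2(y, fit_ols(X[:, selected + [j]], y),
                               len(selected) + 1), j)
                  for j in remaining]
        score, j = max(scores)
        if score <= best_adj:
            break  # no candidate improves adjusted R^2; stop
        best_adj = score
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected], best_adj

# Toy data mimicking the design: 80 learners, outcome driven mostly by a
# declarative-knowledge score, with a smaller neurocognitive contribution.
rng = np.random.default_rng(0)
n = 80
declarative = rng.normal(size=n)   # hypothetical post-module assessment score
eeg_alpha = rng.normal(size=n)     # hypothetical resting-state EEG index
distractor = rng.normal(size=n)    # unrelated noise predictor
y = 0.8 * declarative + 0.3 * eeg_alpha + 0.1 * rng.normal(size=n)

chosen, adj = forward_stepwise(
    np.column_stack([declarative, eeg_alpha, distractor]),
    y,
    ["declarative", "eeg_alpha", "distractor"],
)
print(chosen, round(adj, 3))
```

Because the declarative score has the largest coefficient, it enters first and the EEG index then explains unique residual variance, mirroring the abstract's finding that declarative assessments dominate but neurocognitive measures still add predictive value.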