Auto-scoring of Student Speech: Proprietary vs. Open-source Solutions
Paul Daniels
DOI: 10.55593/ej.26103int
Journal: 对外汉语教学与研究, vol. 142, no. 1
Published: 2022-11-01 (Journal Article)
Citations: 0
Abstract
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, Speech Assessment for Moodle (SAM), is an open-source solution developed by the author that uses Google’s speech recognition engine to transcribe speech into text, which is then automatically scored using a phoneme-based algorithm. SAM is designed as a custom quiz type for Moodle, a widely adopted open-source course management system. The second auto-scoring system, EnglishCentral, is a popular proprietary language learning solution that uses a trained intelligibility model to automatically score speech. The results of this study indicated a positive correlation between the speaking scores generated by the two systems: students who scored higher on the SAM speaking tasks also tended to score higher on the EnglishCentral speaking tasks, and vice versa. In addition to comparing the scores generated by these two systems against each other, the study compared students’ computer-scored speaking scores to human-generated scores from small-group face-to-face speaking tasks. The results indicated that students who received higher scores on the online computer-graded speaking tasks also tended to score higher on the human-graded small-group speaking tasks, and vice versa.
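The abstract describes SAM's pipeline as two stages: Google's speech recognizer transcribes the student's utterance to text, and a phoneme-based algorithm scores the transcript against the expected response. The paper's abstract does not specify SAM's actual algorithm, so the sketch below is only one plausible illustration of phoneme-based scoring: both the target sentence and the ASR transcript are mapped to phoneme sequences (here via a tiny hypothetical lookup table; a real system would use a full pronunciation dictionary such as CMUdict), and the score is the phoneme-level similarity between the two sequences.

```python
from difflib import SequenceMatcher

# Hypothetical mini pronunciation dictionary for illustration only.
# SAM's real phoneme inventory and scoring weights are not given in the abstract.
PHONEMES = {
    "the": ["DH", "AH"],
    "cat": ["K", "AE", "T"],
    "hat": ["HH", "AE", "T"],
    "sat": ["S", "AE", "T"],
    "on":  ["AA", "N"],
    "mat": ["M", "AE", "T"],
}

def to_phonemes(text):
    """Map each word to its phoneme sequence (unknown words are skipped)."""
    seq = []
    for word in text.lower().split():
        seq.extend(PHONEMES.get(word, []))
    return seq

def phoneme_score(target, transcript):
    """Return a 0-100 score: phoneme-level similarity between the
    expected utterance and the ASR transcript of the student's speech."""
    expected = to_phonemes(target)
    spoken = to_phonemes(transcript)
    if not expected:
        return 0.0
    # SequenceMatcher.ratio() = 2 * matched / (len(a) + len(b)),
    # so insertions, deletions, and substitutions all lower the score.
    ratio = SequenceMatcher(None, expected, spoken).ratio()
    return round(100 * ratio, 1)

print(phoneme_score("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
print(phoneme_score("the cat sat on the mat", "the hat sat on the mat"))  # 93.3
```

A phoneme-level comparison like this is more forgiving than exact word matching: mispronouncing one sound ("cat" recognized as "hat") costs only the differing phonemes rather than the whole word, which is in the spirit of scoring intelligibility rather than verbatim accuracy.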