{"title":"通过人工智能评估,支持学生在大规模在线课程中生成反馈","authors":"Alwyn Vwen Yen LEE","doi":"10.1016/j.stueduc.2023.101250","DOIUrl":null,"url":null,"abstract":"<div><p>Educators in large-scale online courses tend to lack the necessary resources to generate and provide adequate feedback for all students, especially when students’ learning outcomes are evaluated through student writing. As a result, students welcome peer feedback and sometimes generate self-feedback to widen their perspectives and obtain feedback, but often lack the support to do so. This study, as part of a larger project, sought to address this prevalent problem in large-scale courses by allowing students to write essays as an expression of their opinions and response to others, conduct peer and self-evaluation, using provided rubric and Artificial Intelligence (AI)-enabled evaluation to aid the giving and receiving of feedback. A total of 605 undergraduate students were part of a large-scale online course and contributed over 2500 short essays during a semester. The research design uses a mixed-methods approach, consisting qualitative measures used during essay coding, and quantitative methods from the application of machine learning algorithms. With limited instructors and resources, students first use instructor-developed rubric to conduct peer and self-assessment, while instructors qualitatively code a subset of essays that are used as inputs for training a machine learning model, which is subsequently used to provide automated scores and an accuracy rate for the remaining essays. With AI-enabled evaluation, the provision of feedback can become a sustainable process with students receiving and using meaningful feedback for their work, entailing shared responsibility from teachers and students, and becoming more effective.</p></div>","PeriodicalId":47539,"journal":{"name":"Studies in Educational Evaluation","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Supporting students’ generation of feedback in large-scale online course with artificial intelligence-enabled evaluation\",\"authors\":\"Alwyn Vwen Yen LEE\",\"doi\":\"10.1016/j.stueduc.2023.101250\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Educators in large-scale online courses tend to lack the necessary resources to generate and provide adequate feedback for all students, especially when students’ learning outcomes are evaluated through student writing. As a result, students welcome peer feedback and sometimes generate self-feedback to widen their perspectives and obtain feedback, but often lack the support to do so. This study, as part of a larger project, sought to address this prevalent problem in large-scale courses by allowing students to write essays as an expression of their opinions and response to others, conduct peer and self-evaluation, using provided rubric and Artificial Intelligence (AI)-enabled evaluation to aid the giving and receiving of feedback. A total of 605 undergraduate students were part of a large-scale online course and contributed over 2500 short essays during a semester. The research design uses a mixed-methods approach, consisting qualitative measures used during essay coding, and quantitative methods from the application of machine learning algorithms. 
With limited instructors and resources, students first use instructor-developed rubric to conduct peer and self-assessment, while instructors qualitatively code a subset of essays that are used as inputs for training a machine learning model, which is subsequently used to provide automated scores and an accuracy rate for the remaining essays. With AI-enabled evaluation, the provision of feedback can become a sustainable process with students receiving and using meaningful feedback for their work, entailing shared responsibility from teachers and students, and becoming more effective.</p></div>\",\"PeriodicalId\":47539,\"journal\":{\"name\":\"Studies in Educational Evaluation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Studies in Educational Evaluation\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0191491X23000160\",\"RegionNum\":2,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies in Educational Evaluation","FirstCategoryId":"95","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0191491X23000160","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Supporting students’ generation of feedback in large-scale online course with artificial intelligence-enabled evaluation
Educators in large-scale online courses often lack the resources to generate and provide adequate feedback for all students, especially when students’ learning outcomes are evaluated through writing. As a result, students welcome peer feedback and sometimes generate self-feedback to widen their perspectives, but often lack the support to do so. This study, part of a larger project, sought to address this prevalent problem in large-scale courses by having students write essays expressing their opinions and responses to others, and conduct peer and self-evaluation using a provided rubric and Artificial Intelligence (AI)-enabled evaluation to aid the giving and receiving of feedback. A total of 605 undergraduate students took part in a large-scale online course and contributed over 2500 short essays during a semester. The research design uses a mixed-methods approach, consisting of qualitative measures used during essay coding and quantitative methods from the application of machine learning algorithms. With limited instructors and resources, students first use an instructor-developed rubric to conduct peer and self-assessment, while instructors qualitatively code a subset of essays that serve as inputs for training a machine learning model, which is subsequently used to provide automated scores and an accuracy rate for the remaining essays. With AI-enabled evaluation, the provision of feedback can become a sustainable and more effective process in which students receive and use meaningful feedback on their work, with responsibility shared between teachers and students.
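The abstract does not specify which machine learning algorithm or features were used. The sketch below is a minimal illustration of the described workflow, assuming a TF-IDF plus logistic-regression text classifier and invented toy data: the model is trained on a hypothetical instructor-coded subset, an accuracy rate is estimated on a held-out portion of that subset, and the remaining uncoded essays receive automated scores.

```python
# Minimal sketch of the workflow described in the abstract, NOT the
# paper's actual method: instructors hand-code a subset of essays
# against a rubric, a model is trained on that subset, and the model
# then scores the remaining essays and reports an accuracy estimate.
# The classifier (TF-IDF + logistic regression), label scheme, and
# essay texts below are all illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical instructor-coded essays with binary rubric levels
# (1 = below expectations, 2 = meets expectations).
coded_essays = [
    "I agree with the author because the data supports the claim.",
    "The essay is about feedback.",
    "Peer feedback widened my perspective on the policy debate.",
    "It was good.",
    "Self-evaluation against the rubric showed gaps in my argument.",
    "I liked it a lot.",
    "The evidence cited contradicts the author's main conclusion.",
    "No comment.",
]
rubric_scores = [2, 1, 2, 1, 2, 1, 2, 1]

# Essays not coded by instructors, awaiting automated scoring.
uncoded_essays = ["In my opinion, the policy fails because costs outweigh benefits."]

# Hold out part of the coded subset so an accuracy rate can be
# reported alongside the automated scores, as the study describes.
train_x, test_x, train_y, test_y = train_test_split(
    coded_essays, rubric_scores, test_size=0.25,
    stratify=rubric_scores, random_state=0)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_x, train_y)

accuracy = model.score(test_x, test_y)       # accuracy on held-out coded essays
auto_scores = model.predict(uncoded_essays)  # automated scores for the rest
print(f"estimated accuracy: {accuracy:.2f}; automated scores: {auto_scores}")
```

The held-out split mirrors the abstract's pairing of automated scores with a reported accuracy rate; in practice the instructor-coded subset would be far larger than this toy example.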
About the journal:
Studies in Educational Evaluation publishes original reports of evaluation studies. Four types of articles are published by the journal: (a) Empirical evaluation studies representing evaluation practice in educational systems around the world; (b) Theoretical reflections and empirical studies related to issues involved in the evaluation of educational programs, educational institutions, educational personnel and student assessment; (c) Articles summarizing the state-of-the-art concerning specific topics in evaluation in general or in a particular country or group of countries; (d) Book reviews and brief abstracts of evaluation studies.