Supporting students' generation of feedback in large-scale online course with artificial intelligence-enabled evaluation

IF 2.6 · JCR Q1, Education & Educational Research (CAS Tier 2, Education)
Alwyn Vwen Yen LEE
Journal: Studies in Educational Evaluation
DOI: 10.1016/j.stueduc.2023.101250
Published: 2023-06-01 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0191491X23000160
Citations: 4

Abstract

Supporting students’ generation of feedback in large-scale online course with artificial intelligence-enabled evaluation

Educators in large-scale online courses tend to lack the necessary resources to generate and provide adequate feedback for all students, especially when students' learning outcomes are evaluated through student writing. As a result, students welcome peer feedback and sometimes generate self-feedback to widen their perspectives and obtain feedback, but often lack the support to do so. This study, as part of a larger project, sought to address this prevalent problem in large-scale courses by allowing students to write essays as an expression of their opinions and responses to others, and to conduct peer and self-evaluation, using a provided rubric and Artificial Intelligence (AI)-enabled evaluation to aid the giving and receiving of feedback. A total of 605 undergraduate students were part of a large-scale online course and contributed over 2500 short essays during a semester. The research design uses a mixed-methods approach, consisting of qualitative measures used during essay coding and quantitative methods from the application of machine learning algorithms. With limited instructors and resources, students first use an instructor-developed rubric to conduct peer and self-assessment, while instructors qualitatively code a subset of essays that are used as inputs for training a machine learning model, which is subsequently used to provide automated scores and an accuracy rate for the remaining essays. With AI-enabled evaluation, the provision of feedback can become a sustainable process, with students receiving and using meaningful feedback for their work, entailing shared responsibility between teachers and students, and becoming more effective.
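The pipeline the abstract describes — instructors qualitatively code a subset of essays, a machine learning model is trained on that coded subset, and the model then produces automated scores plus an accuracy rate for the remaining essays — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's actual method: the classifier (TF-IDF features with logistic regression), the 0–2 rubric scale, and the toy essay data are all assumptions introduced here.

```python
# Hypothetical sketch of the abstract's workflow: train on an
# instructor-coded subset, then auto-score the remaining essays.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# A small instructor-coded subset: each essay carries a rubric score
# (0-2 is an assumed scale for illustration).
coded_essays = [
    ("The argument is supported with clear evidence and examples.", 2),
    ("Some reasoning is given but the evidence is thin.", 1),
    ("The essay restates the prompt without any argument.", 0),
    ("Strong evidence, counterarguments addressed, clear structure.", 2),
    ("A partially reasoned response with limited support.", 1),
    ("No discernible claim or supporting evidence.", 0),
]
texts, scores = zip(*coded_essays)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy on the coded subset stands in for the
# "accuracy rate" reported alongside the automated scores.
accuracy = cross_val_score(model, texts, scores, cv=2).mean()

# Fit on all coded essays, then score the uncoded remainder.
model.fit(texts, scores)
remaining = ["A clear thesis backed by two concrete examples."]
predicted = model.predict(remaining)
print(predicted, round(accuracy, 2))
```

In practice, the labeled subset would be far larger than six essays, and the reported accuracy would come from held-out instructor-coded essays rather than cross-validation on such a small sample.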

Source journal metrics
CiteScore: 6.90
Self-citation rate: 6.50%
Articles per year: 90
Review time: 62 days
Journal description: Studies in Educational Evaluation publishes original reports of evaluation studies. Four types of articles are published by the journal: (a) empirical evaluation studies representing evaluation practice in educational systems around the world; (b) theoretical reflections and empirical studies related to issues involved in the evaluation of educational programs, educational institutions, educational personnel, and student assessment; (c) articles summarizing the state of the art concerning specific topics in evaluation in general or in a particular country or group of countries; (d) book reviews and brief abstracts of evaluation studies.