Participatory Co-Design and Evaluation of a Novel Approach to Generative AI-Integrated Coursework Assessment in Higher Education.

IF 2.5 · CAS Tier 3 (Psychology) · JCR Q2, PSYCHOLOGY, MULTIDISCIPLINARY
Alex F Martin, Svitlana Tubaltseva, Anja Harrison, G James Rubin
{"title":"Participatory Co-Design and Evaluation of a Novel Approach to Generative AI-Integrated Coursework Assessment in Higher Education.","authors":"Alex F Martin, Svitlana Tubaltseva, Anja Harrison, G James Rubin","doi":"10.3390/bs15060808","DOIUrl":null,"url":null,"abstract":"<p><p>Generative AI tools offer opportunities for enhancing learning and assessment, but raise concerns about equity, academic integrity, and the ability to critically engage with AI-generated content. This study explores these issues within a psychology-oriented postgraduate programme at a UK university. We co-designed and evaluated a novel AI-integrated assessment aimed at improving critical AI literacy among students and teaching staff (pre-registration: osf.io/jqpce). Students were randomly allocated to two groups: the 'compliant' group used AI tools to assist with writing a blog and critically reflected on the outputs, while the 'unrestricted' group had free rein to use AI to produce the assessment. Teaching staff, blinded to group allocation, marked the blogs using an adapted rubric. Focus groups, interviews, and workshops were conducted to assess the feasibility, acceptability, and perceived integrity of the approach. Findings suggest that, when carefully scaffolded, integrating AI into assessments can promote both technical fluency and ethical reflection. A key contribution of this study is its participatory co-design and evaluation method, which was effective and transferable, and is presented as a practical toolkit for educators. This approach supports growing calls for authentic assessment that mirrors real-world tasks, while highlighting the ongoing need to balance academic integrity with skill development.</p>","PeriodicalId":8742,"journal":{"name":"Behavioral Sciences","volume":"15 6","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12189063/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavioral Sciences","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3390/bs15060808","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Generative AI tools offer opportunities for enhancing learning and assessment, but raise concerns about equity, academic integrity, and the ability to critically engage with AI-generated content. This study explores these issues within a psychology-oriented postgraduate programme at a UK university. We co-designed and evaluated a novel AI-integrated assessment aimed at improving critical AI literacy among students and teaching staff (pre-registration: osf.io/jqpce). Students were randomly allocated to two groups: the 'compliant' group used AI tools to assist with writing a blog and critically reflected on the outputs, while the 'unrestricted' group had free rein to use AI to produce the assessment. Teaching staff, blinded to group allocation, marked the blogs using an adapted rubric. Focus groups, interviews, and workshops were conducted to assess the feasibility, acceptability, and perceived integrity of the approach. Findings suggest that, when carefully scaffolded, integrating AI into assessments can promote both technical fluency and ethical reflection. A key contribution of this study is its participatory co-design and evaluation method, which was effective and transferable, and is presented as a practical toolkit for educators. This approach supports growing calls for authentic assessment that mirrors real-world tasks, while highlighting the ongoing need to balance academic integrity with skill development.
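To make the allocation and blinding procedure described above concrete, the sketch below shows one way a two-arm random allocation with blinded marking could be implemented. This is a minimal illustrative sketch, not code from the paper: the arm labels follow the abstract, but the function name, the fixed seed, and the use of anonymised marking IDs are all assumptions; the authors' actual procedure is pre-registered at osf.io/jqpce.

```python
import random

def allocate_and_blind(student_ids, seed=42):
    """Randomly allocate students to the two assessment arms described in
    the abstract ('compliant' vs 'unrestricted') and issue anonymised
    marking IDs so markers stay blind to group allocation.

    Illustrative only; the paper's pre-registered procedure may differ.
    """
    rng = random.Random(seed)          # fixed seed -> reproducible allocation
    ids = list(student_ids)
    rng.shuffle(ids)

    half = len(ids) // 2
    allocation = {sid: "compliant" for sid in ids[:half]}
    allocation.update({sid: "unrestricted" for sid in ids[half:]})

    # Markers see only these codes, never the arm assignment.
    marking_ids = {sid: f"BLOG-{i:03d}" for i, sid in enumerate(sorted(ids), 1)}
    return allocation, marking_ids

if __name__ == "__main__":
    alloc, blind = allocate_and_blind([f"S{n}" for n in range(1, 9)])
    for sid in sorted(alloc):
        print(sid, blind[sid], alloc[sid])
```

In a design like this, the mapping from marking ID back to study arm would be held only by the research team, so rubric-based marking of the blogs can proceed without revealing which arm produced each submission.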

Source Journal

Behavioral Sciences (Social Sciences: Development)
CiteScore: 2.60 · Self-citation rate: 7.70% · Articles per year: 429 · Review time: 11 weeks