Constance Wraith, Alasdair Carnegy, Celia Brown, Ana Baptista, Amir H Sam
Medical Education · DOI: 10.1111/medu.15750 · Published 2025-07-02 · Journal Article
Can educators distinguish between medical student and generative AI-authored reflections?
Introduction: Reflection is integral to the modern doctor's practice and, whilst it can take many forms, written reflection is commonly found in medical school curricula. Generative artificial intelligence (GenAI) is increasingly being used, including to complete written assignments in medical curricula. We sought to explore whether educators can distinguish between GenAI- and student-authored reflections and what features they use to do so.
Methods: This was a mixed-methods study. Twenty-eight educators attended a 'think aloud' interview and were presented with a set of four reflections: all authored by students, all by GenAI, or a mixture. They were asked to identify who they thought had written each reflection, speaking aloud as they did so. Sensitivity (the proportion of GenAI-authored reflections correctly identified) and specificity (the proportion of student-authored reflections correctly identified) were then calculated, and the interview transcripts were analysed using thematic analysis.
Results: Educators were unable to reliably distinguish between student- and GenAI-authored reflections. Sensitivity across the four reflections ranged from 0.36 (95% CI: 0.16-0.61) to 0.64 (95% CI: 0.39-0.84). Specificity ranged from 0.64 (95% CI: 0.39-0.84) to 0.86 (95% CI: 0.60-0.96). Thematic analysis revealed three main themes when considering which features of the reflections educators used to make judgements about authorship: features of writing, features of reflection, and educators' preconceptions and experiences.
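Sensitivity, specificity, and their confidence intervals can be computed from simple counts of correct identifications. A minimal sketch using the Wilson score interval (one common choice for binomial proportions; the abstract does not state which CI method the authors used), with illustrative counts rather than the study's raw data:

```python
from math import sqrt

def wilson_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return centre - half, centre + half

# Hypothetical counts for one reflection: of educators shown a GenAI-authored
# reflection, how many correctly identified it (sensitivity); of those shown
# a student-authored one, how many correctly identified it (specificity).
sens_lo, sens_hi = wilson_ci(10, 28)  # illustrative, not the paper's data
spec_lo, spec_hi = wilson_ci(24, 28)
print(f"sensitivity 95% CI: {sens_lo:.2f}-{sens_hi:.2f}")
print(f"specificity 95% CI: {spec_lo:.2f}-{spec_hi:.2f}")
```

The Wilson interval is preferred over the simpler Wald interval at small sample sizes like n = 28, since it cannot stray outside [0, 1] and has better coverage for proportions near 0 or 1.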
Discussion: This study demonstrates the challenges in differentiating between student- and GenAI-authored reflections, as well as highlighting the range of factors that influence this decision. Rather than developing ways to more accurately make this distinction or trying to stop students using GenAI, we suggest it could instead be harnessed to teach students reflective practice skills, and help students for whom written reflection in particular may be challenging.
Journal introduction:
Medical Education seeks to be the pre-eminent journal in the field of education for health care professionals, and publishes material of the highest quality, reflecting worldwide or provocative issues and perspectives.
The journal welcomes high-quality papers on all aspects of health professional education, including:
- undergraduate education
- postgraduate training
- continuing professional development
- interprofessional education