Evaluating AI and human authorship quality in academic writing through physics essays

IF 0.6 · JCR Zone 4 (Education) · Q4 EDUCATION, SCIENTIFIC DISCIPLINES
Will Yeadon, Elise Agra, Oto-Obong Inyang, Paul Mackay, Arin Mizouri
DOI: 10.1088/1361-6404/ad669d · European Journal of Physics · Published 2024-09-02 · Journal Article
Citations: 0

Abstract

This study aims to compare the academic writing quality and detectability of authorship between human and AI-generated texts by evaluating n = 300 short-form physics essay submissions, equally divided between student work submitted before the introduction of ChatGPT and essays generated by OpenAI’s GPT-4. In blinded evaluations conducted by five independent markers who were unaware of the origin of the essays, we observed no statistically significant differences in scores between essays authored by humans and those produced by AI (p-value = 0.107, α = 0.05). Additionally, when the markers subsequently attempted to identify the authorship of the essays on a 4-point Likert scale—from ‘Definitely AI’ to ‘Definitely Human’—their performance was only marginally better than random chance. This outcome not only underscores the convergence of AI and human authorship quality but also highlights the difficulty of discerning AI-generated content solely through human judgment. Furthermore, the effectiveness of five commercially available software tools for identifying essay authorship was evaluated. Among these, ZeroGPT was the most accurate, achieving a 98% accuracy rate and a precision score of 1.0 when its classifications were reduced to binary outcomes. This result is a source of potential optimism for maintaining assessment integrity. Finally, we propose ≤50% AI-generated content as the upper limit for classifying a text as human-authored, a boundary that accommodates a future with ubiquitous AI assistance whilst also respecting human authorship.
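The abstract's detector evaluation collapses 4-point Likert judgements to a binary AI/Human call before computing accuracy and a precision of 1.0 for the AI class. A minimal sketch of that reduction, with illustrative label names and toy data rather than the study's actual dataset:

```python
# Hypothetical sketch: collapse a 4-point Likert judgement
# ('Definitely AI' ... 'Definitely Human') to a binary call, then score it.
# Label strings and the toy data below are illustrative assumptions.

def binarize(likert_label: str) -> str:
    """Collapse a 4-point Likert judgement to a binary AI/Human call."""
    return "AI" if likert_label in ("Definitely AI", "Probably AI") else "Human"

def accuracy_and_precision(predictions, truths, positive="AI"):
    """Accuracy over all calls; precision for the positive (AI) class."""
    tp = sum(1 for p, t in zip(predictions, truths) if p == positive and t == positive)
    fp = sum(1 for p, t in zip(predictions, truths) if p == positive and t != positive)
    correct = sum(1 for p, t in zip(predictions, truths) if p == t)
    accuracy = correct / len(truths)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision

# Toy example: four essays, Likert-scale detector output vs ground truth.
likert = ["Definitely AI", "Probably Human", "Probably AI", "Definitely Human"]
truth = ["AI", "Human", "AI", "Human"]
preds = [binarize(x) for x in likert]
print(accuracy_and_precision(preds, truth))  # → (1.0, 1.0)
```

A precision of 1.0, as reported for ZeroGPT, means no human essay was flagged as AI, which is the error mode that matters most for assessment integrity.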
Source journal
European Journal of Physics (Physics: Multidisciplinary)
CiteScore: 1.70
Self-citation rate: 28.60%
Annual articles: 128
Review time: 3–8 weeks
Journal description: European Journal of Physics is a journal of the European Physical Society, and its primary mission is to assist in maintaining and improving the standard of taught physics in universities and other institutes of higher education. Authors submitting articles must indicate the usefulness of their material to physics education and make clear the level of readership (undergraduate or graduate) for which the article is intended. Submissions that omit this information or which, in the publisher's opinion, do not contribute to the above mission will not be considered for publication. To this end, we welcome articles that provide original insights and aim to enhance learning in one or more areas of physics. They should normally include at least one of the following:
- Explanations of how contemporary research can inform the understanding of physics at university level: for example, a survey of a research field at a level accessible to students, explaining how it illustrates some general principles.
- Original insights into the derivation of results. These should be of some general interest, consisting of more than corrections to textbooks.
- Descriptions of novel laboratory exercises illustrating new techniques of general interest. Those based on relatively inexpensive equipment are especially welcome.
- Articles of a scholarly or reflective nature that aim to be of interest to, and at a level appropriate for, physics students or recent graduates.
- Descriptions of successful and original student projects, experimental, theoretical or computational.
- Discussions of the history, philosophy and epistemology of physics, at a level accessible to physics students and teachers.
- Reports of new developments in physics curricula and the techniques for teaching physics.
- Physics Education Research reports: articles that provide original experimental and/or theoretical research contributions that directly relate to the teaching and learning of university-level physics.