Academics' perceptions of ChatGPT-generated written outputs: A practical application of Turing’s Imitation Game

IF 3.3 | CAS Tier 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH
Joshua A Matthews, Catherine Rita Volpe
{"title":"学术界对 ChatGPT 生成的书面成果的看法:图灵模仿游戏的实际应用","authors":"Joshua A Matthews, Catherine Rita Volpe","doi":"10.14742/ajet.8896","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) technology, such as Chat Generative Pre-trained Transformer (ChatGPT), is evolving quickly and having a significant impact on the higher education sector. Although the impact of ChatGPT on academic integrity processes is a key concern, little is known about whether academics can reliably recognise texts that have been generated by AI. This qualitative study applies Turing’s Imitation Game to investigate 16 education academics’ perceptions of two pairs of texts written by either ChatGPT or a human. Pairs of texts, written in response to the same task, were used as the stimulus for interviews that probed academics’ perceptions of text authorship and the textual features that were important in their decision-making. Results indicated academics were only able to identify AI-generated texts half of the time, highlighting the sophistication of contemporary generative AI technology. Academics perceived the following categories as important for their decision-making: voice, word usage, structure, task achievement and flow. All five categories of decision-making were variously used to rationalise both accurate and inaccurate decisions about text authorship. The implications of these results are discussed with a particular focus on what strategies can be applied to support academics more effectively as they manage the ongoing challenge of AI in higher education.\nImplications for practice or policy:\n\nExperienced academics may be unable to distinguish between texts written by contemporary generative AI technology and humans.\nAcademics are uncertain about the current capabilities of generative AI and need support in redesigning assessments that succeed in providing robust evidence of student achievement of learning outcomes.\nInstitutions must assess the adequacy of their assessment designs, AI use policies, and AI-related procedures to enhance students’ capacity for effective and ethical use of generative AI technology.\n","PeriodicalId":47812,"journal":{"name":"Australasian Journal of Educational Technology","volume":"3 6","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Academics' perceptions of ChatGPT-generated written outputs: A practical application of Turing’s Imitation Game\",\"authors\":\"Joshua A Matthews, Catherine Rita Volpe\",\"doi\":\"10.14742/ajet.8896\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) technology, such as Chat Generative Pre-trained Transformer (ChatGPT), is evolving quickly and having a significant impact on the higher education sector. Although the impact of ChatGPT on academic integrity processes is a key concern, little is known about whether academics can reliably recognise texts that have been generated by AI. This qualitative study applies Turing’s Imitation Game to investigate 16 education academics’ perceptions of two pairs of texts written by either ChatGPT or a human. Pairs of texts, written in response to the same task, were used as the stimulus for interviews that probed academics’ perceptions of text authorship and the textual features that were important in their decision-making. 
Results indicated academics were only able to identify AI-generated texts half of the time, highlighting the sophistication of contemporary generative AI technology. Academics perceived the following categories as important for their decision-making: voice, word usage, structure, task achievement and flow. All five categories of decision-making were variously used to rationalise both accurate and inaccurate decisions about text authorship. The implications of these results are discussed with a particular focus on what strategies can be applied to support academics more effectively as they manage the ongoing challenge of AI in higher education.\\nImplications for practice or policy:\\n\\nExperienced academics may be unable to distinguish between texts written by contemporary generative AI technology and humans.\\nAcademics are uncertain about the current capabilities of generative AI and need support in redesigning assessments that succeed in providing robust evidence of student achievement of learning outcomes.\\nInstitutions must assess the adequacy of their assessment designs, AI use policies, and AI-related procedures to enhance students’ capacity for effective and ethical use of generative AI technology.\\n\",\"PeriodicalId\":47812,\"journal\":{\"name\":\"Australasian Journal of Educational Technology\",\"volume\":\"3 6\",\"pages\":\"\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Australasian Journal of Educational Technology\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://doi.org/10.14742/ajet.8896\",\"RegionNum\":3,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Australasian Journal of Educational Technology","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.14742/ajet.8896","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Artificial intelligence (AI) technology, such as Chat Generative Pre-trained Transformer (ChatGPT), is evolving quickly and having a significant impact on the higher education sector. Although the impact of ChatGPT on academic integrity processes is a key concern, little is known about whether academics can reliably recognise texts that have been generated by AI. This qualitative study applies Turing’s Imitation Game to investigate 16 education academics’ perceptions of two pairs of texts written by either ChatGPT or a human. Pairs of texts, written in response to the same task, were used as the stimulus for interviews that probed academics’ perceptions of text authorship and the textual features that were important in their decision-making. Results indicated academics were only able to identify AI-generated texts half of the time, highlighting the sophistication of contemporary generative AI technology. Academics perceived the following categories as important for their decision-making: voice, word usage, structure, task achievement and flow. All five categories of decision-making were variously used to rationalise both accurate and inaccurate decisions about text authorship. The implications of these results are discussed with a particular focus on what strategies can be applied to support academics more effectively as they manage the ongoing challenge of AI in higher education.

Implications for practice or policy:

Experienced academics may be unable to distinguish between texts written by contemporary generative AI technology and humans.
Academics are uncertain about the current capabilities of generative AI and need support in redesigning assessments that succeed in providing robust evidence of student achievement of learning outcomes.
Institutions must assess the adequacy of their assessment designs, AI use policies, and AI-related procedures to enhance students’ capacity for effective and ethical use of generative AI technology.
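The finding that academics identified the AI-generated text only about half of the time amounts to chance-level performance. As a rough illustration only (the study is qualitative and reports no such test; the judgement count of 16 academics times 2 text pairs is an assumption about the setup), a minimal Python sketch of an exact binomial test shows why roughly 50% accuracy over so few judgements cannot be distinguished from guessing:

```python
# Minimal sketch, not from the paper: treat each authorship call as a Bernoulli
# trial and ask whether ~50% accuracy beats the 50% expected from pure guessing.
from scipy.stats import binomtest

n_judgements = 16 * 2          # assumed: 16 academics, each judging 2 text pairs
n_correct = n_judgements // 2  # "half of the time" -> about 16 correct calls

# One-sided exact binomial test against chance (p = 0.5)
result = binomtest(n_correct, n_judgements, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_judgements:.0%}, p = {result.pvalue:.2f}")
# Prints roughly: accuracy = 50%, p = 0.57 -> no evidence the judges beat chance.
```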
Source journal: Australasian Journal of Educational Technology (Education & Educational Research)
CiteScore: 7.60
Self-citation rate: 7.30%
Number of articles: 54
Review time: 36 weeks