Disciplinary variation of metadiscourse: A comparison of human-written and ChatGPT-generated English research article abstracts

IF 3.1 | CAS Zone 1 (Literature) | Q1 Education & Educational Research
Man Zhang, Jiawei Zhang
{"title":"Disciplinary variation of metadiscourse: A comparison of human-written and ChatGPT-generated English research article abstracts","authors":"Man Zhang ,&nbsp;Jiawei Zhang","doi":"10.1016/j.jeap.2025.101540","DOIUrl":null,"url":null,"abstract":"<div><div>In order to identify more fundamental and subtler similarities and differences between human-written and ChatGPT-generated academic texts, and enhance the development and application of LLMs and understanding of human language, we use a self-built corpus and incorporate a bottom-up approach and statistical methods to compare metadiscourse variation across eight disciplines in human-written and ChatGPT-generated English research article abstracts. Results show that disciplinary variation of metadiscourse in human-written and ChatGPT-generated abstracts agrees in general but not in detail. Generally, in both types of abstracts, all disciplines use metadiscourse to fulfill three broad and eight specific discourse functions: Referring to text participants (Referring to writer, Referring to text), Describing text actions (Introducing, Arguing, Finding, Presenting), Describing text circumstances (Phoric marking, Code glossing), among which Referring to text participants and Introducing are prominent. Besides, disciplines in both types of abstracts exhibit the hard-soft discipline division in both frequencies and discourse functions, with metadiscourse markers and major discourse functions more prevalent in soft disciplines. Specifically, compared to disciplines of human-written abstracts, those of ChatGPT-generated abstracts differ more in frequencies but less in major discourse functions. The similarities and differences can be attributed to ChatGPT's working mechanism, training process, and limitation in accomplishing domain-specific tasks.</div></div>","PeriodicalId":47717,"journal":{"name":"Journal of English for Academic Purposes","volume":"76 ","pages":"Article 101540"},"PeriodicalIF":3.1000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of English for Academic Purposes","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1475158525000712","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

To identify deeper and subtler similarities and differences between human-written and ChatGPT-generated academic texts, and to inform the development and application of LLMs as well as the understanding of human language, we use a self-built corpus together with a bottom-up approach and statistical methods to compare metadiscourse variation across eight disciplines in human-written and ChatGPT-generated English research article abstracts. Results show that disciplinary variation of metadiscourse in human-written and ChatGPT-generated abstracts agrees in general but not in detail. Generally, in both types of abstracts, all disciplines use metadiscourse to fulfill three broad and eight specific discourse functions: Referring to text participants (Referring to writer, Referring to text), Describing text actions (Introducing, Arguing, Finding, Presenting), and Describing text circumstances (Phoric marking, Code glossing), among which Referring to text participants and Introducing are prominent. In addition, disciplines in both types of abstracts exhibit the hard-soft discipline division in both frequencies and discourse functions, with metadiscourse markers and the major discourse functions more prevalent in soft disciplines. Specifically, compared with the disciplines of human-written abstracts, those of ChatGPT-generated abstracts differ more in frequencies but less in major discourse functions. These similarities and differences can be attributed to ChatGPT's working mechanism, its training process, and its limitations in accomplishing domain-specific tasks.
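The comparison described in the abstract rests on normalized frequencies of metadiscourse markers per discipline and statistical tests of the difference between the human-written and ChatGPT-generated subcorpora. As a minimal sketch of that kind of comparison (not the authors' actual pipeline), the Python snippet below computes a frequency per 1,000 tokens and Dunning's log-likelihood statistic for one hypothetical discipline; all counts and labels are illustrative assumptions, not data from the study.

```python
# Illustrative sketch only: hypothetical counts, not the study's data or code.
import math

def normalized_frequency(marker_count: int, token_count: int, per: int = 1000) -> float:
    """Metadiscourse markers per `per` tokens."""
    return marker_count / token_count * per

def log_likelihood(a: int, b: int, n1: int, n2: int) -> float:
    """Dunning's log-likelihood (G2) for a marker occurring `a` times in a
    corpus of n1 tokens vs. `b` times in a corpus of n2 tokens."""
    e1 = n1 * (a + b) / (n1 + n2)   # expected count in corpus 1
    e2 = n2 * (a + b) / (n1 + n2)   # expected count in corpus 2
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

# Hypothetical marker and token counts for one discipline's abstracts.
human   = {"markers": 420, "tokens": 12000}   # human-written subcorpus
chatgpt = {"markers": 310, "tokens": 11500}   # ChatGPT-generated subcorpus

print(normalized_frequency(human["markers"], human["tokens"]))      # 35.0 per 1,000
print(normalized_frequency(chatgpt["markers"], chatgpt["tokens"]))  # ~27.0 per 1,000
print(log_likelihood(human["markers"], chatgpt["markers"],
                     human["tokens"], chatgpt["tokens"]))           # G2 > 3.84 -> p < .05
```

Repeating such a comparison for each discipline and each functional category would yield the kind of frequency-based, hard-versus-soft contrasts the abstract reports.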
Source journal metrics:
- CiteScore: 6.60
- Self-citation rate: 13.30%
- Articles published per year: 81
- Review time: 57 days
Journal description: The Journal of English for Academic Purposes provides a forum for the dissemination of information and views which enables practitioners of and researchers in EAP to keep current with developments in their field and to contribute to its continued updating. JEAP publishes articles, book reviews, conference reports, and academic exchanges in the linguistic, sociolinguistic and psycholinguistic description of English as it occurs in the contexts of academic study and scholarly exchange itself.