Can AI simulate or emulate human stance? Using metadiscourse to compare GPT-generated and human-authored academic book reviews
Guangyuan Yao, Zhaoxia Liu
Journal of Pragmatics, Volume 247, Pages 103-115
DOI: 10.1016/j.pragma.2025.07.018
Published: 2025-08-14 (Journal Article)
Impact Factor: 1.7; JCR category: Language & Linguistics
Full text: https://www.sciencedirect.com/science/article/pii/S0378216625001833
Citations: 0
Abstract
This study investigates whether generative AI (represented by ChatGPT) can simulate or even emulate the stance expressed by human authors in the specific genre of academic book reviews. Through a comparative analysis of ChatGPT-generated and human-authored reviews, the study examines the use of interactional metadiscourse markers (hedges, boosters, attitude markers, and self-mention) to reveal current AI's capabilities and limitations in handling complex evaluative discourse and interpersonal interaction. The findings indicate that ChatGPT employs interactional metadiscourse markers more frequently overall than human authors, driven largely by its significant overuse of attitude markers. However, it significantly underuses hedges and self-mention, suggesting a reliance on explicit evaluative language at the expense of nuanced caution and authorial presence. These results highlight that current AI's simulation of human writing is genre-sensitive but incomplete, particularly in achieving the balance of caution, conviction, and authorial presence that is typical of human reviewers. The distinct metadiscoursal patterns identified may serve as linguistic fingerprints for distinguishing AI-generated reviews from human-authored ones. The study also offers pedagogical implications, emphasizing the need for educators and students to recognize current AI's limitations in modeling nuanced stance and to foster an authentic authorial voice in evaluative genres.
About the Journal
Since 1977, the Journal of Pragmatics has provided a forum for bringing together a wide range of research in pragmatics, including cognitive pragmatics, corpus pragmatics, experimental pragmatics, historical pragmatics, interpersonal pragmatics, multimodal pragmatics, sociopragmatics, theoretical pragmatics and related fields. Our aim is to publish innovative pragmatic scholarship from all perspectives, which contributes to theories of how speakers produce and interpret language in different contexts drawing on attested data from a wide range of languages/cultures in different parts of the world. The Journal of Pragmatics also encourages work that uses attested language data to explore the relationship between pragmatics and neighbouring research areas such as semantics, discourse analysis, conversation analysis and ethnomethodology, interactional linguistics, sociolinguistics, linguistic anthropology, media studies, psychology, sociology, and the philosophy of language. Alongside full-length articles, discussion notes and book reviews, the journal welcomes proposals for high quality special issues in all areas of pragmatics which make a significant contribution to a topical or developing area at the cutting-edge of research.