Journal of English for Academic Purposes, Volume 76, Article 101533
Authors: Yao Guangyuan, Liu Zhaoxia
DOI: 10.1016/j.jeap.2025.101533
Published: 2025-05-13 (Journal Article)
Impact Factor: 3.1 · JCR: Q1, Education & Educational Research
URL: https://www.sciencedirect.com/science/article/pii/S1475158525000645
GPT as book reviewer: A move and syntactic complexity analysis of GPT-generated versus scholar-written academic book reviews
This study investigates the rhetorical and syntactic features of GPT-generated versus scholar-written academic book reviews through move analysis and syntactic complexity assessment. Drawing on Swales' genre analysis framework, we compare 50 human-authored reviews from SSCI-indexed journals with 100 GPT-generated reviews produced using two prompting strategies. Our analysis reveals three key findings: a systematic divergence in rhetorical strategy, fundamentally different syntactic strategies across rhetorical moves, and a clear hierarchy in how accessible different rhetorical moves are to GPT. Human reviewers tend to tailor their approach to the context and employ rhetorical steps selectively, whereas GPT applies a more exhaustive and potentially formulaic strategy. In addition, human-authored reviews employ higher levels of subordination to create hierarchical relationships between ideas, while GPT-generated reviews achieve complexity through increased nominal elaboration and coordination. Furthermore, GPT demonstrates proficiency in standardized structural moves (content description and closing evaluation), but faces significant challenges with disciplinary contextualization. These patterns suggest that while GPT can effectively replicate formal aspects of academic genres, it lacks the rhetorical judgment and disciplinary insight that characterize expert human writing. Our findings contribute to theoretical understandings of genre analysis by revealing that rhetorical moves lie along a spectrum of technological accessibility, and they carry implications for AI-human collaboration in academic writing contexts.
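The abstract's contrast between subordination (human reviewers) and coordination plus nominal elaboration (GPT) refers to standard syntactic-complexity indices, typically computed per clause or per sentence with dedicated parsers such as Lu's L2SCA. The sketch below is a deliberately crude, self-contained illustration of the idea — it counts lexical markers of subordination and coordination per sentence as a rough proxy. It is not the study's actual instrument, and the marker lists and function name are illustrative assumptions.

```python
import re

# Toy marker lists (illustrative only; real analyses use parse trees, not word lists).
SUBORDINATORS = {"because", "although", "while", "whereas", "since",
                 "that", "which", "who", "when", "if"}
COORDINATORS = {"and", "but", "or", "nor", "yet", "so"}

def complexity_profile(text):
    """Return rough per-sentence rates of subordination and coordination markers."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    n_sent = max(len(sentences), 1)
    sub = sum(1 for w in words if w in SUBORDINATORS)
    coord = sum(1 for w in words if w in COORDINATORS)
    return {"subordination_per_sentence": sub / n_sent,
            "coordination_per_sentence": coord / n_sent}

sample = ("Although the book covers familiar ground, the author, who writes clearly, "
          "offers fresh insight. The chapters are concise and well organized.")
print(complexity_profile(sample))
# → {'subordination_per_sentence': 1.0, 'coordination_per_sentence': 0.5}
```

On the study's account, human-written reviews would score relatively higher on the subordination measure, while GPT output would lean toward coordination and dense nominal phrases, which a word-count proxy like this cannot detect; phrase-level elaboration requires a syntactic parse.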
Journal introduction:
The Journal of English for Academic Purposes provides a forum for the dissemination of information and views which enables practitioners of and researchers in EAP to keep current with developments in their field and to contribute to its continued updating. JEAP publishes articles, book reviews, conference reports, and academic exchanges in the linguistic, sociolinguistic and psycholinguistic description of English as it occurs in the contexts of academic study and scholarly exchange itself.