Engagement strategies in human-written and AI-generated academic essays: A corpus-based study

Sharif Alghazo, Ghaleb Rabab'ah, Dina Abdel Salam El-Dakhs, Ayah Mustafa

Ampersand, Volume 15, Article 100237. Published 2025-08-13. DOI: 10.1016/j.amper.2025.100237
Abstract
Drawing on an appraisal theory framework, this corpus-based study explores the use and functions of engagement strategies in human-written and AI-generated academic essays. A total of 80 essays were analysed: 40 human-written essays from the LOCNESS corpus, which comprises essays by university-level native English writers, and 40 essays generated by ChatGPT. A mixed-methods approach was employed, combining quantitative analysis (including chi-square tests) with qualitative analysis of Expansion and Contraction strategies. The analysis shows that both Expansion and Contraction strategies occur significantly more frequently in human-written texts than in AI-generated texts. Native English writers use a significantly higher proportion of Entertain markers, showing sensitivity to alternative standpoints, and deploy Disclaim markers to actively counter opposing arguments. AI-generated texts, by contrast, rely heavily on objective citation and hedging, make limited use of strong Proclaim markers, and almost entirely lack Concur dialogistic positions. The two text types also contrast strikingly in engagement functions: human writers employ a significantly higher proportion of complex rhetoric and deeper argumentation, as supported by the statistical analysis. The findings carry implications for educators and writing instructors aiming to enhance students' argumentative skills, and for developers of AI writing tools seeking to improve the rhetorical complexity and engagement of generated texts.