Evaluating human ability to distinguish between ChatGPT-generated and original scientific abstracts.
Kylie J Nabata, Yasir AlShehri, Abdullah Mashat, Sam M Wiseman
Updates in Surgery, published 2025-01-24. DOI: 10.1007/s13304-025-02106-3
Citations: 0
Abstract
This study aimed to assess how accurately human reviewers could distinguish scientific abstracts generated by ChatGPT from the original abstracts. Participants completed an online survey presenting two research abstracts, one generated by ChatGPT and one original; they were asked to identify which abstract was AI-generated and to provide feedback on their preference and their perceptions of AI technology in academic writing. This observational cross-sectional study involved surgical trainees and faculty at the University of British Columbia. The survey was distributed to all surgeons and trainees affiliated with the University of British Columbia, across general surgery, orthopedic surgery, thoracic surgery, plastic surgery, cardiovascular surgery, vascular surgery, neurosurgery, urology, otolaryngology, pediatric surgery, and obstetrics and gynecology. A total of 41 participants completed the survey, of whom 10 (23.3%) were surgeons. Eighteen (40.0%) participants correctly identified the original abstract, and twenty-six (63.4%) preferred the ChatGPT abstract (p = 0.0001). On multivariate analysis, preferring the original abstract was associated with correctly identifying it [OR 7.46, 95% CI (1.78, 31.4), p = 0.006]. These results suggest that human reviewers cannot accurately distinguish between human- and AI-generated abstracts and, overall, tended to prefer the AI-generated abstracts. The findings contribute to understanding the implications of AI in manuscript production, including its benefits and ethical considerations.
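For readers who want to explore results of this kind quantitatively, the sketch below shows, in Python, one plausible way such survey statistics could be computed: exact binomial tests for the identification and preference rates, and a logistic regression yielding an odds ratio with a 95% confidence interval. The abstract does not specify which tests the authors actually used, and the regression inputs below are hypothetical placeholders, so the output will not reproduce the published figures (e.g., p = 0.0001 or OR 7.46).

# Hedged sketch only: the paper does not report its statistical methods, so this
# illustrates one plausible analysis pipeline for survey data of this shape.
import numpy as np
import statsmodels.api as sm
from scipy.stats import binomtest

# Counts reported in the abstract.
n_respondents = 41
n_correct = 18          # correctly identified the original abstract
n_prefer_chatgpt = 26   # preferred the ChatGPT-generated abstract

# Exact binomial test: is identification accuracy different from chance (50%)?
print("identification vs. chance:",
      binomtest(n_correct, n_respondents, p=0.5).pvalue)

# Exact binomial test: does preference for the ChatGPT abstract differ from 50/50?
print("preference vs. 50/50:",
      binomtest(n_prefer_chatgpt, n_respondents, p=0.5).pvalue)

# Logistic regression sketch for the multivariate result: odds of correctly
# identifying the original abstract as a function of preferring it.
# The predictor and outcome here are random placeholders, NOT the study data.
rng = np.random.default_rng(0)
prefer_original = rng.integers(0, 2, n_respondents)   # hypothetical predictor
correct = rng.integers(0, 2, n_respondents)           # hypothetical outcome
X = sm.add_constant(prefer_original.astype(float))
fit = sm.Logit(correct, X).fit(disp=0)
or_est = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR {or_est:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")

In practice one would fit the model to the actual survey responses; a simple 50/50 binomial test on 26 of 41 would not by itself yield p = 0.0001, so the published value presumably reflects a different test or comparison than the one sketched here.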
About the journal:
Updates in Surgery (UPIS) was founded in 2010 as the official journal of the Italian Society of Surgery. It is an international, English-language, peer-reviewed journal dedicated to the surgical sciences. Its main goal is to offer a valuable update on the most recent developments in rapidly evolving surgical techniques, which compel the community of surgeons to rigorous debate and continuous refinement of standards of care. In this respect, position papers on the most debated surgical approaches and on accreditation criteria have been published and remain welcome in the future.
Besides its focus on general surgery, the journal pays particular attention to cutting-edge topics and emerging surgical fields, which are published in monothematic issues guest-edited by well-known experts.
Updates in Surgery considers various types of papers: editorials, comprehensive reviews, original studies, and technical notes related to specific surgical procedures and techniques in liver, colorectal, gastric, pancreatic, robotic, and bariatric surgery.