Domenico Marrella , Su Jiang , Kyros Ipaktchi , Philippe Liverneaux
{"title":"比较人工智能和人类同行评议:对11篇文章的研究。","authors":"Domenico Marrella , Su Jiang , Kyros Ipaktchi , Philippe Liverneaux","doi":"10.1016/j.hansur.2025.102225","DOIUrl":null,"url":null,"abstract":"<div><div>While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT’s ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers.</div><div>Eleven published articles in the field of hand surgery, initially rejected by one journal and after accepted by another, were anonymized by removing the title page from the original PDF submission and subsequently evaluated by requesting ChatGPT 4o and o1 to determine each article’s eligibility for publication and generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study.</div><div>An experienced hand surgeon assessed all peer reviews (including the original human reviews from both the rejecting and the accepting journals and ChatGPT-generated) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale.</div><div>The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%. ChatGPT 4o's decisions were in accordance with those of the journal with the lowest impact factor, which was 68%, while ChatGPT o1's was 71%. The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9).</div><div>In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid \"hallucinations.\" Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.</div></div>","PeriodicalId":54301,"journal":{"name":"Hand Surgery & Rehabilitation","volume":"44 4","pages":"Article 102225"},"PeriodicalIF":1.0000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparing AI-generated and human peer reviews: A study on 11 articles\",\"authors\":\"Domenico Marrella , Su Jiang , Kyros Ipaktchi , Philippe Liverneaux\",\"doi\":\"10.1016/j.hansur.2025.102225\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT’s ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. 
We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers.</div><div>Eleven published articles in the field of hand surgery, initially rejected by one journal and after accepted by another, were anonymized by removing the title page from the original PDF submission and subsequently evaluated by requesting ChatGPT 4o and o1 to determine each article’s eligibility for publication and generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study.</div><div>An experienced hand surgeon assessed all peer reviews (including the original human reviews from both the rejecting and the accepting journals and ChatGPT-generated) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale.</div><div>The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%. ChatGPT 4o's decisions were in accordance with those of the journal with the lowest impact factor, which was 68%, while ChatGPT o1's was 71%. The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9).</div><div>In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid \\\"hallucinations.\\\" Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.</div></div>\",\"PeriodicalId\":54301,\"journal\":{\"name\":\"Hand Surgery & Rehabilitation\",\"volume\":\"44 4\",\"pages\":\"Article 102225\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Hand Surgery & Rehabilitation\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468122925001471\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Hand Surgery & Rehabilitation","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468122925001471","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Comparing AI-generated and human peer reviews: A study on 11 articles
While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT’s ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers.
Eleven published articles in the field of hand surgery, each initially rejected by one journal and later accepted by another, were anonymized by removing the title page from the original PDF submission. They were then evaluated by asking ChatGPT 4o and o1 to determine each article's eligibility for publication and to generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study.
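As a rough illustration of the workflow described above, the sketch below shows how one might feed an anonymized manuscript to an OpenAI model and request an editorial decision plus a review. It assumes the OpenAI Python SDK and pypdf; the prompt wording, the `review_manuscript` helper, and the file name are illustrative assumptions, since the study does not publish its exact instructions to the model.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and pypdf.
# The prompt text and helper name are hypothetical; the study does not
# publish the exact instructions it gave the model.
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_manuscript(pdf_path: str, model: str = "gpt-4o") -> str:
    """Ask a model for an accept/reject decision and a peer review."""
    # The title page is assumed to have been removed beforehand, matching
    # the study's anonymization step.
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    response = client.chat.completions.create(
        model=model,  # the study used "gpt-4o" and "o1"; note that o1
                      # models restrict some chat parameters
        messages=[
            {"role": "user",
             "content": (
                 "You are a peer reviewer for a hand surgery journal. "
                 "Decide whether this manuscript is eligible for "
                 "publication (accept or reject) and write a structured "
                 "peer review.\n\n" + text
             )},
        ],
    )
    return response.choices[0].message.content


# Example call (hypothetical file name):
# print(review_manuscript("anonymized_submission.pdf"))
```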
An experienced hand surgeon assessed all peer reviews (the original human reviews from both the rejecting and the accepting journals, as well as those generated by ChatGPT) using the ARCADIA score, which consists of 20 items each rated from 1 to 5 on a Likert scale.
The average acceptance rate was 95% for ChatGPT 4o and 98% for ChatGPT o1. ChatGPT 4o's decisions agreed with those of the journal with the highest impact factor in 32% of cases, versus 29% for ChatGPT o1; agreement with the journal with the lowest impact factor was 68% for ChatGPT 4o and 71% for ChatGPT o1. The ARCADIA scores of the human peer reviews (2.8 for the journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9).
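For clarity, the toy sketch below shows how the two kinds of numbers reported here can be computed: percent agreement between model and journal decisions, and a mean ARCADIA score over 20 Likert items. All inputs are invented placeholders, not the study's data, and treating the ARCADIA score as the item mean is an assumption.

```python
# Toy illustration of the metrics above; all inputs are invented placeholders.

def percent_agreement(model: list[str], journal: list[str]) -> float:
    # Share of articles where the model's accept/reject decision matched
    # the journal's decision.
    matches = sum(m == j for m, j in zip(model, journal))
    return 100 * matches / len(model)

def arcadia_mean(items: list[int]) -> float:
    # ARCADIA: 20 items, each rated 1 to 5 on a Likert scale. Reporting the
    # item mean is an assumption about how the summary score is formed.
    assert len(items) == 20 and all(1 <= i <= 5 for i in items)
    return sum(items) / len(items)

model_decisions   = ["accept", "accept", "reject", "accept"]  # hypothetical
journal_decisions = ["reject", "accept", "reject", "accept"]  # hypothetical
print(percent_agreement(model_decisions, journal_decisions))  # 75.0
print(arcadia_mean([5] * 12 + [4] * 8))                       # 4.6
```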
In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid "hallucinations." Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.
Journal description:
As the official publication of the French, Belgian and Swiss Societies for Surgery of the Hand, as well as of the French Society of Rehabilitation of the Hand & Upper Limb, "Hand Surgery and Rehabilitation" - formerly named "Chirurgie de la Main" - publishes original articles, literature reviews, technical notes, and clinical cases. It is indexed in the main international databases (including Medline). Initially a platform for French-speaking hand surgeons, the journal now publishes its articles in English to disseminate its authors' scientific findings more widely. The journal also includes a biannual supplement in French, the monograph of the French Society for Surgery of the Hand, where comprehensive reviews in the fields of hand, peripheral nerve and upper limb surgery are presented.
Official publication of the French Society for Surgery of the Hand, the French Society for Rehabilitation of the Hand (SFRM-GEMMSOR), the Swiss Society for Surgery of the Hand and the Belgian Hand Group, and indexed in the main international databases (Medline, Embase, Pascal, Scopus), Hand Surgery and Rehabilitation - formerly titled "Chirurgie de la Main" - publishes original articles, literature reviews, technical notes, and clinical cases. Initially a French-language platform for the specialty, the journal is now moving to English in order to become a scientific and training reference for the specialty in France and in Europe. With 6 publications in English per year, the journal also includes a biannual supplement, the monograph of the GEM, presenting comprehensive reviews in French in the fields of hand, peripheral nerve and upper limb surgery.