Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux
{"title":"比较人工智能和人类同行评议:对11篇文章的研究。","authors":"Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux","doi":"10.1016/j.hansur.2025.102225","DOIUrl":null,"url":null,"abstract":"<p><p>While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and after accepted by another, were anonymized by removing the title page from the original PDF submission and subsequently evaluated by requesting ChatGPT 4o and o1 to determine each article's eligibility for publication and generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (including the original human reviews from both the rejecting and the accepting journals and ChatGPT-generated) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%. ChatGPT 4o's decisions were in accordance with those of the journal with the lowest impact factor, which was 68%, while ChatGPT o1's was 71%. The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid \"hallucinations.\" Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.</p>","PeriodicalId":94023,"journal":{"name":"Hand surgery & rehabilitation","volume":" ","pages":"102225"},"PeriodicalIF":1.0000,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparing AI-generated and human peer reviews: A study on 11 articles.\",\"authors\":\"Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux\",\"doi\":\"10.1016/j.hansur.2025.102225\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and after accepted by another, were anonymized by removing the title page from the original PDF submission and subsequently evaluated by requesting ChatGPT 4o and o1 to determine each article's eligibility for publication and generate a peer review. 
The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (including the original human reviews from both the rejecting and the accepting journals and ChatGPT-generated) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%. ChatGPT 4o's decisions were in accordance with those of the journal with the lowest impact factor, which was 68%, while ChatGPT o1's was 71%. The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid \\\"hallucinations.\\\" Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.</p>\",\"PeriodicalId\":94023,\"journal\":{\"name\":\"Hand surgery & rehabilitation\",\"volume\":\" \",\"pages\":\"102225\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Hand surgery & rehabilitation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.hansur.2025.102225\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Hand surgery & rehabilitation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.hansur.2025.102225","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Comparing AI-generated and human peer reviews: A study on 11 articles.
While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and subsequently accepted by another, were anonymized by removing the title page from the original PDF submission and then evaluated by asking ChatGPT 4o and ChatGPT o1 to determine each article's eligibility for publication and to generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (the original human reviews from both the rejecting and the accepting journals, as well as those generated by ChatGPT) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the higher impact factor was 32%, versus 29% for ChatGPT o1. The concordance with the journal with the lower impact factor was 68% for ChatGPT 4o and 71% for ChatGPT o1. The ARCADIA scores of peer reviews written by human reviewers (2.8 for the journals that accepted the articles and 3.2 for those that rejected them) were lower than those of ChatGPT 4o (4.8) and ChatGPT o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid "hallucinations." Many of its capabilities surpass those of human reviewers, but its limitations must be managed rigorously to improve publication quality.
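The summary statistics reported in the abstract (acceptance rates, concordance with each journal's decision, and mean ARCADIA scores) reduce to simple proportions and averages. The Python sketch below is illustrative only: it assumes accept/reject decisions coded as strings and a 20-item ARCADIA rating list, and the decision lists and ratings shown are hypothetical placeholders, not the study's data.

# Illustrative sketch only: how the abstract's summary statistics could be computed.
# The decision lists and ratings below are hypothetical, not the study's data.

def concordance(model_decisions, journal_decisions):
    """Fraction of articles where the model's accept/reject decision matches the journal's."""
    matches = sum(m == j for m, j in zip(model_decisions, journal_decisions))
    return matches / len(model_decisions)

def arcadia_mean(item_ratings):
    """Mean of the 20 ARCADIA items, each rated 1 to 5 on a Likert scale."""
    assert len(item_ratings) == 20 and all(1 <= r <= 5 for r in item_ratings)
    return sum(item_ratings) / len(item_ratings)

# Hypothetical decisions for 11 articles ("A" = accept, "R" = reject).
model_decisions   = ["A"] * 10 + ["R"]
accepting_journal = ["A"] * 11   # journal that ultimately accepted each article
rejecting_journal = ["R"] * 11   # journal that initially rejected each article

print(round(concordance(model_decisions, accepting_journal), 2))  # 0.91 with these placeholder lists
print(round(concordance(model_decisions, rejecting_journal), 2))  # 0.09 with these placeholder lists
print(arcadia_mean([4, 5] * 10))                                  # 4.5 with these placeholder ratings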