Comparing AI-generated and human peer reviews: A study on 11 articles.

Impact Factor: 1.0
Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux
{"title":"比较人工智能和人类同行评议:对11篇文章的研究。","authors":"Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux","doi":"10.1016/j.hansur.2025.102225","DOIUrl":null,"url":null,"abstract":"<p><p>While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and after accepted by another, were anonymized by removing the title page from the original PDF submission and subsequently evaluated by requesting ChatGPT 4o and o1 to determine each article's eligibility for publication and generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (including the original human reviews from both the rejecting and the accepting journals and ChatGPT-generated) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%. ChatGPT 4o's decisions were in accordance with those of the journal with the lowest impact factor, which was 68%, while ChatGPT o1's was 71%. The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid \"hallucinations.\" Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.</p>","PeriodicalId":94023,"journal":{"name":"Hand surgery & rehabilitation","volume":" ","pages":"102225"},"PeriodicalIF":1.0000,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparing AI-generated and human peer reviews: A study on 11 articles.\",\"authors\":\"Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux\",\"doi\":\"10.1016/j.hansur.2025.102225\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and after accepted by another, were anonymized by removing the title page from the original PDF submission and subsequently evaluated by requesting ChatGPT 4o and o1 to determine each article's eligibility for publication and generate a peer review. 
The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (including the original human reviews from both the rejecting and the accepting journals and ChatGPT-generated) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%. ChatGPT 4o's decisions were in accordance with those of the journal with the lowest impact factor, which was 68%, while ChatGPT o1's was 71%. The ARCADIA scores of peer reviews generated by human reviewers (2.8 for journals that accepted the article and 3.2 for those that rejected it) were lower than those of ChatGPT 4o (4.8) and o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid \\\"hallucinations.\\\" Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.</p>\",\"PeriodicalId\":94023,\"journal\":{\"name\":\"Hand surgery & rehabilitation\",\"volume\":\" \",\"pages\":\"102225\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Hand surgery & rehabilitation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.hansur.2025.102225\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Hand surgery & rehabilitation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.hansur.2025.102225","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


While the peer review process remains the gold standard for evaluating the quality of scientific articles, it is facing a crisis due to the increase in submissions and prolonged review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that ChatGPT's peer review quality exceeds that of human reviewers. Eleven published articles in the field of hand surgery, initially rejected by one journal and later accepted by another, were anonymized by removing the title page from the original PDF submission and then evaluated by asking ChatGPT 4o and o1 to determine each article's eligibility for publication and to generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (the original human reviews from both the rejecting and the accepting journals, and the ChatGPT-generated reviews) using the ARCADIA score, which consists of 20 items rated from 1 to 5 on a Likert scale. The average acceptance rate of ChatGPT 4o was 95%, while that of ChatGPT o1 was 98%. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, whereas that of ChatGPT o1 was 29%; concordance with the journal with the lowest impact factor was 68% for ChatGPT 4o and 71% for ChatGPT o1. The ARCADIA scores of peer reviews written by human reviewers (2.8 for the journals that accepted the articles and 3.2 for those that rejected them) were lower than those of ChatGPT 4o (4.8) and o1 (4.9). In conclusion, ChatGPT can optimize the peer review process for scientific articles if it receives precise instructions to avoid "hallucinations." Many of its functionalities surpass human capabilities, but managing its limitations rigorously is essential to improving publication quality.
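
The abstract does not reproduce the prompts or analysis code used in the study, so the following is only a minimal sketch of how such an evaluation could be wired up. It assumes the OpenAI Python client with an API key in the environment, uses "gpt-4o" and "o1" as stand-ins for the ChatGPT 4o and o1 models named above, and the prompt wording and helper names (request_review, arcadia_mean, concordance) are hypothetical; only the 20-item, 1-to-5 Likert aggregation and the decision-concordance percentages described in the abstract are modeled.

```python
# Illustrative sketch only; not the authors' actual prompts or pipeline.
# Assumes the OpenAI Python client ("pip install openai") and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You are acting as a peer reviewer for a hand-surgery journal. "
    "Read the anonymized manuscript below, state whether it is eligible for "
    "publication (accept or reject), and write a structured peer review.\n\n"
    "{manuscript}"
)

def request_review(manuscript_text: str, model: str = "gpt-4o") -> str:
    """Ask the model for an editorial decision plus a peer review of one manuscript."""
    response = client.chat.completions.create(
        model=model,  # "gpt-4o" or "o1" as stand-ins for ChatGPT 4o / o1
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(manuscript=manuscript_text)}],
    )
    return response.choices[0].message.content

def arcadia_mean(item_ratings: list[int]) -> float:
    """Average of the 20 ARCADIA items, each rated 1-5 on a Likert scale."""
    assert len(item_ratings) == 20 and all(1 <= r <= 5 for r in item_ratings)
    return sum(item_ratings) / len(item_ratings)

def concordance(ai_decisions: list[str], journal_decisions: list[str]) -> float:
    """Percentage of manuscripts where the AI decision matches the journal's decision."""
    matches = sum(a == j for a, j in zip(ai_decisions, journal_decisions))
    return 100 * matches / len(ai_decisions)
```

In such a setup, each of the 11 anonymized manuscripts would be passed through request_review once per model, and the resulting decisions and reviews summarized exactly as the abstract reports: acceptance rate per model, concordance with the higher- and lower-impact-factor journals, and mean ARCADIA score per review source.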
