Personal experience with AI-generated peer reviews: a case study.

Impact factor: 7.2 · JCR Q1 (Ethics)
Nicholas Lo Vecchio
{"title":"人工智能产生的同行评议的个人经验:案例研究。","authors":"Nicholas Lo Vecchio","doi":"10.1186/s41073-025-00161-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.</p><p><strong>Methods: </strong>This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.</p><p><strong>Results: </strong>After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.</p><p><strong>Conclusions: </strong>Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"4"},"PeriodicalIF":7.2000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974187/pdf/","citationCount":"0","resultStr":"{\"title\":\"Personal experience with AI-generated peer reviews: a case study.\",\"authors\":\"Nicholas Lo Vecchio\",\"doi\":\"10.1186/s41073-025-00161-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.</p><p><strong>Methods: </strong>This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. 
The primary research limitation of this article is that it is based on one individual's personal experience.</p><p><strong>Results: </strong>After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.</p><p><strong>Conclusions: </strong>Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.</p>\",\"PeriodicalId\":74682,\"journal\":{\"name\":\"Research integrity and peer review\",\"volume\":\"10 1\",\"pages\":\"4\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974187/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Research integrity and peer review\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s41073-025-00161-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research integrity and peer review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s41073-025-00161-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract


Background: While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.

Methods: This is a case report on the timeline of the incident and on my and the journal's subsequent actions. Supporting evidence includes text patterns in the reports, online AI detection tools, and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.
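The paper itself reports no code, and "text patterns" here refers to qualitative evidence. Purely as a rough illustration of what a first pass over such patterns could look like, the following minimal Python sketch scans a review for stock phrases often associated with LLM output. The phrase list, the name LLM_STOCK_PHRASES, and the function flag_stock_phrases are all hypothetical and not drawn from the paper; a match is suggestive at best and never proof of AI use.

    # Minimal illustrative sketch, not from the paper: scan a review for
    # stock phrases commonly associated with LLM-generated text.
    # The phrase list is hypothetical and incomplete; a match is suggestive
    # evidence only, never proof that a review was AI-generated.

    LLM_STOCK_PHRASES = [
        "as an ai language model",
        "it is important to note that",
        "delve into",
        "in conclusion, the manuscript",
        "this study provides valuable insights",
    ]

    def flag_stock_phrases(review_text: str) -> list[str]:
        """Return every listed stock phrase found in the review (case-insensitive)."""
        lowered = review_text.lower()
        return [phrase for phrase in LLM_STOCK_PHRASES if phrase in lowered]

    if __name__ == "__main__":
        sample = ("It is important to note that the authors should delve into "
                  "the prior literature in more depth.")
        print(flag_stock_phrases(sample))
        # -> ['it is important to note that', 'delve into']

In practice, any such phrase-level check would need to be combined with the other kinds of evidence the author describes (detection tools, ChatGPT simulations, and the dialogue with the journal), since individual phrases also occur in human writing.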

Results: After I alleged the use of generative AI in December 2023, two months of back-and-forth ensued between me and the journal, ending in my withdrawal of the submission. The journal denied any ethical breach without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.

Conclusions: Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review, in which the identities of all stakeholders are declared, might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.
