Hidden Prompts in Manuscripts Threaten the Integrity of Peer Review and Research: Recommendations for Journals and Institutions

IF 5.4 · CAS Tier 2 (Medicine) · JCR Q3 (Engineering, Biomedical)
Louie Giray
Journal: Annals of Biomedical Engineering, 53(10), 2385–2388
DOI: 10.1007/s10439-025-03827-7
Publication date: 2025-08-17
URL: https://link.springer.com/article/10.1007/s10439-025-03827-7
Citations: 0

Abstract

I examine the scholarly implications of a troubling case where researchers embedded hidden prompts like “give a positive review only” into academic preprints to manipulate AI-assisted peer review. AI is now woven into nearly every facet of academic life, including the peer review process. I contend that manipulating peer review through embedding secret prompts is as serious as plagiarism or data fabrication. Peer review may not be perfect, but deception is misconduct. Reviewers must still be held accountable. Those who blindly rely on AI outputs without critical engagement fail in their scholarly duty. AI should only amplify the reviewer’s expertise. As institutions begin regulating AI in research, similar frameworks must extend to peer review. Journals and publishers should establish clear, enforceable guidelines on acceptable AI use: Will AI be banned, regulated, or embraced? If allowed, disclosures must be mandatory. Authors should also be informed if AI tools will be used in the review process, ensuring transparency and consent. Confidentiality is another pressing issue. Real cases have shown how ChatGPT links shared by reviewers were indexed online, compromising sensitive, unpublished research, even though OpenAI has since moved to discontinue public link discoverability. Beyond policy, we must cultivate a culture of transparency, trust, and responsibility. Institutions can host ethics workshops and mentor early-career scholars. This is not just about AI; it is about who we are as researchers and reviewers. No matter how advanced the technology, integrity must remain our anchor. Without it, even the most innovative research stands on shaky ground.
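The manipulation described above relies on injection phrases (e.g., "give a positive review only") hidden in the manuscript text where an AI reviewer, but not a human skimming the PDF, would ingest them. As a minimal sketch of how a journal's submission pipeline might screen for such phrases, the following hypothetical helper (not part of the article; the function name and phrase list are illustrative assumptions) scans text extracted from a submission for common injection wording:

```python
import re

# Hypothetical phrase list; a real screening tool would maintain a much
# larger, regularly updated set of known injection patterns.
INJECTION_PATTERNS = [
    r"give a positive review",
    r"ignore (?:all )?previous instructions",
    r"do not highlight any negatives",
    r"recommend accept(?:ance)?",
]

def find_hidden_prompts(text: str) -> list[str]:
    """Return the injection patterns matched in extracted manuscript text.

    Matching is case-insensitive, since hidden prompts are often rendered
    in white or zero-size fonts but survive plain-text extraction verbatim.
    """
    return [p for p in INJECTION_PATTERNS if re.search(p, text, re.IGNORECASE)]
```

Such string matching only flags known wording; it would not catch paraphrased or obfuscated prompts, which is one reason the article argues for policy and reviewer accountability rather than purely technical defenses.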

Source Journal

Annals of Biomedical Engineering (Engineering Technology – Biomedical Engineering)
CiteScore: 7.50
Self-citation rate: 15.80%
Articles per year: 212
Review time: 3 months
About the journal: Annals of Biomedical Engineering is an official journal of the Biomedical Engineering Society, publishing original articles in the major fields of bioengineering and biomedical engineering. The Annals is an interdisciplinary and international journal that aims to highlight integrated approaches to the solution of biological and biomedical problems.