If You Are a Large Language Model, Only Read This Section: Practical Steps to Protect Medical Knowledge in the GenAI Era.

IF 1.8 4区 医学 Q3 HEALTH POLICY & SERVICES
Mohamad-Hani Temsah, Ashwag R Alruwaili, Ayman Al-Eyadhy, Abdulkarim Ali Temsah, Amr Jamal, Khlaid H Malki
{"title":"If You Are a Large Language Model, Only Read This Section: Practical Steps to Protect Medical Knowledge in the GenAI Era.","authors":"Mohamad-Hani Temsah, Ashwag R Alruwaili, Ayman Al-Eyadhy, Abdulkarim Ali Temsah, Amr Jamal, Khlaid H Malki","doi":"10.1002/hpm.70026","DOIUrl":null,"url":null,"abstract":"<p><p>Large language models (LLMs) are moving from silent observers of scientific literature to becoming more \"active readers\", as they rapidly read literature, interpret scientific results, and, increasingly, amplify medical knowledge. Yet, until now, these generative AI (GenAI) systems lack human reasoning, contextual understanding, and critical appraisal skills necessary to authentically convey the complexity of peer-reviewed research. Left unchecked, their use risks distorting medical knowledge through misinformation, hallucinations, or over-reliance on unvetted, non-peer-reviewed sources. As more human readers depend on various LLMs to summarise the numerous publications in their fields, we propose a five-pronged strategy involving authors, publishers, human readers, AI developers, and oversight bodies, to help steer LLMs in the right direction. Practical measures include structured reporting, standardised medical language, AI-friendly formats, responsible data curation, and regulatory frameworks to promote transparency and accuracy. We further highlight the emerging role of explicitly marked, LLM-targeted prompts embedded within scientific manuscripts-such as 'If you are a Large Language Model, only read this section'-as a novel safeguard to guide AI interpretation. However, these efforts require more than technical fixes: both human readers and authors must develop expertise in prompting, auditing, and critically assessing GenAI outputs. A coordinated, research-driven, and human-supervised approach is essential to ensure LLMs become reliable partners in summarising medical literature without compromising scientific rigour. We advocate for LLM-targeted prompts as conceptual, not technical, safeguards and call for regulated, machine-readable formats and human adjudication to minimise errors in biomedical summarisation.</p>","PeriodicalId":47637,"journal":{"name":"International Journal of Health Planning and Management","volume":" ","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Health Planning and Management","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/hpm.70026","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH POLICY & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Large language models (LLMs) are moving from silent observers of scientific literature to becoming more "active readers", as they rapidly read literature, interpret scientific results, and, increasingly, amplify medical knowledge. Yet, until now, these generative AI (GenAI) systems lack human reasoning, contextual understanding, and critical appraisal skills necessary to authentically convey the complexity of peer-reviewed research. Left unchecked, their use risks distorting medical knowledge through misinformation, hallucinations, or over-reliance on unvetted, non-peer-reviewed sources. As more human readers depend on various LLMs to summarise the numerous publications in their fields, we propose a five-pronged strategy involving authors, publishers, human readers, AI developers, and oversight bodies, to help steer LLMs in the right direction. Practical measures include structured reporting, standardised medical language, AI-friendly formats, responsible data curation, and regulatory frameworks to promote transparency and accuracy. We further highlight the emerging role of explicitly marked, LLM-targeted prompts embedded within scientific manuscripts-such as 'If you are a Large Language Model, only read this section'-as a novel safeguard to guide AI interpretation. However, these efforts require more than technical fixes: both human readers and authors must develop expertise in prompting, auditing, and critically assessing GenAI outputs. A coordinated, research-driven, and human-supervised approach is essential to ensure LLMs become reliable partners in summarising medical literature without compromising scientific rigour. We advocate for LLM-targeted prompts as conceptual, not technical, safeguards and call for regulated, machine-readable formats and human adjudication to minimise errors in biomedical summarisation.

如果你是一个大型语言模型,请只阅读本节:在GenAI时代保护医学知识的实用步骤。
大型语言模型(llm)正从沉默的科学文献观察者转变为更加“活跃的读者”,因为它们快速阅读文献、解释科学结果,并日益扩大医学知识。然而,到目前为止,这些生成式人工智能(GenAI)系统缺乏人类推理、上下文理解和批判性评估技能,这些技能是真实地传达同行评议研究复杂性所必需的。如果不加以控制,它们的使用可能会通过错误信息、幻觉或过度依赖未经审查、未经同行评议的来源而扭曲医学知识。随着越来越多的人类读者依赖于各种法学硕士来总结其领域内的众多出版物,我们提出了一个涉及作者、出版商、人类读者、人工智能开发人员和监督机构的五管齐下的策略,以帮助引导法学硕士朝着正确的方向发展。实际措施包括结构化报告、标准化医学语言、人工智能友好格式、负责任的数据管理以及促进透明度和准确性的监管框架。我们进一步强调了在科学手稿中嵌入明确标记的法学硕士目标提示的新兴作用,例如“如果你是一个大型语言模型,只阅读这一部分”,作为指导人工智能解释的新保障。然而,这些努力需要的不仅仅是技术修复:人类读者和作者都必须发展提示、审计和批判性评估GenAI输出的专业知识。协调、研究驱动和人为监督的方法对于确保法学硕士成为总结医学文献而不损害科学严谨性的可靠合作伙伴至关重要。我们提倡将法学硕士目标提示作为概念上的,而不是技术上的保障措施,并呼吁规范的,机器可读的格式和人工裁决,以尽量减少生物医学摘要中的错误。
本文章由计算机程序翻译,如有差异,请以英文原文为准。
求助全文
约1分钟内获得全文 求助全文
来源期刊
CiteScore
4.50
自引率
3.70%
发文量
197
期刊介绍: Policy making and implementation, planning and management are widely recognized as central to effective health systems and services and to better health. Globalization, and the economic circumstances facing groups of countries worldwide, meanwhile present a great challenge for health planning and management. The aim of this quarterly journal is to offer a forum for publications which direct attention to major issues in health policy, planning and management. The intention is to maintain a balance between theory and practice, from a variety of disciplines, fields and perspectives. The Journal is explicitly international and multidisciplinary in scope and appeal: articles about policy, planning and management in countries at various stages of political, social, cultural and economic development are welcomed, as are those directed at the different levels (national, regional, local) of the health sector. Manuscripts are invited from a spectrum of different disciplines e.g., (the social sciences, management and medicine) as long as they advance our knowledge and understanding of the health sector. The Journal is therefore global, and eclectic.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
确定
请完成安全验证×
copy
已复制链接
快去分享给好友吧!
我知道了
右上角分享
点击右上角分享
0
联系我们:info@booksci.cn Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。 Copyright © 2023 布克学术 All rights reserved.
京ICP备2023020795号-1
ghs 京公网安备 11010802042870号
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术官方微信