Artificial Intelligence-Supported Development of Health Guideline Questions.

IF 19.6 | CAS Zone 1 (Medicine) | Q1 MEDICINE, GENERAL & INTERNAL
Annals of Internal Medicine | Pub Date: 2024-11-01 | Epub Date: 2024-09-24 | DOI: 10.7326/ANNALS-24-00363
Bernardo Sousa-Pinto, Rafael José Vieira, Manuel Marques-Cruz, Antonio Bognanni, Sara Gil-Mata, Slava Jankin, Joana Amaro, Liliane Pinheiro, Marta Mota, Mattia Giovannini, Leticia de Las Vecillas, Ana Margarida Pereira, Justyna Lityńska, Boleslaw Samolinski, Jonathan Bernstein, Mark Dykewicz, Martin Hofmann-Apitius, Marc Jacobs, Nikolaos Papadopoulos, Sian Williams, Torsten Zuberbier, João A Fonseca, Ricardo Cruz-Correia, Jean Bousquet, Holger J Schünemann
{"title":"人工智能支持的健康指南问题开发。","authors":"Bernardo Sousa-Pinto, Rafael José Vieira, Manuel Marques-Cruz, Antonio Bognanni, Sara Gil-Mata, Slava Jankin, Joana Amaro, Liliane Pinheiro, Marta Mota, Mattia Giovannini, Leticia de Las Vecillas, Ana Margarida Pereira, Justyna Lityńska, Boleslaw Samolinski, Jonathan Bernstein, Mark Dykewicz, Martin Hofmann-Apitius, Marc Jacobs, Nikolaos Papadopoulos, Sian Williams, Torsten Zuberbier, João A Fonseca, Ricardo Cruz-Correia, Jean Bousquet, Holger J Schünemann","doi":"10.7326/ANNALS-24-00363","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Guideline questions are typically proposed by experts.</p><p><strong>Objective: </strong>To assess how large language models (LLMs) can support the development of guideline questions, providing insights on approaches and lessons learned.</p><p><strong>Design: </strong>Two approaches for guideline question generation were assessed: 1) identification of questions conveyed by online search queries and 2) direct generation of guideline questions by LLMs. For the former, the researchers retrieved popular queries on allergic rhinitis using Google Trends (GT) and identified those conveying questions using both manual and LLM-based methods. They then manually structured as guideline questions the queries that conveyed relevant questions. For the second approach, they tasked an LLM with proposing guideline questions, assuming the role of either a patient or a clinician.</p><p><strong>Setting: </strong>Allergic Rhinitis and its Impact on Asthma (ARIA) 2024 guidelines.</p><p><strong>Participants: </strong>None.</p><p><strong>Measurements: </strong>Frequency of relevant questions generated.</p><p><strong>Results: </strong>The authors retrieved 3975 unique queries using GT. From these, they identified 37 questions, of which 22 had not been previously posed by guideline panel members and 2 were eventually prioritized by the panel. Direct interactions with LLMs resulted in the generation of 22 unique relevant questions (11 not previously suggested by panel members), and 4 were eventually prioritized by the panel. In total, 6 of 39 final questions prioritized for the 2024 ARIA guidelines were not initially thought of by the panel. 
The researchers provide a set of practical insights on the implementation of their approaches based on the lessons learned.</p><p><strong>Limitation: </strong>Single case study (ARIA guidelines).</p><p><strong>Conclusion: </strong>Approaches using LLMs can support the development of guideline questions, complementing traditional methods and potentially augmenting questions prioritized by guideline panels.</p><p><strong>Primary funding source: </strong>Fraunhofer Cluster of Excellence for Immune-Mediated Diseases.</p>","PeriodicalId":7932,"journal":{"name":"Annals of Internal Medicine","volume":" ","pages":"1518-1529"},"PeriodicalIF":19.6000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence-Supported Development of Health Guideline Questions.\",\"authors\":\"Bernardo Sousa-Pinto, Rafael José Vieira, Manuel Marques-Cruz, Antonio Bognanni, Sara Gil-Mata, Slava Jankin, Joana Amaro, Liliane Pinheiro, Marta Mota, Mattia Giovannini, Leticia de Las Vecillas, Ana Margarida Pereira, Justyna Lityńska, Boleslaw Samolinski, Jonathan Bernstein, Mark Dykewicz, Martin Hofmann-Apitius, Marc Jacobs, Nikolaos Papadopoulos, Sian Williams, Torsten Zuberbier, João A Fonseca, Ricardo Cruz-Correia, Jean Bousquet, Holger J Schünemann\",\"doi\":\"10.7326/ANNALS-24-00363\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Guideline questions are typically proposed by experts.</p><p><strong>Objective: </strong>To assess how large language models (LLMs) can support the development of guideline questions, providing insights on approaches and lessons learned.</p><p><strong>Design: </strong>Two approaches for guideline question generation were assessed: 1) identification of questions conveyed by online search queries and 2) direct generation of guideline questions by LLMs. For the former, the researchers retrieved popular queries on allergic rhinitis using Google Trends (GT) and identified those conveying questions using both manual and LLM-based methods. They then manually structured as guideline questions the queries that conveyed relevant questions. For the second approach, they tasked an LLM with proposing guideline questions, assuming the role of either a patient or a clinician.</p><p><strong>Setting: </strong>Allergic Rhinitis and its Impact on Asthma (ARIA) 2024 guidelines.</p><p><strong>Participants: </strong>None.</p><p><strong>Measurements: </strong>Frequency of relevant questions generated.</p><p><strong>Results: </strong>The authors retrieved 3975 unique queries using GT. From these, they identified 37 questions, of which 22 had not been previously posed by guideline panel members and 2 were eventually prioritized by the panel. Direct interactions with LLMs resulted in the generation of 22 unique relevant questions (11 not previously suggested by panel members), and 4 were eventually prioritized by the panel. In total, 6 of 39 final questions prioritized for the 2024 ARIA guidelines were not initially thought of by the panel. 
The researchers provide a set of practical insights on the implementation of their approaches based on the lessons learned.</p><p><strong>Limitation: </strong>Single case study (ARIA guidelines).</p><p><strong>Conclusion: </strong>Approaches using LLMs can support the development of guideline questions, complementing traditional methods and potentially augmenting questions prioritized by guideline panels.</p><p><strong>Primary funding source: </strong>Fraunhofer Cluster of Excellence for Immune-Mediated Diseases.</p>\",\"PeriodicalId\":7932,\"journal\":{\"name\":\"Annals of Internal Medicine\",\"volume\":\" \",\"pages\":\"1518-1529\"},\"PeriodicalIF\":19.6000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annals of Internal Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.7326/ANNALS-24-00363\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/9/24 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Internal Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.7326/ANNALS-24-00363","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/9/24 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Citations: 0

Abstract

Background: Guideline questions are typically proposed by experts.

Objective: To assess how large language models (LLMs) can support the development of guideline questions, providing insights on approaches and lessons learned.

Design: Two approaches for guideline question generation were assessed: 1) identification of questions conveyed by online search queries and 2) direct generation of guideline questions by LLMs. For the former, the researchers retrieved popular queries on allergic rhinitis using Google Trends (GT) and identified those conveying questions using both manual and LLM-based methods. They then manually structured as guideline questions the queries that conveyed relevant questions. For the second approach, they tasked an LLM with proposing guideline questions, assuming the role of either a patient or a clinician.
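To make the two approaches more concrete, the sketch below shows one possible implementation: retrieving popular allergic rhinitis queries, asking an LLM to flag those that convey a question (approach 1), and prompting an LLM in a patient or clinician role to propose guideline questions directly (approach 2). This is a minimal illustration only; the abstract does not specify the authors' tooling, model, or prompts, so the use of the unofficial pytrends package, the OpenAI Python SDK, the gpt-4o model name, and helper names such as conveys_question and generate_guideline_questions are all assumptions.

```python
# Illustrative sketch only -- the paper does not describe its exact pipeline.
# Assumes: the unofficial `pytrends` package for Google Trends access and the
# OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY set in the environment.
from pytrends.request import TrendReq
from openai import OpenAI

client = OpenAI()


def top_queries(topic: str = "allergic rhinitis") -> list[str]:
    """Approach 1, step 1: retrieve popular related queries from Google Trends."""
    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(kw_list=[topic], timeframe="today 12-m")
    related = pytrends.related_queries()[topic]["top"]
    return [] if related is None else related["query"].tolist()


def conveys_question(query: str) -> bool:
    """Approach 1, step 2: LLM-based screen for queries that convey a question
    (the study also screened queries manually)."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Answer YES or NO: does this search query convey a "
                        "health question that could inform a clinical guideline?"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")


def generate_guideline_questions(role: str = "patient") -> str:
    """Approach 2: ask the LLM, in a patient or clinician role, to propose
    guideline questions directly."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"You are a {role} affected by allergic rhinitis."},
            {"role": "user",
             "content": "Propose questions about allergic rhinitis management "
                        "that a clinical practice guideline should answer."},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    candidates = [q for q in top_queries() if conveys_question(q)]
    print(candidates)                                 # queries flagged as questions
    print(generate_guideline_questions("clinician"))  # directly generated questions
```

Note that in the study the queries flagged as conveying relevant questions were then structured into guideline questions manually, not by the model.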

Setting: Allergic Rhinitis and its Impact on Asthma (ARIA) 2024 guidelines.

Participants: None.

Measurements: Frequency of relevant questions generated.

Results: The authors retrieved 3975 unique queries using GT. From these, they identified 37 questions, of which 22 had not been previously posed by guideline panel members and 2 were eventually prioritized by the panel. Direct interactions with LLMs resulted in the generation of 22 unique relevant questions (11 not previously suggested by panel members), and 4 were eventually prioritized by the panel. In total, 6 of 39 final questions prioritized for the 2024 ARIA guidelines were not initially thought of by the panel. The researchers provide a set of practical insights on the implementation of their approaches based on the lessons learned.

Limitation: Single case study (ARIA guidelines).

Conclusion: Approaches using LLMs can support the development of guideline questions, complementing traditional methods and potentially augmenting questions prioritized by guideline panels.

Primary funding source: Fraunhofer Cluster of Excellence for Immune-Mediated Diseases.

Source journal
Annals of Internal Medicine (Medicine - Internal Medicine)
CiteScore: 23.90
Self-citation rate: 1.80%
Annual publications: 1136
Review turnaround: 3-8 weeks
Journal introduction: Established in 1927 by the American College of Physicians (ACP), Annals of Internal Medicine is the premier internal medicine journal. Annals of Internal Medicine’s mission is to promote excellence in medicine, enable physicians and other health care professionals to be well informed members of the medical community and society, advance standards in the conduct and reporting of medical research, and contribute to improving the health of people worldwide. To achieve this mission, the journal publishes a wide variety of original research, review articles, practice guidelines, and commentary relevant to clinical practice, health care delivery, public health, health care policy, medical education, ethics, and research methodology. In addition, the journal publishes personal narratives that convey the feeling and the art of medicine.