HealthQ: Unveiling questioning capabilities of LLM chains in healthcare conversations

Q2 Health Professions
Ziyu Wang , Hao Li , Di Huang , Hye-Sung Kim , Chae-Won Shin , Amir M. Rahmani
Journal: Smart Health, Volume 36, Article 100570
DOI: 10.1016/j.smhl.2025.100570
Published: 2025-03-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2352648325000315
Citations: 0

Abstract


Effective patient care in digital healthcare requires large language models (LLMs) that not only answer questions but also actively gather critical information through well-crafted inquiries. This paper introduces HealthQ, a novel framework for evaluating the questioning capabilities of LLM healthcare chains. By implementing advanced LLM chains, including Retrieval-Augmented Generation (RAG), Chain of Thought (CoT), and reflective chains, HealthQ assesses how effectively these chains elicit comprehensive and relevant patient information. To achieve this, we integrate an LLM judge to evaluate generated questions across metrics such as specificity, relevance, and usefulness, while aligning these evaluations with traditional Natural Language Processing (NLP) metrics like ROUGE and Named Entity Recognition (NER)-based set comparisons. We validate HealthQ using two custom datasets constructed from public medical datasets, ChatDoctor and MTS-Dialog, and demonstrate its robustness across multiple LLM judge models, including GPT-3.5, GPT-4, and Claude. Our contributions are threefold: we present the first systematic framework for assessing questioning capabilities in healthcare conversations, establish a model-agnostic evaluation methodology, and provide empirical evidence linking high-quality questions to improved patient information elicitation.
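To illustrate the NER-based set comparison idea mentioned above, the sketch below computes precision, recall, and F1 between the set of clinical entities in a reference patient note and the set elicited through a chain's questions. This is a generic set-overlap sketch under our own assumptions, not the paper's exact implementation; the function name and example entities are hypothetical.

```python
# Hypothetical sketch of an NER-based set comparison: score how well the
# entities elicited by a questioning chain cover a reference entity set.
def entity_set_scores(reference_entities, elicited_entities):
    """Precision/recall/F1 over two entity sets (illustrative, not the
    paper's exact metric)."""
    ref, got = set(reference_entities), set(elicited_entities)
    overlap = len(ref & got)
    precision = overlap / len(got) if got else 0.0
    recall = overlap / len(ref) if ref else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: the chain elicited "fever" and "cough" from a note that also
# mentions "ibuprofen" and "asthma", plus one spurious entity ("headache").
scores = entity_set_scores(
    ["fever", "cough", "ibuprofen", "asthma"],
    ["fever", "cough", "headache"],
)
```

In practice the entity sets would come from an NER model run over the dialogue and the reference note; higher recall indicates that a chain's questions surfaced more of the clinically relevant information.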
Source journal: Smart Health (Computer Science: Computer Science Applications)
CiteScore: 6.50
Self-citation rate: 0.00%
Articles published: 81