Privacy-preserving LLM-based chatbots for hypertensive patient self-management

JCR: Q2 (Health Professions)
Sara Montagna, Stefano Ferretti, Lorenz Cuno Klopfenstein, Michelangelo Ungolo, Martino Francesco Pengo, Gianluca Aguzzi, Matteo Magnini
Smart Health, Volume 36, Article 100552. Published 2025-03-07. DOI: 10.1016/j.smhl.2025.100552
Citations: 0

Abstract

Medical chatbots are becoming a basic component in telemedicine, propelled by advancements in Large Language Models (LLMs). However, LLMs’ integration into clinical settings comes with several issues, with privacy concerns being particularly significant.
The paper proposes a tailored architectural solution and an information workflow that address privacy issues while preserving the benefits of LLMs. We examine two solutions to prevent the disclosure of sensitive information: (i) a filtering mechanism that processes sensitive data locally but leverages a robust online LLM from OpenAI to engage with the user effectively, and (ii) a fully local deployment of open-source LLMs. The effectiveness of these solutions is assessed in the context of hypertension management across various tasks, ranging from intent recognition to reliable and empathetic conversation. Interestingly, while the first solution proved more robust in intent recognition, an evaluation by domain experts of the models' responses, based on reliability and empathy principles, revealed that two of the six open LLMs received the highest scores.
The study underscores the viability of incorporating LLMs into medical chatbots. In particular, our findings suggest that open LLMs can offer a privacy-preserving, yet promising, alternative to external LLM services, ensuring safer and more reliable telemedicine practices. Future efforts will focus on fine-tuning local models to enhance their performance across all tasks.
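The abstract does not detail how solution (i)'s filtering mechanism works, but the general idea of redacting sensitive data locally before any text reaches an external LLM service can be sketched as follows. This is a hypothetical illustration only: the patterns, placeholder labels, and the `redact` helper are assumptions, not the paper's actual implementation.

```python
import re

# Hypothetical sketch of solution (i): detect and redact sensitive spans
# locally, so that only sanitized text crosses the network boundary to an
# external (online) LLM. Toy patterns for data a hypertension chatbot
# might treat as sensitive: labeled patient names and blood-pressure
# readings such as "152/96".
PATTERNS = {
    "NAME": re.compile(r"(?i)\bname:\s*\w+"),
    "BP": re.compile(r"\b\d{2,3}/\d{2,3}\b"),
}

def redact(message: str) -> str:
    """Replace locally detected sensitive spans with placeholder labels."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label}]", message)
    return message

# Only the redacted text would be forwarded to the remote model.
print(redact("Name: Alice, today's reading was 152/96."))
# → "[NAME], today's reading was [BP]."
```

In a real deployment, simple regular expressions would be replaced by a more robust local component (e.g. a named-entity recognizer), but the data-flow principle is the same: sensitive content is handled on-device, and the external LLM sees only placeholders.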
Source journal: Smart Health (Computer Science: Computer Science Applications). CiteScore: 6.50; self-citation rate: 0.00%; articles published: 81.