Adaptive Compressed-based Privacy-preserving Large Language Model for Sensitive Healthcare

IF 6.7 · JCR Region 2 (Medicine) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Xinrong Gong, Jiaran Gao, Song Sun, Zhijie Zhong, Yifan Shi, Huanqiang Zeng, Kaixiang Yang
DOI: 10.1109/JBHI.2025.3558935 · IEEE Journal of Biomedical and Health Informatics · Published: 2025-04-08 · Journal Article
Citations: 0

Abstract


The emergence of large language models (LLMs) has been a key enabler of technological innovation in healthcare. People can conveniently obtain a more accurate medical consultation service by utilizing LLMs' powerful knowledge inference capability. However, existing LLMs require users to upload explicit requests during remote healthcare consultations, which risks exposing personal privacy. Furthermore, the reliability of the response content generated by LLMs is not guaranteed. To tackle the above challenges, this paper proposes a novel privacy-preserving LLM for user-activated health, called Adaptive Compressed-based Privacy-preserving LLM (ACP2LLM). Specifically, an adaptive token compression method based on information entropy is carefully designed to ensure that ACP2LLM can preserve user-sensitive information when invoking the medical consultation of LLMs deployed on the cloud platform. Moreover, a multi-doctor one-chief-physician mechanism is proposed to rationally split and collaboratively infer the patients' requests to achieve the privacy-utility trade-off. Notably, the proposed ACP2LLM also delivers highly competitive performance across various token compression rates. Extensive experiments on multiple medical question-answering datasets demonstrate that the proposed ACP2LLM has strong privacy protection capabilities and high answer precision, outperforming current state-of-the-art LLM methods.
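The abstract names an adaptive token compression method based on information entropy but does not specify it; the paper's actual scheme is not reproduced here. As a rough illustration only, the sketch below (hypothetical function `entropy_compress` and parameter `keep_ratio`, not from the paper) estimates per-token self-information from within-text frequency and retains the highest-surprisal tokens up to a target compression rate, so that rare, information-dense tokens survive compression:

```python
import math
from collections import Counter

def entropy_compress(tokens, keep_ratio=0.5, protected=()):
    """Keep the highest-surprisal tokens under a target compression rate.

    Surprisal is estimated from within-text frequency: rarer tokens carry
    more information and are preferentially retained. Tokens listed in
    `protected` (e.g. negations) are always kept. This is an illustrative
    stand-in, not the ACP2LLM algorithm.
    """
    counts = Counter(tokens)
    total = len(tokens)
    # Self-information I(t) = -log2 p(t), with p(t) estimated from this text.
    surprisal = {t: -math.log2(counts[t] / total) for t in counts}
    k = max(1, round(total * keep_ratio))
    # Rank positions by surprisal (highest first); ties keep earlier tokens.
    ranked = sorted(range(total), key=lambda i: (-surprisal[tokens[i]], i))
    keep = set(ranked[:k])
    keep |= {i for i, t in enumerate(tokens) if t in protected}
    return [tokens[i] for i in sorted(keep)]

tokens = ("the patient the patient reports severe chest pain and the "
          "doctor suspects angina").split()
compressed = entropy_compress(tokens, keep_ratio=0.7)
```

In this toy run the frequent, low-information "the" tokens are dropped while clinically salient singletons such as "chest", "pain", and "angina" are retained; a real system would of course use model-based token probabilities rather than raw in-text frequencies.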

Source journal: IEEE Journal of Biomedical and Health Informatics
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
CiteScore: 13.60
Self-citation rate: 6.50%
Annual article count: 1151
Aims and scope: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.