Simulated Misuse of Large Language Models and Clinical Credit Systems

James Anibal, Hannah Huth, Jasmine Gunkel, Bradford Wood
{"title":"大型语言模型和临床信用系统的模拟滥用","authors":"James Anibal, Hannah Huth, Jasmine Gunkel, Bradford Wood","doi":"10.1101/2024.04.10.24305470","DOIUrl":null,"url":null,"abstract":"Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI models may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources based on unfair, inaccurate, or unjust criteria. For example, a social credit system uses big data to assess “trustworthiness” in society, punishing those who score poorly based on evaluation metrics defined only by a power structure (corporate entity, governing body). Such a system may be amplified by powerful LLMs which can rate individuals based on high-dimensional multimodal data - financial transactions, internet activity, and other behavioural inputs. Healthcare data is perhaps the most sensitive information which can be collected and could potentially be used to violate civil liberty via a “clinical credit system”, which may include limiting or rationing access to standard care. This report simulates how clinical datasets might be exploited and proposes strategies to mitigate the risks inherent to the development of AI models for healthcare.","PeriodicalId":501154,"journal":{"name":"medRxiv - Medical Ethics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Simulated Misuse of Large Language Models and Clinical Credit Systems\",\"authors\":\"James Anibal, Hannah Huth, Jasmine Gunkel, Bradford Wood\",\"doi\":\"10.1101/2024.04.10.24305470\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI models may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources based on unfair, inaccurate, or unjust criteria. For example, a social credit system uses big data to assess “trustworthiness” in society, punishing those who score poorly based on evaluation metrics defined only by a power structure (corporate entity, governing body). Such a system may be amplified by powerful LLMs which can rate individuals based on high-dimensional multimodal data - financial transactions, internet activity, and other behavioural inputs. Healthcare data is perhaps the most sensitive information which can be collected and could potentially be used to violate civil liberty via a “clinical credit system”, which may include limiting or rationing access to standard care. 
This report simulates how clinical datasets might be exploited and proposes strategies to mitigate the risks inherent to the development of AI models for healthcare.\",\"PeriodicalId\":501154,\"journal\":{\"name\":\"medRxiv - Medical Ethics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"medRxiv - Medical Ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.04.10.24305470\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Medical Ethics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.04.10.24305470","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnosis and treatment personalization. While AI models may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources based on unfair, inaccurate, or unjust criteria. For example, a social credit system uses big data to assess “trustworthiness” in society, punishing those who score poorly on evaluation metrics defined solely by a power structure (a corporate entity or governing body). Such a system may be amplified by powerful LLMs that can rate individuals based on high-dimensional multimodal data: financial transactions, internet activity, and other behavioural inputs. Healthcare data is perhaps the most sensitive information that can be collected, and it could be used to violate civil liberties via a “clinical credit system”, which might include limiting or rationing access to standard care. This report simulates how clinical datasets might be exploited and proposes strategies to mitigate the risks inherent in the development of AI models for healthcare.
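The abstract describes, at a high level, how a “clinical credit system” could be assembled from off-the-shelf models and aggregated personal data. As a minimal sketch of that pattern (assuming a prompt-based simulation; the paper's actual protocol is not reproduced here), the Python below flattens one hypothetical person's multimodal records into a single scoring prompt. Every field name, the scoring rubric, and `query_llm` are illustrative placeholders.

```python
# Minimal sketch of the misuse pattern described in the abstract: multimodal
# personal records are flattened into a single "trustworthiness" scoring
# prompt for a general-purpose LLM. All field names, the scoring rubric, and
# query_llm() are hypothetical placeholders, not the paper's actual protocol.

import json


def build_clinical_credit_prompt(record: dict) -> str:
    """Serialize one person's records into a scoring prompt."""
    return (
        "You are a scoring system. Rate this person's trustworthiness from "
        "0 to 100 and justify the score.\n"
        # The rubric is the crux of the misuse: the evaluation criteria are
        # defined entirely by whoever controls the system.
        "Rubric: penalize costly diagnoses, missed appointments, and "
        "treatment non-adherence.\n"
        f"Records:\n{json.dumps(record, indent=2)}"
    )


def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError


if __name__ == "__main__":
    person = {
        "diagnoses": ["type 2 diabetes"],            # clinical data
        "missed_appointments": 3,
        "transactions": ["pharmacy", "payday loan"], # financial inputs
        "internet_activity": ["health forums"],      # behavioural inputs
    }
    print(build_clinical_credit_prompt(person))
```

Even this toy example illustrates the core risk the abstract names: the rubric, and therefore the punishment, is set entirely by whoever writes the prompt, not by any clinical or legal standard.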