Advancing Disability Healthcare Solutions Through Privacy-Preserving Federated Learning With Theme Framework

Impact Factor: 3.0 · CAS Region 4 (Computer Science) · JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Expert Systems · Publication date: 2024-12-12 · DOI: 10.1111/exsy.13807
Madallah Alruwaili, Muhammad Hameed Siddiqi, Muhammad Idris, Salman Alruwaili, Abdullah Saleh Alanazi, Faheem Khan
{"title":"Advancing Disability Healthcare Solutions Through Privacy-Preserving Federated Learning With Theme Framework","authors":"Madallah Alruwaili,&nbsp;Muhammad Hameed Siddiqi,&nbsp;Muhammad Idris,&nbsp;Salman Alruwaili,&nbsp;Abdullah Saleh Alanazi,&nbsp;Faheem Khan","doi":"10.1111/exsy.13807","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>The application of machine learning, particularly federated learning, in collaborative model training, has demonstrated significant potential for enhancing diversity and efficiency in outcomes. In the healthcare domain, particularly healthcare with disabilities, the sensitive nature of data presents a significant challenge as sharing even the computation on these data can risk exposing personal health information. This research addresses the problem of enabling shared model training for healthcare data—particularly with disabilities decreasing the risk of leaking or compromising sensitive information. Technologies such as federated learning provide solution for decentralised model training but fall short in addressing concerns related to trust building, accountability and control over participation and data. We propose a framework that integrates federated learning with advanced identity management as well as privacy and trust management technologies. Our framework called <i>Theme</i> (Trusted Healthcare Machine Learning Environment) leverages digital identities (e.g., W3C decentralised identifiers and verified credentials) and policy enforcements to regulate participation. This is to ensure that only authorised and trusted entities can contribute to the model training. Additionally, we introduce the mechanisms to track contributions per participant and offer the flexibility for participants to opt out of model training at any point. Participants can choose to be either contributors (providers) or consumers (model users) or both, and they can also choose to participate in subset of activities. This is particularly important in healthcare settings, where individuals and healthcare institutions have the flexibility to control how their data are used without compromising the benefits. In summary, this research work contributes to privacy preserving shared model training leveraging federated learning without exposing sensitive data; trust and accountability mechanisms; contribution tracking per participant for accountability and back-tracking; and fine-grained control and autonomy per participant. By addressing the specific needs of healthcare data for people with disabilities or such institutions, the Theme framework offers a robust solution to balance the benefits of shared machine learning with critical need to protecting sensitive data.</p>\n </div>","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":"42 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/exsy.13807","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The application of machine learning, particularly federated learning, to collaborative model training has demonstrated significant potential for enhancing diversity and efficiency in outcomes. In the healthcare domain, and particularly in healthcare for people with disabilities, the sensitive nature of the data presents a significant challenge: even sharing computations over these data can risk exposing personal health information. This research addresses the problem of enabling shared model training on healthcare data, particularly data concerning people with disabilities, while decreasing the risk of leaking or compromising sensitive information. Technologies such as federated learning provide a solution for decentralised model training but fall short in addressing concerns related to trust building, accountability, and control over participation and data. We propose a framework that integrates federated learning with advanced identity management as well as privacy and trust management technologies. Our framework, called Theme (Trusted Healthcare Machine Learning Environment), leverages digital identities (e.g., W3C decentralised identifiers and verifiable credentials) and policy enforcement to regulate participation, ensuring that only authorised and trusted entities can contribute to model training. Additionally, we introduce mechanisms to track contributions per participant and offer participants the flexibility to opt out of model training at any point. Participants can choose to be contributors (providers), consumers (model users), or both, and they can also choose to participate in only a subset of activities. This is particularly important in healthcare settings, where individuals and healthcare institutions need the flexibility to control how their data are used without forgoing the benefits. In summary, this research contributes privacy-preserving shared model training that leverages federated learning without exposing sensitive data; trust and accountability mechanisms; contribution tracking per participant for accountability and back-tracking; and fine-grained control and autonomy per participant. By addressing the specific needs of healthcare data for people with disabilities and the institutions that serve them, the Theme framework offers a robust solution that balances the benefits of shared machine learning with the critical need to protect sensitive data.
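The abstract describes, at a high level, gating federated-learning participation with decentralised identifiers and credentials, enforcing participation policies, tracking each participant's contribution, and allowing opt-out. The paper's actual protocol is not reproduced on this page; the sketch below is a minimal, hypothetical illustration in Python of how credential-gated federated averaging with contribution tracking might look. All names (Participant, Aggregator, the did: strings, the role labels) are assumptions introduced for illustration and are not taken from the Theme implementation.

```python
# Illustrative sketch only: credential-gated federated averaging with
# per-participant contribution tracking and opt-out, loosely following the
# ideas in the abstract. Names and data structures are hypothetical; the
# Theme framework's real protocol and APIs are not shown on this page.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Participant:
    did: str              # W3C-style decentralised identifier (placeholder string)
    credential_ok: bool   # stands in for verifiable-credential verification
    role: str             # "provider", "consumer", or "both"
    opted_out: bool = False

@dataclass
class Aggregator:
    policy_roles: tuple = ("provider", "both")   # who may contribute updates
    contributions: Dict[str, int] = field(default_factory=dict)

    def authorised(self, p: Participant) -> bool:
        # Policy enforcement: only credentialed, opted-in providers contribute.
        return p.credential_ok and not p.opted_out and p.role in self.policy_roles

    def aggregate(self, updates: Dict[str, List[float]],
                  participants: Dict[str, Participant]) -> List[float]:
        # Plain federated averaging over authorised participants' model updates.
        accepted = {did: u for did, u in updates.items()
                    if did in participants and self.authorised(participants[did])}
        if not accepted:
            raise ValueError("no authorised contributions this round")
        for did in accepted:   # contribution tracking for accountability
            self.contributions[did] = self.contributions.get(did, 0) + 1
        n, dim = len(accepted), len(next(iter(accepted.values())))
        return [sum(u[i] for u in accepted.values()) / n for i in range(dim)]

# Usage example with toy two-parameter "model updates".
participants = {
    "did:example:hospital-a": Participant("did:example:hospital-a", True, "provider"),
    "did:example:clinic-b": Participant("did:example:clinic-b", True, "both"),
    "did:example:untrusted": Participant("did:example:untrusted", False, "provider"),
}
updates = {
    "did:example:hospital-a": [0.2, 0.4],
    "did:example:clinic-b": [0.4, 0.6],
    "did:example:untrusted": [9.9, 9.9],   # rejected: credential check fails
}
agg = Aggregator()
print(agg.aggregate(updates, participants))   # approximately [0.3, 0.5]
print(agg.contributions)                      # per-participant round counts
```

The sketch keeps raw data local (only "updates" cross the boundary) and rejects unauthorised or opted-out parties before aggregation; a real deployment would add secure transport, cryptographic credential verification, and privacy protections such as secure aggregation or differential privacy.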

Source journal: Expert Systems (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 7.40
Self-citation rate: 6.10%
Articles published: 266
Review time: 24 months

Journal description: Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper. As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.