Deep learning based brain tumour architecture for weight sharing optimization in federated learning

IF 3.0 · CAS Tier 4 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Expert Systems Pub Date: 2024-06-06 DOI: 10.1111/exsy.13643
Ameer N. Onaizah, Yuanqing Xia, Ahmed J. Obaid, Khurram Hussain
{"title":"在联合学习中优化权重共享的基于深度学习的脑肿瘤架构","authors":"Ameer N. Onaizah, Yuanqing Xia, Ahmed J. Obaid, Khurram Hussain","doi":"10.1111/exsy.13643","DOIUrl":null,"url":null,"abstract":"Large amounts of data is necessary for deep learning models to semantically segment images. A major issue in the field of medical imaging is accumulating adequate data and then applying specialized skills to label those medical imaging data. Collaboration across institutions might be able to alleviate this problem, but sharing medical data to a centralized place is complicated due to legal, privacy, technical, and data ownership constraints, particularly among international institutions. By guaranteeing user privacy and preventing unauthorized access to raw data, Federated Learning plays a significant role especially in decentralized deep learning applications. Each client is given a unique learning process assignment. Clients first train a machine learning model locally using data from their area. Then, clients upload training data (local updates of model weights and biases) to a server. After that, the server compiles client‐provided updates to build a global learning model. Due to the numerous parameters (weights and biases) employed by deep learning models, the constant transmission between clients and the server raises communication costs and is inefficient from the standpoint of resource use. When there are more contributing clients and communication rounds, the cost of communication becomes a bigger concern. In this paper, a novel federated learning with weight sharing optimization compression architecture FedWSOcomp is proposed for cross institutional collaboration. In FedWSOcomp, the weights from deep learning models between clients and servers help in considerably reducing the amount of updates. Top‐z sparsification, quantization with clustering, and compression with three separate encoding techniques are all implemented by FedWSOcomp. Modern compression techniques are outperformed by FedWSOcomp, which achieves compression rates of up to 1085× while saving up to 99% of bandwidth and 99% of energy for clients during communication.","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep learning based brain tumour architecture for weight sharing optimization in federated learning\",\"authors\":\"Ameer N. Onaizah, Yuanqing Xia, Ahmed J. Obaid, Khurram Hussain\",\"doi\":\"10.1111/exsy.13643\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large amounts of data is necessary for deep learning models to semantically segment images. A major issue in the field of medical imaging is accumulating adequate data and then applying specialized skills to label those medical imaging data. Collaboration across institutions might be able to alleviate this problem, but sharing medical data to a centralized place is complicated due to legal, privacy, technical, and data ownership constraints, particularly among international institutions. By guaranteeing user privacy and preventing unauthorized access to raw data, Federated Learning plays a significant role especially in decentralized deep learning applications. Each client is given a unique learning process assignment. Clients first train a machine learning model locally using data from their area. 
Then, clients upload training data (local updates of model weights and biases) to a server. After that, the server compiles client‐provided updates to build a global learning model. Due to the numerous parameters (weights and biases) employed by deep learning models, the constant transmission between clients and the server raises communication costs and is inefficient from the standpoint of resource use. When there are more contributing clients and communication rounds, the cost of communication becomes a bigger concern. In this paper, a novel federated learning with weight sharing optimization compression architecture FedWSOcomp is proposed for cross institutional collaboration. In FedWSOcomp, the weights from deep learning models between clients and servers help in considerably reducing the amount of updates. Top‐z sparsification, quantization with clustering, and compression with three separate encoding techniques are all implemented by FedWSOcomp. Modern compression techniques are outperformed by FedWSOcomp, which achieves compression rates of up to 1085× while saving up to 99% of bandwidth and 99% of energy for clients during communication.\",\"PeriodicalId\":51053,\"journal\":{\"name\":\"Expert Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-06-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Expert Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1111/exsy.13643\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1111/exsy.13643","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning models need large amounts of data to segment images semantically. A major issue in medical imaging is accumulating adequate data and then applying the specialized skills needed to label it. Collaboration across institutions might alleviate this problem, but sharing medical data in a centralized repository is complicated by legal, privacy, technical, and data-ownership constraints, particularly among international institutions. By guaranteeing user privacy and preventing unauthorized access to raw data, federated learning plays a significant role in decentralized deep learning applications. Each client carries out its own learning process: clients first train a machine learning model locally on their own data, then upload their local updates of the model weights and biases to a server, which aggregates the client-provided updates into a global learning model. Because deep learning models employ numerous parameters (weights and biases), the constant transmission between clients and the server raises communication costs and is inefficient in terms of resource use; the more contributing clients and communication rounds there are, the greater this concern becomes. In this paper, a novel federated learning architecture with weight-sharing optimization and compression, FedWSOcomp, is proposed for cross-institutional collaboration. In FedWSOcomp, compressing the model weight updates exchanged between clients and the server considerably reduces the volume of transmitted data. FedWSOcomp implements top-z sparsification, quantization with clustering, and compression with three separate encoding techniques. It outperforms modern compression techniques, achieving compression rates of up to 1085× while saving clients up to 99% of bandwidth and 99% of energy during communication.
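The round trip described above (local training, upload of weight/bias updates, server-side aggregation into a global model) is the standard federated averaging pattern. The following is a minimal sketch in Python/NumPy under that assumption; the names local_step and fed_avg, the linear model, and the single local gradient step are illustrative, not the paper's implementation.

```python
# Minimal sketch of one federated round, assuming a FedAvg-style server
# (the paper's actual aggregation rule is not specified in the abstract).
import numpy as np

def local_step(global_w, X, y, lr=0.1):
    # One local gradient step on a linear model; the client uploads only
    # this weight update (delta), never its raw data.
    grad = X.T @ (X @ global_w - y) / len(y)
    return -lr * grad

def fed_avg(global_w, updates):
    # Server side: average the client deltas into the global model.
    return global_w + np.mean(updates, axis=0)

rng = np.random.default_rng(0)
w = np.zeros(5)
true_w = np.arange(5.0)
for _ in range(3):                     # communication rounds
    updates = []
    for _ in range(4):                 # participating clients
        X = rng.normal(size=(32, 5))   # each client's private data
        updates.append(local_step(w, X, X @ true_w))
    w = fed_avg(w, updates)
print(w)  # moves toward true_w over the rounds
```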
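The first two stages of the FedWSOcomp pipeline named in the abstract, top-z sparsification and quantization with clustering, can be sketched as follows. This assumes top-z keeps the z largest-magnitude entries of an update and uses 1-D k-means as a stand-in for the unspecified clustering; top_z_sparsify and cluster_quantize are hypothetical names, and the three encoding techniques are not reproduced here since the abstract does not identify them.

```python
# Sketch of top-z sparsification followed by clustering-based quantization
# of a weight update. 1-D k-means is an assumed stand-in for whatever
# clustering FedWSOcomp actually uses.
import numpy as np

def top_z_sparsify(update, z):
    # Keep the z largest-magnitude entries, zero the rest.
    flat = update.ravel().copy()
    drop = np.argsort(np.abs(flat))[:-z]
    flat[drop] = 0.0
    return flat.reshape(update.shape)

def cluster_quantize(values, k=8, iters=10):
    # 1-D k-means: each value is replaced by its nearest centroid, so only
    # k floats plus per-entry cluster indices need to be encoded.
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        codes = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(codes == j):
                centroids[j] = values[codes == j].mean()
    return centroids[codes], codes, centroids

rng = np.random.default_rng(0)
update = rng.normal(size=1000)
sparse = top_z_sparsify(update, z=50)   # 95% of entries become zero
nonzero = sparse[sparse != 0]
quantized, codes, codebook = cluster_quantize(nonzero)
# Only the 50 positions, their 3-bit cluster codes, and the 8-float
# codebook would remain to be entropy-encoded and uploaded.
```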
Source journal: Expert Systems (Engineering Technology / Computer Science: Theory & Methods)
CiteScore: 7.40
Self-citation rate: 6.10%
Annual publications: 266
Review time: 24 months
Journal introduction: Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper. As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.