Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning

You Jun Kim, C. Hong
{"title":"Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning","authors":"You Jun Kim, C. Hong","doi":"10.23919/APNOMS50412.2020.9236949","DOIUrl":null,"url":null,"abstract":"Federated learning is a distributed learning method that trains a deep network on user devices without collecting data from central server. It is useful when the central server can't collect data. However, the absence of data on central server means that deep network compression using data is not possible. Deep network compression is very important because it enables inference even on device with low capacity. In this paper, we proposed a new quantization method that significantly reduces FPROPS(floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained by general learning loss, and updated simultaneously with weight. We call this method as OQFL(Optimized Quantization in Federated Learning). OQFL is a method of learning deep networks and quantization while maintaining security in a distributed network environment including edge computing. We introduce the OQFL method and simulate it in various Convolutional deep neural networks. We shows that OQFL is possible in most representative convolutional deep neural network. Surprisingly, OQFL(4bits) can preserve the accuracy of conventional federated learning(32bits) in test dataset.","PeriodicalId":122940,"journal":{"name":"2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS)","volume":"181 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/APNOMS50412.2020.9236949","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is important because it enables inference even on devices with low capacity. In this paper, we propose a new quantization method that significantly reduces FPROPS (floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained with the ordinary learning loss and updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns deep networks and quantization jointly while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL is feasible in the most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
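The abstract's key mechanism is that quantization parameters are driven by the ordinary task loss and updated together with the weights, so the server can aggregate them like any other parameter in a federated round. The sketch below illustrates that idea; it is a minimal illustration under assumptions, not the authors' implementation. The names LearnedFakeQuant, QuantConvNet, and fl_round, the initial step value, and the plain FedAvg mean are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of loss-trained quantization
# inside a federated-averaging loop, in the spirit of OQFL.
import copy
import torch
import torch.nn as nn


class LearnedFakeQuant(nn.Module):
    """Fake-quantize a tensor with a trainable step size.

    The step size is an nn.Parameter, so it receives gradients from the
    ordinary task loss and is updated simultaneously with the weights,
    as the abstract describes.
    """

    def __init__(self, num_bits: int = 4):  # 4 bits, matching the reported setup
        super().__init__()
        self.qmax = 2 ** (num_bits - 1) - 1          # symmetric signed range
        self.step = nn.Parameter(torch.tensor(0.1))  # assumed initial step size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = torch.clamp(x / self.step, -self.qmax - 1, self.qmax)
        # Straight-through estimator: round in the forward pass,
        # identity gradient in the backward pass.
        q = q + (torch.round(q) - q).detach()
        return q * self.step


class QuantConvNet(nn.Module):
    """Toy convolutional net with fake-quantized activations (illustrative)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.quant = LearnedFakeQuant()
        self.head = nn.Linear(8 * 28 * 28, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.quant(torch.relu(self.conv(x)))
        return self.head(x.flatten(1))


def fl_round(global_model: nn.Module, client_loaders, lr: float = 0.01):
    """One federated round: each client trains weights AND step sizes locally
    on its private data, then the server averages every parameter, the learned
    quantization steps included. Assumes all state entries are float tensors."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)  # steps are Parameters too
        for x, y in loader:
            opt.zero_grad()
            nn.functional.cross_entropy(local(x), y).backward()
            opt.step()
        client_states.append(local.state_dict())
    # Unweighted FedAvg over all parameters (a real system would weight
    # clients by dataset size).
    avg = {k: torch.stack([s[k] for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
```

Because only model parameters ever leave the device, training the quantizer this way adds nothing to the privacy surface of ordinary federated averaging, which is consistent with the paper's claim that compression is achieved without leaking user data.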