Resource Management and Fairness for Federated Learning over Wireless Edge Networks

Ravikumar Balakrishnan, M. Akdeniz, S. Dhakal, N. Himayat

2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), May 2020. DOI: 10.1109/spawc48557.2020.9154285
Federated Learning has the potential to break the barrier to AI adoption at the edge through better data privacy and reduced client-to-server communication cost. However, heterogeneity in clients' compute capabilities, communication rates, and the amount and quality of their data can affect training performance in terms of overall accuracy, model fairness, and convergence time. We develop compute-, communication-, and data-importance-aware resource management schemes to optimize these metrics, and we evaluate training performance on benchmark datasets. We observe that the proposed algorithms strike a balance between model performance and total training time, achieving a 4x to 10x reduction in convergence time without loss of test performance. Further, our algorithms also show superior fairness, measured by the variance and the worst-case 10th-percentile accuracy/loss on benchmark datasets.
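The abstract names the scheme only at a high level. As a rough illustration of how a compute-, communication-, and data-importance-aware client selection rule might look in a federated learning round, here is a minimal Python sketch. The Client fields, the round-time model, and the scoring rule below are hypothetical assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch only; not the paper's method. We assume each client
# reports a compute speed, an uplink rate, and a data-importance proxy
# (here, its recent local training loss). All names and the scoring rule
# are hypothetical.
import random


class Client:
    def __init__(self, cid, compute_speed, uplink_rate, local_loss):
        self.cid = cid
        self.compute_speed = compute_speed  # local samples processed per second
        self.uplink_rate = uplink_rate      # uplink throughput in Mbps
        self.local_loss = local_loss        # proxy for data importance


def round_time(client, samples=1000, model_mbits=40.0):
    """Estimated time for one round: local training plus model upload."""
    return samples / client.compute_speed + model_mbits / client.uplink_rate


def select_clients(clients, k, alpha=1.0):
    """Score clients by data importance per unit of round time; pick top-k.

    alpha trades off importance (local loss) against compute/communication
    cost: higher alpha favors high-loss clients even if they are slow.
    """
    scored = sorted(
        clients,
        key=lambda c: (c.local_loss ** alpha) / round_time(c),
        reverse=True,
    )
    return scored[:k]


if __name__ == "__main__":
    random.seed(0)
    pool = [
        Client(i,
               compute_speed=random.uniform(50, 500),
               uplink_rate=random.uniform(1, 50),
               local_loss=random.uniform(0.1, 2.0))
        for i in range(20)
    ]
    for c in select_clients(pool, k=5):
        print(f"client {c.cid}: loss={c.local_loss:.2f}, "
              f"round_time={round_time(c):.1f}s")
```

A rule of this shape captures the trade-off the abstract describes: prioritizing clients whose data is most informative while discounting those whose compute or channel conditions would stretch the round, which is one plausible way to trade a small amount of per-round information for a large reduction in total convergence time.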