Advances in APPFL: A Comprehensive and Extensible Federated Learning Framework
Zilinghan Li, Shilan He, Ze Yang, Minseok Ryu, Kibaek Kim, Ravi Madduri
arXiv - CS - Machine Learning · arXiv:2409.11585 · Published 2024-09-17
Federated learning (FL) is a distributed machine learning paradigm enabling
collaborative model training while preserving data privacy. In today's
landscape, where most data is proprietary, confidential, and distributed, FL
has become a promising approach to leverage such data effectively, particularly
in sensitive domains such as medicine and the electric grid. However,
heterogeneity and security remain key challenges in FL; most existing FL
frameworks either fail to address these challenges adequately or lack the
flexibility to incorporate new solutions. To this end, we present the recent advances in
developing APPFL, an extensible framework and benchmarking suite for federated
learning, which offers comprehensive solutions for heterogeneity and security
concerns, as well as user-friendly interfaces for integrating new algorithms or
adapting to new applications. We demonstrate the capabilities of APPFL through
extensive experiments evaluating various aspects of FL, including communication
efficiency, privacy preservation, computational performance, and resource
utilization. We further highlight the extensibility of APPFL through case
studies in vertical, hierarchical, and decentralized FL. APPFL is open-sourced
at https://github.com/APPFL/APPFL.
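To illustrate the FL paradigm the abstract describes — collaborative training without sharing raw data — here is a minimal sketch of federated averaging (FedAvg), the canonical server-side aggregation step. This is a generic illustration, not APPFL's actual API; the function name `fedavg` and the example values are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client model parameters into a
    global model, weighting each client by its local dataset size.
    Only parameters are exchanged; raw data never leaves the client."""
    total = sum(client_sizes)
    # Weighted sum of each client's parameter vector.
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical example: three clients with different data volumes.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]
global_w = fedavg(weights, sizes)  # clients with more data contribute more
```

In a full FL round, the server would broadcast `global_w` back to the clients, each client would train locally on its private data, and the cycle would repeat; frameworks such as APPFL layer communication, privacy, and heterogeneity handling on top of this basic loop.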