{"title":"非凸惩罚分位数回归的联邦平滑近端梯度","authors":"Reza Mirzaeifard;Diyako Ghaderyan;Stefan Werner","doi":"10.1109/TSIPN.2025.3587440","DOIUrl":null,"url":null,"abstract":"The rise of internet-of-things (IoT) systems has led to the generation of vast and high-dimensional data across distributed edge devices, often requiring sparse modeling techniques to manage model complexity efficiently. In these environments, quantile regression offers a robust alternative to mean-based models by capturing conditional distributional behavior, which is particularly useful under heavy-tailed noise or heterogeneous data. However, penalized quantile regression in federated learning (FL) remains challenging due to the non-smooth nature of the quantile loss and the non-convex, non-smooth penalties such as MCP and SCAD used for sparsity. To address this gap, we propose the Federated Smoothing Proximal Gradient (FSPG) algorithm, which integrates a smoothing technique into the proximal gradient framework to enable effective, stable, and theoretically guaranteed optimization in decentralized settings. FSPG guarantees monotonic reduction in the objective function and achieves faster convergence than existing methods. We further extend FSPG to handle partial client participation (PCP-FSPG), making the algorithm robust to intermittent node availability by adaptively updating local parameters based on client activity. 
Extensive experiments validate that FSPG and PCP-FSPG achieve superior accuracy, convergence behavior, and variable selection performance compared to existing baselines, demonstrating their practical utility in real-world federated applications.","PeriodicalId":56268,"journal":{"name":"IEEE Transactions on Signal and Information Processing over Networks","volume":"11 ","pages":"696-710"},"PeriodicalIF":3.0000,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Federated Smoothing Proximal Gradient for Quantile Regression With Non-Convex Penalties\",\"authors\":\"Reza Mirzaeifard;Diyako Ghaderyan;Stefan Werner\",\"doi\":\"10.1109/TSIPN.2025.3587440\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The rise of internet-of-things (IoT) systems has led to the generation of vast and high-dimensional data across distributed edge devices, often requiring sparse modeling techniques to manage model complexity efficiently. In these environments, quantile regression offers a robust alternative to mean-based models by capturing conditional distributional behavior, which is particularly useful under heavy-tailed noise or heterogeneous data. However, penalized quantile regression in federated learning (FL) remains challenging due to the non-smooth nature of the quantile loss and the non-convex, non-smooth penalties such as MCP and SCAD used for sparsity. To address this gap, we propose the Federated Smoothing Proximal Gradient (FSPG) algorithm, which integrates a smoothing technique into the proximal gradient framework to enable effective, stable, and theoretically guaranteed optimization in decentralized settings. FSPG guarantees monotonic reduction in the objective function and achieves faster convergence than existing methods. 
We further extend FSPG to handle partial client participation (PCP-FSPG), making the algorithm robust to intermittent node availability by adaptively updating local parameters based on client activity. Extensive experiments validate that FSPG and PCP-FSPG achieve superior accuracy, convergence behavior, and variable selection performance compared to existing baselines, demonstrating their practical utility in real-world federated applications.\",\"PeriodicalId\":56268,\"journal\":{\"name\":\"IEEE Transactions on Signal and Information Processing over Networks\",\"volume\":\"11 \",\"pages\":\"696-710\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Signal and Information Processing over Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11077990/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal and Information Processing over Networks","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11077990/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Federated Smoothing Proximal Gradient for Quantile Regression With Non-Convex Penalties
The rise of Internet-of-Things (IoT) systems has led to the generation of vast, high-dimensional data across distributed edge devices, often requiring sparse modeling techniques to manage model complexity efficiently. In these environments, quantile regression offers a robust alternative to mean-based models by capturing conditional distributional behavior, which is particularly useful under heavy-tailed noise or heterogeneous data. However, penalized quantile regression in federated learning (FL) remains challenging due to the non-smooth nature of the quantile loss and the non-convex, non-smooth penalties used for sparsity, such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty. To address this gap, we propose the Federated Smoothing Proximal Gradient (FSPG) algorithm, which integrates a smoothing technique into the proximal gradient framework to enable effective, stable, and theoretically guaranteed optimization in decentralized settings. FSPG guarantees monotonic reduction of the objective function and achieves faster convergence than existing methods. We further extend FSPG to handle partial client participation (PCP-FSPG), making the algorithm robust to intermittent node availability by adaptively updating local parameters based on client activity. Extensive experiments validate that FSPG and PCP-FSPG achieve superior accuracy, convergence behavior, and variable selection performance compared to existing baselines, demonstrating their practical utility in real-world federated applications.
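The abstract names three ingredients: a smoothed quantile (pinball) loss, a proximal gradient iteration, and a non-convex sparsity penalty such as MCP. The sketch below is a minimal *centralized* illustration of how these pieces fit together; it is not the paper's federated FSPG algorithm. The particular smoothing (replacing |r| with sqrt(r² + μ²)), the step-size rule, and the names `spg_quantile` and `prox_mcp` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smoothed_pinball(r, tau, mu):
    # Smooth surrogate of the pinball loss rho_tau(r) = (tau - 0.5) r + |r| / 2,
    # with |r| replaced by sqrt(r^2 + mu^2); recovers the check loss as mu -> 0.
    return (tau - 0.5) * r + 0.5 * np.sqrt(r ** 2 + mu ** 2)

def smoothed_pinball_grad(r, tau, mu):
    # Derivative of the smoothed pinball loss with respect to the residual r.
    return (tau - 0.5) + 0.5 * r / np.sqrt(r ** 2 + mu ** 2)

def prox_mcp(z, lam, gamma, t):
    # Proximal operator of the MCP penalty with step size t (requires gamma > t):
    # soft-threshold near zero, rescale in the concave region, identity beyond
    # gamma * lam (which is what makes MCP nearly unbiased for large coefficients).
    a = np.abs(z)
    return np.where(a <= t * lam, 0.0,
           np.where(a <= gamma * lam,
                    np.sign(z) * (a - t * lam) / (1.0 - t / gamma),
                    z))

def spg_quantile(X, y, tau=0.5, lam=0.1, gamma=3.0, mu=0.05, t=None, iters=500):
    """Smoothing proximal gradient for MCP-penalized quantile regression (sketch)."""
    n, p = X.shape
    if t is None:
        # The smoothed loss gradient is Lipschitz with constant <= ||X||_2^2 / (2 mu n);
        # take a conservative step size so the descent stays monotone.
        t = mu * n / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(iters):
        r = y - X @ beta
        grad = -X.T @ smoothed_pinball_grad(r, tau, mu) / n   # gradient of mean loss
        beta = prox_mcp(beta - t * grad, lam, gamma, t)        # proximal gradient step
    return beta
```

A federated variant would compute the local gradients `-X_k.T @ smoothed_pinball_grad(...)` on each client and aggregate them at the server before the proximal step; the smoothing parameter μ trades off approximation accuracy against the usable step size.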
Journal Introduction:
The IEEE Transactions on Signal and Information Processing over Networks publishes high-quality papers that extend the classical notions of processing signals defined over vector spaces (e.g., time and space) to processing signals and information (data) defined over networks, which may vary dynamically. In signal processing over networks, the topology of the network may define structural relationships in the data or may constrain processing of the data. Topics include distributed algorithms for filtering, detection, estimation, adaptation and learning, model selection, data fusion, and the diffusion or evolution of information over such networks, as well as applications of distributed signal processing.