{"title":"Chained continuous quantum federated learning framework","authors":"Dev Gurung, Shiva Raj Pokhrel","doi":"10.1016/j.future.2025.107800","DOIUrl":null,"url":null,"abstract":"<div><div>The integration of quantum machine learning into federated learning paradigms is poised to transform the future of technologies that depend on diverse machine learning methodologies. This research delves into Quantum Federated Learning (QFL), presenting an initial framework modeled on the Federated Averaging (FedAvg) algorithm, implemented via Qiskit. Despite its potential, QFL encounters critical challenges, including (i) susceptibility to a single point of failure, (ii) communication bottlenecks, and (iii) uncertainty in model convergence. Subsequently, we dive deeper into QFL and propose an innovative alternative to traditional server-based QFL. Our approach introduces a chained continuous QFL framework (ccQFL), which eliminates the need for a central server and the FedAvg method. In our framework, clients engage in a chained continuous training process, where they exchange models and collaboratively enhance each other’s performance. This approach improves both the efficiency of communication and the accuracy of the training process. Our experimental evaluation includes a proof-of-concept to demonstrate initial feasibility and a prototype study simulating TCP/IP communication between clients. This simulation enables concurrent operations, verifying the potential of ccQFL for real-world applications. We examine various datasets, including Iris, MNIST, synthetic and Genomic, covering a range of data sizes from small to large. For further validity of our proposed method, we extend our experimental analysis in other frameworks such as PennyLane and TensorCircuit where we include various ablation studies covering major considerations and factors that impact the framework to study validity, robustness, practicality, and others. Our results show that the ccQFL framework achieves model convergence, and we evaluate other critical metrics such as performance and communication delay. In addition, we provide a theoretical analysis to establish and discuss many factors such as model convergence, communication costs, etc.</div></div>","PeriodicalId":55132,"journal":{"name":"Future Generation Computer Systems-The International Journal of Escience","volume":"169 ","pages":"Article 107800"},"PeriodicalIF":6.2000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Future Generation Computer Systems-The International Journal of Escience","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167739X25000950","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
The integration of quantum machine learning into federated learning is poised to transform technologies that depend on diverse machine learning methodologies. This research examines Quantum Federated Learning (QFL), first presenting a baseline framework modeled on the Federated Averaging (FedAvg) algorithm and implemented in Qiskit. Despite its potential, QFL faces critical challenges, including (i) susceptibility to a single point of failure, (ii) communication bottlenecks, and (iii) uncertainty in model convergence. We then propose an alternative to traditional server-based QFL: a chained continuous QFL framework (ccQFL) that eliminates both the central server and the FedAvg aggregation step. In ccQFL, clients engage in a chained continuous training process in which they exchange models and collaboratively improve one another’s performance, increasing both communication efficiency and training accuracy. Our experimental evaluation includes a proof-of-concept demonstrating initial feasibility and a prototype study that simulates TCP/IP communication between clients; the simulation supports concurrent operation and verifies the potential of ccQFL for real-world applications. We examine datasets of varying size, including Iris, MNIST, synthetic, and genomic data. To further validate the proposed method, we extend the experimental analysis to other frameworks such as PennyLane and TensorCircuit, including ablation studies of the major factors that affect the framework’s validity, robustness, and practicality. Our results show that ccQFL achieves model convergence, and we evaluate further metrics such as performance and communication delay. In addition, we provide a theoretical analysis of factors such as model convergence and communication cost.
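The abstract contrasts two training regimes: a server-based QFL baseline built on FedAvg and the proposed server-free ccQFL scheme, in which the model is passed along a chain of clients and trained continuously. The sketch below illustrates that contrast at the level of parameter updates. It is a framework-agnostic illustration under stated assumptions, not the authors' Qiskit implementation: the quantum circuit is replaced by a classical surrogate loss (local_loss), and the chain order, learning rate, and step counts are illustrative choices.

```python
# Minimal sketch (assumption): clients hold parameter vectors of a variational
# model; the quantum circuit is replaced by a classical surrogate loss so the
# example runs without Qiskit/PennyLane/TensorCircuit.
import numpy as np

rng = np.random.default_rng(0)

def local_loss(params, data):
    """Surrogate for a variational-circuit cost on a client's local data."""
    X, y = data
    preds = np.tanh(X @ params)          # stand-in for circuit expectation values
    return float(np.mean((preds - y) ** 2))

def local_train(params, data, lr=0.1, steps=20):
    """A few steps of finite-difference gradient descent on the local loss."""
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(params.size):     # central finite differences per parameter
            e = np.zeros_like(params)
            e[i] = 1e-3
            grad[i] = (local_loss(params + e, data) - local_loss(params - e, data)) / 2e-3
        params = params - lr * grad
    return params

def fedavg_round(global_params, clients):
    """Server-based baseline: broadcast, train locally, average on the server."""
    updates = [local_train(global_params.copy(), d) for d in clients]
    return np.mean(updates, axis=0)

def ccqfl_round(params, clients):
    """Chained continuous round (assumed reading of ccQFL): the model is handed
    client-to-client and each client keeps training the model it receives,
    so no central server or averaging step is needed."""
    for data in clients:                 # chain order is an illustrative choice
        params = local_train(params, data)
    return params

# Toy setup: 4 clients, each with a small local dataset.
clients = []
for _ in range(4):
    X = rng.normal(size=(32, 6))
    w_true = rng.normal(size=6)
    clients.append((X, np.tanh(X @ w_true)))

params_fa = rng.normal(size=6) * 0.1
params_cc = params_fa.copy()
for r in range(3):
    params_fa = fedavg_round(params_fa, clients)
    params_cc = ccqfl_round(params_cc, clients)
    loss_fa = np.mean([local_loss(params_fa, d) for d in clients])
    loss_cc = np.mean([local_loss(params_cc, d) for d in clients])
    print(f"round {r}: FedAvg loss = {loss_fa:.4f}, ccQFL loss = {loss_cc:.4f}")
```

The structural difference the paper emphasizes is visible in the two round functions: fedavg_round collects independent updates and averages them at a single aggregation point, whereas ccqfl_round has no such point because each client simply continues training the model passed to it.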
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.