Title: State of practice: Evaluating GPU performance of state vector and tensor network methods
Authors: Marzio Vallero, Paolo Rech, Flavio Vella
DOI: 10.1016/j.future.2025.107927
Journal: Future Generation Computer Systems - The International Journal of Escience, Volume 174, Article 107927 (JCR Q1, Computer Science, Theory & Methods; Impact Factor 6.2)
Publication date: 2025-05-31 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167739X25002225
Citations: 0
Abstract
The frontier of quantum computing (QC) simulation on classical hardware is quickly reaching the hard scalability limits of computational feasibility. Nonetheless, there is still a need to simulate large quantum systems classically, as Noisy Intermediate Scale Quantum (NISQ) devices are not yet fault tolerant or performant enough in terms of operations per second. Each of the two main exact simulation techniques, state vector and tensor network simulators, has its own specific limitations.
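To make the state vector technique concrete (this is an illustrative sketch, not code from the paper): a state vector simulator stores all 2^n complex amplitudes of an n-qubit register and applies each gate as a small tensor contraction against the target qubit's axis, which is why memory grows exponentially with qubit count. The function and qubit-ordering convention below are hypothetical choices for illustration.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    # View the 2**n amplitude vector as an n-dimensional tensor of shape
    # (2, 2, ..., 2), contract the 2x2 gate against the target qubit's axis,
    # move that axis back into place, and flatten again.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

# Example: Hadamard on qubit 0 of |000> yields (|000> + |100>) / sqrt(2).
# Here qubit 0 is taken as the most significant axis (a convention choice).
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # |000>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = apply_single_qubit_gate(state, H, 0, n)
```

The exponential cost is visible in the reshape: every added qubit doubles the amplitude array, which is the hard memory wall the abstract refers to.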
This article investigates the limits of current state-of-the-art simulation techniques on a test bench composed of eight widely used quantum subroutines, each in different configurations, with a special emphasis on performance. We perform both single-process and distributed scalability experiments on a supercomputer. We correlate the performance measures from these experiments with the metrics that characterise the benchmark circuits, identifying the main reasons behind the observed performance trends. Specifically, we perform distributed sliced tensor contractions, and we analyse the impact of pathfinding quality on contraction time, correlating both results with topological circuit characteristics. From our observations, given the structure of a quantum circuit and the number of qubits, we highlight how to select the best simulation strategy, demonstrating how preventive circuit analysis can guide and improve simulation performance by more than an order of magnitude.
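As a minimal illustration of why contraction pathfinding quality matters (not the authors' benchmark code, which targets GPUs and distributed slicing), consider a chain of three tensors: the order in which pairwise contractions are performed changes the size of the intermediates and hence the FLOP count. NumPy's `einsum_path` exposes exactly this trade-off.

```python
import numpy as np

# A toy "tensor network": three tensors chained as a matrix product.
# Contracting B with C first produces a 64x2 intermediate; contracting
# A with B first produces a 2x64 one. An optimizer picks the cheaper order.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 64))    # small x large
B = rng.standard_normal((64, 64))   # large x large
C = rng.standard_normal((64, 2))    # large x small

# Ask NumPy to search for an optimal pairwise contraction order, then
# evaluate the full contraction along that path.
path_opt, info = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")
result = np.einsum("ij,jk,kl->il", A, B, C, optimize=path_opt)

print(info)  # reports the chosen pairwise order and a FLOP estimate
```

For full quantum circuits the search space of contraction orders is vastly larger, which is why dedicated pathfinders (and the quality differences the article measures) dominate contraction time.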
About the journal:
Computing infrastructures and systems are constantly evolving, resulting in increasingly complex and collaborative scientific applications. To cope with these advancements, there is a growing need for collaborative tools that can effectively map, control, and execute these applications.
Furthermore, with the explosion of Big Data, there is a requirement for innovative methods and infrastructures to collect, analyze, and derive meaningful insights from the vast amount of data generated. This necessitates the integration of computational and storage capabilities, databases, sensors, and human collaboration.
Future Generation Computer Systems aims to pioneer advancements in distributed systems, collaborative environments, high-performance computing, and Big Data analytics. It strives to stay at the forefront of developments in grids, clouds, and the Internet of Things (IoT) to effectively address the challenges posed by these wide-area, fully distributed sensing and computing systems.