Improving 2–5 Qubit Quantum Phase Estimation Circuits Using Machine Learning

IF 1.8 · Q3 · Computer Science, Artificial Intelligence
Algorithms · Pub Date: 2024-05-15 · DOI: 10.3390/a17050214
Charles Woodrum, Torrey Wagner, David Weeks
{"title":"Improving 2–5 Qubit Quantum Phase Estimation Circuits Using Machine Learning","authors":"Charles Woodrum, Torrey Wagner, David Weeks","doi":"10.3390/a17050214","DOIUrl":null,"url":null,"abstract":"Quantum computing has the potential to solve problems that are currently intractable to classical computers with algorithms like Quantum Phase Estimation (QPE); however, noise significantly hinders the performance of today’s quantum computers. Machine learning has the potential to improve the performance of QPE algorithms, especially in the presence of noise. In this work, QPE circuits were simulated with varying levels of depolarizing noise to generate datasets of QPE output. In each case, the phase being estimated was generated with a phase gate, and each circuit modeled was defined by a randomly selected phase. The model accuracy, prediction speed, overfitting level and variation in accuracy with noise level was determined for 5 machine learning algorithms. These attributes were compared to the traditional method of post-processing and a 6x–36 improvement in model performance was noted, depending on the dataset. No algorithm was a clear winner when considering these 4 criteria, as the lowest-error model (neural network) was also the slowest predictor; the algorithm with the lowest overfitting and fastest prediction time (linear regression) had the highest error level and a high degree of variation of error with noise. The XGBoost ensemble algorithm was judged to be the best tradeoff between these criteria due to its error level, prediction time and low variation of error with noise. For the first time, a machine learning model was validated using a 2-qubit datapoint obtained from an IBMQ quantum computer. The best 2-qubit model predicted within 2% of the actual phase, while the traditional method possessed a 25% error.","PeriodicalId":7636,"journal":{"name":"Algorithms","volume":null,"pages":null},"PeriodicalIF":1.8000,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Algorithms","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/a17050214","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Quantum computing has the potential to solve problems that are currently intractable for classical computers using algorithms like Quantum Phase Estimation (QPE); however, noise significantly hinders the performance of today’s quantum computers. Machine learning has the potential to improve the performance of QPE algorithms, especially in the presence of noise. In this work, QPE circuits were simulated with varying levels of depolarizing noise to generate datasets of QPE output. In each case, the phase being estimated was generated with a phase gate, and each circuit modeled was defined by a randomly selected phase. The model accuracy, prediction speed, overfitting level, and variation in accuracy with noise level were determined for five machine learning algorithms. These attributes were compared to the traditional method of post-processing, and a 6x–36x improvement in model performance was noted, depending on the dataset. No algorithm was a clear winner when considering these four criteria: the lowest-error model (a neural network) was also the slowest predictor, while the algorithm with the lowest overfitting and fastest prediction time (linear regression) had the highest error level and a high degree of variation of error with noise. The XGBoost ensemble algorithm was judged to offer the best tradeoff among these criteria due to its low error level, fast prediction time, and low variation of error with noise. For the first time, a machine learning model was validated using a 2-qubit datapoint obtained from an IBMQ quantum computer. The best 2-qubit model predicted within 2% of the actual phase, while the traditional method had a 25% error.
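The abstract describes a two-step pipeline: simulate QPE circuits under depolarizing noise to build a dataset of output distributions, then regress the true phase from those distributions and compare against traditional post-processing. The sketch below is a minimal illustration of that pipeline using Qiskit Aer and XGBoost. The qubit count, noise level, shot count, dataset size, and model hyperparameters are illustrative assumptions rather than the paper's settings, and the helpers `qpe_circuit` and `noisy_distribution` are hypothetical names introduced here for clarity.

```python
# A minimal sketch of the pipeline described in the abstract: simulate QPE
# under depolarizing noise, collect output distributions, and regress the
# phase with XGBoost. All numeric settings below are illustrative, not the
# paper's values.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def qpe_circuit(theta, n):
    """n-counting-qubit QPE for the phase gate P(2*pi*theta) acting on |1>."""
    qc = QuantumCircuit(n + 1, n)
    qc.x(n)                                    # |1> is an eigenstate of P
    qc.h(range(n))                             # superpose the counting register
    for j in range(n):                         # controlled-P^(2^j)
        qc.cp(2 * np.pi * theta * 2**j, j, n)
    qc.append(QFT(n, inverse=True), range(n))  # inverse QFT before readout
    qc.measure(range(n), range(n))
    return qc

def noisy_distribution(theta, n, p, shots=4096):
    """Measured outcome probabilities under a depolarizing-noise model."""
    noise = NoiseModel()
    noise.add_all_qubit_quantum_error(depolarizing_error(p, 1), ["h", "x"])
    noise.add_all_qubit_quantum_error(depolarizing_error(p, 2), ["cp", "cx"])
    sim = AerSimulator(noise_model=noise)
    result = sim.run(transpile(qpe_circuit(theta, n), sim), shots=shots).result()
    probs = np.zeros(2**n)
    for bits, count in result.get_counts().items():
        probs[int(bits, 2)] = count / shots    # bitstring -> register value
    return probs

# Dataset: features are the 2^n outcome probabilities, label is the true phase.
rng = np.random.default_rng(0)
n, p = 3, 0.01
thetas = rng.uniform(0.0, 1.0, 500)
X = np.array([noisy_distribution(t, n, p) for t in thetas])
X_tr, X_te, y_tr, y_te = train_test_split(X, thetas, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

# Traditional post-processing baseline: report the most likely outcome k as k/2^n.
baseline = np.argmax(X_te, axis=1) / 2**n
print("XGBoost MAE: ", mean_absolute_error(y_te, model.predict(X_te)))
print("Baseline MAE:", mean_absolute_error(y_te, baseline))
```

Read as a sketch, this mirrors the comparison in the abstract: the traditional estimate keeps only the peak bin of the counting-register histogram, while a learned regressor can exploit the full shape of the noisy distribution. Note that this toy error metric ignores phase wraparound near 0 and 1, which a faithful evaluation would need to handle.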
Source journal: Algorithms (Mathematics: Numerical Analysis)
CiteScore: 4.10
Self-citation rate: 4.30%
Articles published: 394
Review time: 11 weeks