Incremental Linear Regression Attack

Juncheng Chen, Jun-Sheng Ng, Nay Aung Kyaw, Zhili Zou, Kwen-Siong Chong, Zhiping Lin, B. Gwee
{"title":"Incremental Linear Regression Attack","authors":"Juncheng Chen, Jun-Sheng Ng, Nay Aung Kyaw, Zhili Zou, Kwen-Siong Chong, Zhiping Lin, B. Gwee","doi":"10.1109/AsianHOST56390.2022.10022167","DOIUrl":null,"url":null,"abstract":"Linear Regression Attack (LRA) is an effective Side-Channel Analysis (SCA) distinguisher designed to overcome the inaccuracies in leakage models (e.g, Hamming Weight). However, the implementation of the original LRA (termed as baseline LRA) involves many matrix arithmetic operations. The sizes of these matrices are determined by the scale of the Physical Leakage Information (PLI) traces. When processing large-scale PLI traces, extremely high memory capacity is required to execute the baseline LRA. In this paper, we propose a new implementation of LRA coined as incremental LRA. Theoretically, we reformulate the process of baseline LRA to break down the large dataset and process smaller batches of the dataset iteratively. Experimentally, we first validate that our proposed incremental LRA provides flexible choice of batch size and enables a progressive increase on the PLI traces to present attack results incrementally. Second, our proposed incremental LRA reduces execution memory and time significantly as compared to the baseline LRA. We demonstrate that the best execution performance of our incremental LRA requires only 0.65% of memory requirement (154x smaller) and takes only 3.37% of the processing time (30x speed-up) of the baseline LRA while attacking the same amount of traces.","PeriodicalId":207435,"journal":{"name":"2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AsianHOST56390.2022.10022167","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Linear Regression Attack (LRA) is an effective Side-Channel Analysis (SCA) distinguisher designed to overcome inaccuracies in leakage models (e.g., Hamming Weight). However, the implementation of the original LRA (termed the baseline LRA) involves many matrix arithmetic operations, and the sizes of these matrices are determined by the scale of the Physical Leakage Information (PLI) traces. When processing large-scale PLI traces, the baseline LRA therefore requires extremely high memory capacity. In this paper, we propose a new implementation of LRA, coined incremental LRA. Theoretically, we reformulate the baseline LRA so that the large dataset is broken down and processed iteratively in smaller batches. Experimentally, we first validate that our incremental LRA offers a flexible choice of batch size and supports a progressive increase in the number of PLI traces, presenting attack results incrementally. Second, our incremental LRA reduces execution memory and time significantly compared with the baseline LRA. We demonstrate that, at its best operating point, our incremental LRA requires only 0.65% of the memory (154x smaller) and only 3.37% of the processing time (30x speed-up) of the baseline LRA while attacking the same number of traces.
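The batching idea the abstract describes can be illustrated with ordinary least squares: the regression coefficients β = (MᵀM)⁻¹MᵀL and the R² goodness-of-fit statistic depend on the data only through the accumulators MᵀM and MᵀL and the first two moments of the traces, all of which can be summed batch by batch. The sketch below is an illustrative reconstruction under that observation, not the authors' implementation; the `sbox` lookup, the 9-term bit basis, and all function names are assumptions.

```python
import numpy as np

def bit_basis(value, num_bits=8):
    # One design-matrix row: an intercept plus one column per bit of the
    # hypothetical intermediate value (a common linear basis for 8-bit targets).
    return np.array([1.0] + [(value >> i) & 1 for i in range(num_bits)],
                    dtype=np.float64)

def incremental_lra(batches, key_guesses, sbox, num_bits=8):
    """Score key guesses from an iterable of (plaintexts, traces) batches.

    `traces` in each batch is shaped (n, T); only the (p, p) and (p, T)
    accumulators persist between batches, never the full trace matrix.
    """
    p = num_bits + 1
    MtM = {k: np.zeros((p, p)) for k in key_guesses}  # running M^T M per guess
    MtL = None                                        # running M^T L per guess
    sum_L = sum_L2 = None                             # trace moments for R^2
    n_total = 0
    for pts, traces in batches:
        n, T = traces.shape
        if MtL is None:
            MtL = {k: np.zeros((p, T)) for k in key_guesses}
            sum_L, sum_L2 = np.zeros(T), np.zeros(T)
        sum_L += traces.sum(axis=0)
        sum_L2 += (traces ** 2).sum(axis=0)
        n_total += n
        for k in key_guesses:
            # Hypothetical intermediate: S-box output of plaintext XOR guess.
            M = np.stack([bit_basis(sbox[pt ^ k], num_bits) for pt in pts])
            MtM[k] += M.T @ M
            MtL[k] += M.T @ traces
    # Solve the normal equations once, after the last batch has been folded in.
    sst = sum_L2 - sum_L ** 2 / n_total               # total sum of squares
    scores = {}
    for k in key_guesses:
        beta = np.linalg.solve(MtM[k], MtL[k])        # (p, T) coefficients
        # With an intercept in the basis, SSR = beta^T (M^T L) - n * mean(L)^2.
        ssr = np.einsum('pt,pt->t', MtL[k], beta) - sum_L ** 2 / n_total
        scores[k] = (ssr / sst).max()                 # best R^2 over time samples
    return scores
```

Because the per-guess state is only a (p, p) matrix plus a (p, T) matrix regardless of how many traces have been absorbed, memory no longer grows with the trace count, which is consistent with the reported 154x memory reduction; the batch size, basis, and R² bookkeeping in the paper may of course differ from this sketch.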