{"title":"增量线性回归攻击","authors":"Juncheng Chen, Jun-Sheng Ng, Nay Aung Kyaw, Zhili Zou, Kwen-Siong Chong, Zhiping Lin, B. Gwee","doi":"10.1109/AsianHOST56390.2022.10022167","DOIUrl":null,"url":null,"abstract":"Linear Regression Attack (LRA) is an effective Side-Channel Analysis (SCA) distinguisher designed to overcome the inaccuracies in leakage models (e.g, Hamming Weight). However, the implementation of the original LRA (termed as baseline LRA) involves many matrix arithmetic operations. The sizes of these matrices are determined by the scale of the Physical Leakage Information (PLI) traces. When processing large-scale PLI traces, extremely high memory capacity is required to execute the baseline LRA. In this paper, we propose a new implementation of LRA coined as incremental LRA. Theoretically, we reformulate the process of baseline LRA to break down the large dataset and process smaller batches of the dataset iteratively. Experimentally, we first validate that our proposed incremental LRA provides flexible choice of batch size and enables a progressive increase on the PLI traces to present attack results incrementally. Second, our proposed incremental LRA reduces execution memory and time significantly as compared to the baseline LRA. We demonstrate that the best execution performance of our incremental LRA requires only 0.65% of memory requirement (154x smaller) and takes only 3.37% of the processing time (30x speed-up) of the baseline LRA while attacking the same amount of traces.","PeriodicalId":207435,"journal":{"name":"2022 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Incremental Linear Regression Attack\",\"authors\":\"Juncheng Chen, Jun-Sheng Ng, Nay Aung Kyaw, Zhili Zou, Kwen-Siong Chong, Zhiping Lin, B. Gwee\",\"doi\":\"10.1109/AsianHOST56390.2022.10022167\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Linear Regression Attack (LRA) is an effective Side-Channel Analysis (SCA) distinguisher designed to overcome the inaccuracies in leakage models (e.g, Hamming Weight). However, the implementation of the original LRA (termed as baseline LRA) involves many matrix arithmetic operations. The sizes of these matrices are determined by the scale of the Physical Leakage Information (PLI) traces. When processing large-scale PLI traces, extremely high memory capacity is required to execute the baseline LRA. In this paper, we propose a new implementation of LRA coined as incremental LRA. Theoretically, we reformulate the process of baseline LRA to break down the large dataset and process smaller batches of the dataset iteratively. Experimentally, we first validate that our proposed incremental LRA provides flexible choice of batch size and enables a progressive increase on the PLI traces to present attack results incrementally. Second, our proposed incremental LRA reduces execution memory and time significantly as compared to the baseline LRA. 
Linear Regression Attack (LRA) is an effective Side-Channel Analysis (SCA) distinguisher designed to overcome inaccuracies in leakage models (e.g., Hamming Weight). However, the implementation of the original LRA (termed the baseline LRA) involves many matrix arithmetic operations, and the sizes of these matrices are determined by the scale of the Physical Leakage Information (PLI) traces. When processing large-scale PLI traces, the baseline LRA therefore requires extremely high memory capacity. In this paper, we propose a new implementation of LRA, coined incremental LRA. Theoretically, we reformulate the baseline LRA so that a large dataset is broken down and processed iteratively in smaller batches. Experimentally, we first validate that our incremental LRA offers a flexible choice of batch size and supports a progressive increase in the number of PLI traces, presenting attack results incrementally. Second, our incremental LRA significantly reduces execution memory and time compared with the baseline LRA: at its best operating point, it requires only 0.65% of the baseline's memory (154x smaller) and 3.37% of its processing time (a 30x speed-up) while attacking the same number of traces.
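The abstract does not spell out the reformulation, but a standard way to make ordinary least squares incremental, and a plausible reading of the batch-wise processing described above, is to accumulate the sufficient statistics X^T X and X^T y (plus the scalar sums of y and y^2) over batches of traces, then solve for the regression coefficients and the R^2 distinguisher score once at the end. The NumPy sketch below illustrates that idea for a single key guess and sample point; the function name, the (X, y) batch interface, and the choice of R^2 as the score are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def incremental_lra_score(batches, n_coeffs):
    """Hypothetical sketch of batch-wise OLS for one key guess and one
    sample point. Accumulates X^T X, X^T y, sum(y), sum(y^2), and n,
    so the full design matrix never has to reside in memory at once.

    `batches` yields (X, y) pairs: X is a (batch_size, n_coeffs) basis
    expansion of the guessed intermediate values (constant column
    included), y is a (batch_size,) vector of leakage samples.
    """
    xtx = np.zeros((n_coeffs, n_coeffs))
    xty = np.zeros(n_coeffs)
    y_sum, y_sq_sum, n = 0.0, 0.0, 0

    for X, y in batches:
        xtx += X.T @ X              # X^T X adds up across batches
        xty += X.T @ y              # and so does X^T y
        y_sum += y.sum()
        y_sq_sum += (y ** 2).sum()
        n += len(y)

    beta = np.linalg.solve(xtx, xty)    # OLS coefficients via normal equations
    ss_tot = y_sq_sum - y_sum**2 / n    # total sum of squares: sum (y - mean)^2
    ss_res = y_sq_sum - beta @ xty      # residual sum of squares of the OLS fit
    return 1.0 - ss_res / ss_tot        # R^2: highest for the correct key guess
```

Because X^T X is only n_coeffs x n_coeffs, peak memory under this scheme scales with the size of the regression basis and the batch, not with the total number of PLI traces, which is exactly the bottleneck the incremental LRA is designed to remove.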