Solving Least-Squares Fitting in $O(1)$ Using RRAM-based Computing-in-Memory Technique

Authors: Xiaoming Chen, Yinhe Han
Published in: 2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC), 2022-01-17
DOI: 10.1109/asp-dac52403.2022.9712568
Least-squares fitting (LSF) is a fundamental statistical method widely used in linear regression problems such as modeling, data fitting, and predictive analysis. For large-scale data sets, LSF is computationally expensive and scales poorly, with $O(N^{2})$ to $O(N^{3})$ computational complexity. The computing-in-memory technique has the potential to improve the performance and scalability of LSF. In this paper, we propose a computing-in-memory accelerator based on resistive random-access memory (RRAM) devices. We not only apply the conventional idea of accelerating matrix-vector multiplications with RRAM-based crossbar arrays, but also elaborate the hardware design and the mapping strategy. A unique feature of our approach is that it solves a complete LSF problem in $O(1)$ time complexity. We also propose a scalable and configurable architecture so that the solvable problem scale is not restricted by the crossbar array size. Experimental results demonstrate the superior performance and energy efficiency of our accelerator.
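To make the complexity claim concrete, the following is a minimal software baseline for LSF via the normal equations, not the paper's RRAM method: forming $A^{T}A$ and running a dense solve is where the $O(N^{2})$ to $O(N^{3})$ cost arises that the accelerator sidesteps. The function name and the toy fitting problem are illustrative choices, not from the paper.

```python
import numpy as np

def lsf_normal_equations(A, b):
    """Baseline least-squares fit: solve (A^T A) x = A^T b.

    For an M-sample, N-parameter problem, forming A^T A costs
    O(M*N^2) and the dense solve costs O(N^3) -- the scaling
    bottleneck the RRAM accelerator is designed to avoid.
    """
    AtA = A.T @ A          # N x N Gram matrix
    Atb = A.T @ b          # right-hand side
    return np.linalg.solve(AtA, Atb)

# Toy example: fit y = c0 + c1*x to exact data from y = 2 + 3x.
x = np.linspace(0.0, 1.0, 100)
y = 2.0 + 3.0 * x
A = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
coeffs = lsf_normal_equations(A, y)
```

On noise-free data the recovered coefficients match the generating line; with noisy measurements the same call returns the least-squares optimum, but every new data set requires repeating the full $O(N^{3})$ solve, which is what motivates an in-memory $O(1)$ approach.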