An Energy-Efficient K-means Clustering FPGA Accelerator via Most-Significant Digit First Arithmetic
S. Gorgin, M. Gholamrezaei, D. Javaheri, Jeong-A. Lee
2022 International Conference on Field-Programmable Technology (ICFPT), December 5, 2022
DOI: 10.1109/ICFPT56656.2022.9974222
Abstract
K-means clustering is one of the most well-known unsupervised learning methods; it partitions an input dataset into $K$ clusters based on the similarity between data samples. In this paper, to achieve an energy-efficient implementation without sacrificing performance, we take advantage of massive parallelism and digit-level pipelining via FPGA and most-significant-digit-first (MSDF) arithmetic. Because the most-significant digits of a result become available first, unnecessary computations can be terminated early, and only the required most-significant part of each data point needs to be fetched from memory. This early-termination technique significantly increases performance and decreases energy consumption. Our experimental results on various datasets, and comparisons with state-of-the-art FPGA accelerators, indicate that the proposed design effectively reduces energy consumption without any performance loss.
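To make the early-termination idea concrete, the sketch below assigns a data point to its nearest centroid while consuming the point's coordinates most-significant bit first: the distance to each centroid is tracked as an interval that tightens with every digit, and the assignment is made as soon as one centroid is provably the closest. This is a minimal Python illustration of the concept, not the paper's FPGA design; the function names, the 8-bit coordinate width, and the radix-2 digits are assumptions made for the example.

```python
# Minimal sketch of MSDF-style early termination for K-means assignment.
# Assumption: unsigned 8-bit integer coordinates, radix-2 digits (bits),
# centroids fully known. Names are illustrative, not from the paper.

def squared_distance_bounds(lo, hi, centroid):
    """Interval bounds on the squared Euclidean distance between a point
    whose coordinates are only known to lie in [lo[d], hi[d]] and a
    fully known centroid."""
    lower = upper = 0
    for l, h, c in zip(lo, hi, centroid):
        d_min = 0 if l <= c <= h else min(abs(l - c), abs(h - c))
        d_max = max(abs(l - c), abs(h - c))
        lower += d_min * d_min
        upper += d_max * d_max
    return lower, upper


def assign_msdf(point, centroids, width=8):
    """Assign `point` to the nearest centroid, consuming its coordinates
    MSB-first and stopping as soon as one centroid's distance upper bound
    beats every other centroid's lower bound."""
    for bits_seen in range(1, width + 1):
        shift = width - bits_seen
        # Coordinates restricted to the most-significant bits fetched so far.
        lo = [(x >> shift) << shift for x in point]
        hi = [l + (1 << shift) - 1 for l in lo]
        bounds = [squared_distance_bounds(lo, hi, c) for c in centroids]
        best = min(range(len(centroids)), key=lambda k: bounds[k][1])
        best_upper = bounds[best][1]
        # Early termination: the current best is provably the closest.
        if all(best_upper <= b[0] for k, b in enumerate(bounds) if k != best):
            return best, bits_seen
    return best, width  # defensive fallback; the final iteration uses exact distances


if __name__ == "__main__":
    centroids = [(10, 200), (240, 30), (128, 128)]
    point = (15, 190)
    cluster, digits_used = assign_msdf(point, centroids)
    print(f"assigned to cluster {cluster} after {digits_used} of 8 bits")
```

In this toy example the point is assigned after inspecting only 3 of the 8 bits of each coordinate, which illustrates the kind of computation and memory-traffic saving the abstract refers to; the paper's accelerator realizes the same principle with digit-level pipelined MSDF units on the FPGA rather than software interval checks.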