FPGA-Based Acceleration of Expectation Maximization Algorithm Using High-Level Synthesis

M. A. Momen, Mohammed A. S. Khalid, Mohammad Abdul Moin Oninda
{"title":"FPGA-Based Acceleration of Expectation Maximization Algorithm Using High-Level Synthesis","authors":"M. A. Momen, Mohammed A. S. Khalid, Mohammad Abdul Moin Oninda","doi":"10.1109/DASIP48288.2019.9049183","DOIUrl":null,"url":null,"abstract":"Expectation Maximization (EM) is a soft clustering algorithm which partitions data iteratively into M clusters. It is one of the most popular data mining algorithms that uses Gaussian Mixture Models (GMM) for probability density modeling and is widely used in applications such as signal processing and Machine Learning (ML). EM requires high computation time when dealing with large data sets. This paper presents an optimized implementation of EM algorithm on Stratix V and Arria 10 FPGAs using Intel FPGA Software Development Kit (SDK) for Open Computing Language (OpenCL). Comparison of performance and power consumption between Central Processing Unit (CPU), Graphics Processing Unit (GPU) and FPGA is presented for various dimension and cluster sizes. Compared to Intel® Xeon® CPU E5-2637, our fully optimized OpenCL model for EM targeting Arria 10 FPGA achieved up to 1000x speedup in terms of throughput (T) and 5395x speedup in terms of throughput per unit of power consumed (T/P). 
Compared to previous research on EM-GMM implementation on GPUs, Arria 10 FPGA obtained up to 64.74x speedup (T) and 486.78x speedup (T/P).","PeriodicalId":120855,"journal":{"name":"2019 Conference on Design and Architectures for Signal and Image Processing (DASIP)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Conference on Design and Architectures for Signal and Image Processing (DASIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DASIP48288.2019.9049183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Expectation Maximization (EM) is a soft clustering algorithm that iteratively partitions data into M clusters. It is one of the most popular data mining algorithms that use Gaussian Mixture Models (GMM) for probability density modeling, and it is widely used in applications such as signal processing and Machine Learning (ML). EM requires long computation times when dealing with large data sets. This paper presents an optimized implementation of the EM algorithm on Stratix V and Arria 10 FPGAs using the Intel FPGA Software Development Kit (SDK) for Open Computing Language (OpenCL). A comparison of performance and power consumption among the Central Processing Unit (CPU), Graphics Processing Unit (GPU), and FPGA is presented for various dimensions and cluster sizes. Compared to an Intel® Xeon® CPU E5-2637, our fully optimized OpenCL model for EM targeting the Arria 10 FPGA achieved up to a 1000x speedup in throughput (T) and a 5395x speedup in throughput per unit of power consumed (T/P). Compared to previous research on EM-GMM implementations on GPUs, the Arria 10 FPGA obtained up to a 64.74x speedup (T) and a 486.78x speedup (T/P).
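The EM-GMM iteration the paper accelerates alternates an E-step (computing soft cluster responsibilities) with an M-step (re-estimating mixture weights, means, and variances). The following is a minimal NumPy sketch of that iteration for diagonal-covariance Gaussians; it is a reference implementation of the textbook algorithm, not a reproduction of the paper's OpenCL kernels, and all function and variable names are our own.

```python
import numpy as np

def em_gmm(X, M, iters=50, seed=0):
    """Fit an M-component diagonal-covariance GMM to X (shape N x D) with EM.

    A plain-NumPy sketch of the EM-GMM algorithm; the paper's optimized
    OpenCL/FPGA kernels are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialization: random data points as means, unit variances, uniform weights.
    mu = X[rng.choice(N, M, replace=False)]
    var = np.ones((M, D))
    w = np.full(M, 1.0 / M)
    for _ in range(iters):
        # E-step: log-density of each point under each component, plus log weight.
        log_p = (-0.5 * np.sum((X[:, None, :] - mu[None]) ** 2 / var[None]
                               + np.log(2 * np.pi * var[None]), axis=2)
                 + np.log(w)[None])
        log_p -= log_p.max(axis=1, keepdims=True)      # numerical stability
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)              # responsibilities r[n, m]
        # M-step: re-estimate parameters from the responsibilities.
        Nk = r.sum(axis=0)                             # effective cluster counts
        w = Nk / N
        mu = (r.T @ X) / Nk[:, None]
        var = (r.T @ (X ** 2)) / Nk[:, None] - mu ** 2
        var = np.maximum(var, 1e-6)                    # guard against collapse
    return w, mu, var
```

The E-step dominates the runtime (it is O(N·M·D) per iteration), which is why the paper's speedups grow with data size, dimension, and cluster count.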