Architecture of an FPGA accelerator for LDA-based inference
Taisuke Ono, H. M. Waidyasooriya, M. Hariyama, Tsukasa Ishigaki
2017 18th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), June 2017. DOI: 10.1109/SNPD.2017.8022746
Latent Dirichlet allocation (LDA) based topic inference is a data classification method that is used effectively for extremely large data sets. However, its processing time is long because the Markov chain Monte Carlo (MCMC) method used for topic inference is inherently serial. We propose a pipelined hardware architecture and a memory allocation scheme that accelerate LDA through parallel processing. The proposed architecture is implemented on an FPGA (field-programmable gate array) using the OpenCL design environment. According to the experimental results, we achieved a maximum speed-up of 2.38 times over a conventional CPU-based implementation while maintaining the same inference quality.
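The abstract does not include code, but the serial bottleneck it refers to is the per-token update of collapsed Gibbs sampling, the MCMC method commonly used for LDA inference. The sketch below is a hypothetical illustration of that update in C, not the authors' implementation: the constants (K, V, ALPHA, BETA) and the count arrays (n_dk, n_kw, n_k) are assumptions made for the example.

```c
/* Minimal sketch of one collapsed Gibbs sampling update for LDA.
 * Hypothetical illustration only; all names and sizes are assumptions. */
#include <stdlib.h>

#define K 32        /* number of topics (assumed)               */
#define V 10000     /* vocabulary size (assumed)                */
#define ALPHA 0.1   /* Dirichlet prior on document-topic dist.  */
#define BETA  0.01  /* Dirichlet prior on topic-word dist.      */

/* Resample the topic of one token (document d, word w, topic old_k).
 * n_dk[d][k]: per-document topic counts, n_kw[k][w]: per-topic word
 * counts, n_k[k]: total tokens assigned to topic k. */
int sample_topic(int d, int w, int old_k,
                 int n_dk[][K], int n_kw[][V], int n_k[])
{
    double p[K], total = 0.0;

    /* Remove the token's current assignment from the counts. */
    n_dk[d][old_k]--; n_kw[old_k][w]--; n_k[old_k]--;

    /* Unnormalized full conditional probability of each topic k. */
    for (int k = 0; k < K; k++) {
        p[k] = (n_dk[d][k] + ALPHA)
             * (n_kw[k][w] + BETA) / (n_k[k] + V * BETA);
        total += p[k];
    }

    /* Draw the new topic by inverse-CDF sampling. */
    double r = ((double)rand() / RAND_MAX) * total;
    int new_k = 0;
    while (new_k < K - 1 && (r -= p[new_k]) > 0.0) new_k++;

    /* Add the token back under its newly sampled topic. */
    n_dk[d][new_k]++; n_kw[new_k][w]++; n_k[new_k]++;
    return new_k;
}
```

Because every update reads and immediately rewrites the shared count tables, consecutive tokens cannot be sampled independently; this read-modify-write dependency is precisely what the paper's pipelined architecture and memory allocation scheme are designed to work around on the FPGA.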