Predicting Parking Occupancy by FPGA-Accelerated DNN Models at Fog Layer
Sang Nguyen, Z. Salcic, Utsav Trivedi, Xuyun Zhang
2021 IEEE International Conference on Smart Computing (SMARTCOMP), August 2021
DOI: 10.1109/SMARTCOMP52413.2021.00032 (https://doi.org/10.1109/SMARTCOMP52413.2021.00032)
Abstract
Model inference is the final stage of practical machine/deep learning deployments. Hardware-implemented or hardware-accelerated inference is attractive because it is faster than purely software-based inference, which matters especially for real-time applications. In this paper, we address models for parking occupancy prediction based on historical time-series parking records. We use the Keras library to build and train software DNN and LSTM models and compare their prediction accuracy. Although the software-implemented models show an accuracy advantage for LSTM, we select only the DNN-based models for hardware acceleration, because current tool-chains for automatic software-to-hardware model conversion do not support generating hardware-implemented LSTM models. We create, explore, and compare the inference performance of hardware (FPGA)-implemented models on relatively low-cost FPGAs. For this, we build an FPGA-accelerated fog-layer cluster by adding two Xilinx FPGA boards of different performance levels to our existing cluster of four Raspberry Pi (RPi) computers.
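To make the model-building step concrete, the sketch below shows how DNN and LSTM occupancy predictors might be built and trained with the Keras API on windowed time-series data. The layer sizes, look-back window, and synthetic data are assumptions for illustration; the paper's actual architectures, hyperparameters, and dataset are not specified in the abstract.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 24   # assumed look-back of 24 historical occupancy readings
HORIZON = 1   # predict the next occupancy value

def build_dnn(window=WINDOW):
    # Fully connected model over the flattened look-back window.
    return keras.Sequential([
        layers.Input(shape=(window,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(HORIZON),
    ])

def build_lstm(window=WINDOW):
    # Recurrent model over the same window, shaped (timesteps, features).
    return keras.Sequential([
        layers.Input(shape=(window, 1)),
        layers.LSTM(32),
        layers.Dense(HORIZON),
    ])

if __name__ == "__main__":
    # Synthetic stand-in for historical parking records (occupancy fractions in 0..1).
    series = np.random.rand(1000).astype("float32")
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    y = series[WINDOW:]

    dnn = build_dnn()
    dnn.compile(optimizer="adam", loss="mse", metrics=["mae"])
    dnn.fit(X, y, epochs=2, batch_size=32, verbose=0)

    lstm = build_lstm()
    lstm.compile(optimizer="adam", loss="mse", metrics=["mae"])
    lstm.fit(X[..., np.newaxis], y, epochs=2, batch_size=32, verbose=0)

    print("DNN  MAE:", dnn.evaluate(X, y, verbose=0)[1])
    print("LSTM MAE:", lstm.evaluate(X[..., np.newaxis], y, verbose=0)[1])

A feed-forward model of this kind maps directly to dense matrix-multiply layers, which is one reason the DNN variant, rather than the LSTM, is the candidate for automatic conversion to an FPGA implementation in the workflow the abstract describes.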