DEEPEYE: A Deeply Tensor-Compressed Neural Network Hardware Accelerator: Invited Paper

Yuan Cheng, Guangya Li, Ngai Wong, Hai-Bao Chen, Hao Yu
{"title":"DEEPEYE: A Deeply Tensor-Compressed Neural Network Hardware Accelerator: Invited Paper","authors":"Yuan Cheng, Guangya Li, Ngai Wong, Hai-Bao Chen, Hao Yu","doi":"10.1109/iccad45719.2019.8942052","DOIUrl":null,"url":null,"abstract":"Video detection and classification constantly involve high dimensional data that requires a deep neural network (DNN) with huge number of parameters. It is thereby quite challenging to develop a DNN video comprehension at terminal devices. In this paper, we introduce a deeply tensor compressed video comprehension neural network called DEEPEYE for inference at terminal devices. Instead of building a Long Short-Term Memory (LSTM) network directly from raw video data, we build a LSTM-based spatio-temporal model from tensorized time-series features for object detection and action recognition. Moreover, a deep compression is achieved by tensor decomposition and trained quantization of the time-series feature-based spatio-temporal model. We have implemented DEEPEYE on an ARM-core based IOT board with only 2.4W power consumption. 
Using the video datasets MOMENTS and UCF11 as benchmarks, DEEPEYE achieves a 228.1× model compression with only 0.47% mAP deduction; as well as 15k× parameter reduction yet 16.27% accuracy improvement.","PeriodicalId":363364,"journal":{"name":"2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccad45719.2019.8942052","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

Video detection and classification constantly involve high-dimensional data that requires a deep neural network (DNN) with a huge number of parameters, making it quite challenging to deploy DNN-based video comprehension on terminal devices. In this paper, we introduce a deeply tensor-compressed video comprehension neural network, called DEEPEYE, for inference on terminal devices. Instead of building a Long Short-Term Memory (LSTM) network directly from raw video data, we build an LSTM-based spatio-temporal model from tensorized time-series features for object detection and action recognition. Moreover, deep compression is achieved by tensor decomposition and trained quantization of the time-series-feature-based spatio-temporal model. We have implemented DEEPEYE on an ARM-core-based IoT board with only 2.4 W power consumption. Using the video datasets MOMENTS and UCF11 as benchmarks, DEEPEYE achieves 228.1× model compression with only a 0.47% mAP reduction, as well as a 15k× parameter reduction with a 16.27% accuracy improvement.
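The deep compression described in the abstract combines tensor decomposition with trained quantization. As an illustration of the decomposition step only, below is a minimal sketch of a tensor-train (TT) factorization of a weight tensor via sequential truncated SVDs. TT decomposition is a common choice for this kind of model compression, but the paper's exact decomposition and quantization pipeline is not reproduced here; the function names are illustrative.

```python
import numpy as np

def tt_decompose(T, max_rank):
    """Factor a d-way tensor into tensor-train (TT) cores via sequential SVDs."""
    shape = T.shape
    cores, r_prev, M = [], 1, T
    for k in range(len(shape) - 1):
        # Unfold: rows combine the previous TT rank with the current mode.
        M = M.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = S[:r, None] * Vt[:r]  # carry the remainder to the next step
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=(-1, 0))
    return T.reshape([c.shape[1] for c in cores])
```

The storage saving is the ratio of the full tensor's element count to the summed sizes of the cores; for low TT ranks this ratio grows quickly with the number of modes, which is the mechanism behind compression factors like the paper's 228.1×.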