Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap

M. Shafique, R. Hafiz, M. Javed, Sarmad Abbas, L. Sekanina, Z. Vašíček, Vojtěch Mrázek
{"title":"Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap","authors":"M. Shafique, R. Hafiz, M. Javed, Sarmad Abbas, L. Sekanina, Z. Vašíček, Vojtěch Mrázek","doi":"10.1109/ISVLSI.2017.124","DOIUrl":null,"url":null,"abstract":"Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.","PeriodicalId":187936,"journal":{"name":"2017 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISVLSI.2017.124","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 39

Abstract

Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT) / Internet of Everything (IoE), and Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while such systems continuously interact with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, these systems must not only deliver high performance within a tight power/energy envelope, but also be intelligent/cognitive, self-learning, and robust. As a result, a surge of interest in artificial intelligence research (e.g., deep learning and other machine learning techniques) has emerged in numerous communities. This paper discusses the challenges and opportunities in building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired and emerging computing paradigms, such as approximate computing, that can further reduce the energy requirements of the system. First, we walk through an approximate-computing-based methodology for developing energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that an in-depth analysis of a DNN's datapaths enables better selection of approximate computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. Finally, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.
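To make the datapath-analysis point concrete, below is a minimal Python sketch (not the authors' toolflow) of how a candidate approximate multiplier might be profiled against the exact datapath of a single convolutional kernel. The truncation-based multiplier, the 3x3 unsigned 8-bit operands, and the error metric are illustrative assumptions; the idea is only that per-layer error statistics of this kind could guide which approximate modules a DNN datapath can tolerate.

```python
# Illustrative sketch (assumed setup, not the paper's methodology): estimate the
# error a candidate approximate multiplier introduces in one convolutional
# multiply-accumulate datapath, so candidates can be ranked per layer.

import numpy as np

def exact_mul(a, b):
    return int(a) * int(b)

def approx_mul_truncated(a, b, dropped_bits=3):
    # Hypothetical approximate multiplier: zero out the low bits of both
    # operands before multiplying (smaller circuit in hardware, larger error).
    mask = ~((1 << dropped_bits) - 1)
    return (int(a) & mask) * (int(b) & mask)

def mac_error(weights, activations, mul):
    """Absolute error of the approximate dot product versus the exact
    dot product for one input patch."""
    exact = sum(exact_mul(w, x) for w, x in zip(weights, activations))
    approx = sum(mul(w, x) for w, x in zip(weights, activations))
    return abs(exact - approx)

rng = np.random.default_rng(0)
errors = []
for _ in range(1000):
    w = rng.integers(0, 256, size=9)   # 3x3 kernel, unsigned 8-bit weights
    x = rng.integers(0, 256, size=9)   # unsigned 8-bit activations
    errors.append(mac_error(w, x, approx_mul_truncated))

print(f"mean |error| per 3x3 MAC: {np.mean(errors):.1f}")
```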
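The multi-objective search can likewise be sketched in a few lines. The candidate encoding (per-layer bit widths), the error/energy proxies, and the population sizes below are hypothetical placeholders rather than the paper's actual algorithm; the sketch only shows how Pareto-based selection retains a set of error/energy trade-offs from which an adaptive system could pick.

```python
# Minimal multi-objective evolutionary sketch (assumed encoding and fitness, not
# the paper's algorithm): each candidate assigns a bit width to every layer of a
# 4-layer network; selection keeps the non-dominated (Pareto) front of
# (proxy error, proxy energy) pairs.

import random

LAYERS = 4
BITS = list(range(2, 9))              # candidate bit widths per layer

def random_candidate():
    return [random.choice(BITS) for _ in range(LAYERS)]

def mutate(cand):
    c = cand[:]
    c[random.randrange(LAYERS)] = random.choice(BITS)
    return c

def evaluate(cand):
    # Hypothetical proxies: error grows as bits shrink, energy grows with bits.
    error = sum(1.0 / (2 ** b) for b in cand)
    energy = sum(b * b for b in cand)
    return error, energy

def dominates(f1, f2):
    # f1 dominates f2 if it is no worse in both objectives and better in one.
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def pareto_front(population):
    fits = [(c, evaluate(c)) for c in population]
    return [c for c, f in fits
            if not any(dominates(g, f) for _, g in fits if g != f)]

population = [random_candidate() for _ in range(20)]
for _ in range(50):                   # generations
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = pareto_front(population + offspring)

for c in sorted(population):
    print(c, evaluate(c))
```

Keeping the whole non-dominated front, rather than a single best candidate, is what would let an adaptive system trade accuracy for energy at run time by switching between the retained configurations.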