Computation in Memory for Data-Intensive Applications: Beyond CMOS and Beyond Von-Neumann

S. Hamdioui
{"title":"Computation in Memory for Data-Intensive Applications: Beyond CMOS and beyond Von- Neumann","authors":"S. Hamdioui","doi":"10.1145/2764967.2771820","DOIUrl":null,"url":null,"abstract":"One of the most critical challenges for today's and future data-intensive and big-data problems (ranging from economics and business activities to public administration, from national security to many scientific research areas) is data storage and analysis. The primary goal is to increase the understanding of processes by extracting highly useful values hidden in the huge volumes of data. The increase of the data size has already surpassed the capabilities of today's computation architectures which suffer from the limited bandwidth (due to communication and memory-access bottlenecks), energy inefficiency and limited scalability (due to CMOS technology). This talk will first address the CMOS scaling and its impact on different aspects of IC and electronics; the major limitations the scaling is facing (such as leakage, yield, reliability, etc) will be shown and the need of a new technology will be motivated. Thereafter, an overview of computing systems, developed since the introduction of Stored program computers by John von Neumann in the forties, will be given. Shortcomings of today's architectures to deal with data-intensive applications will be discussed. It will be shown that the speed at which data is growing has already surpassed the capabilities of today's computation architectures suffering from communication bottleneck and energy inefficiency; hence the need for a new architecture. Finally, the talk will introduce a new architecture paradigm for big data problems; it is based on the integration of the storage and computation in the same physical location (using a cross-bar topology) and the use of non-volatile resistive-switching technology, based on memristors, instead of CMOS technology. The huge potential of such architecture in realizing order of magnitude improvement will be illustrated by comparing it with the state-of-the art architectures (multi-core, GPUs, FPGAs) for different data-intensive applications.","PeriodicalId":110157,"journal":{"name":"Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2764967.2771820","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

One of the most critical challenges for today's and future data-intensive and big-data problems (ranging from economics and business activities to public administration, from national security to many scientific research areas) is data storage and analysis. The primary goal is to increase the understanding of processes by extracting highly useful value hidden in huge volumes of data. The growth of data has already surpassed the capabilities of today's computation architectures, which suffer from limited bandwidth (due to communication and memory-access bottlenecks), energy inefficiency, and limited scalability (due to CMOS technology). This talk will first address CMOS scaling and its impact on different aspects of ICs and electronics; the major limitations scaling faces (such as leakage, yield, and reliability) will be discussed, and the need for a new technology will be motivated. Thereafter, an overview of computing systems developed since the introduction of stored-program computers by John von Neumann in the 1940s will be given, and the shortcomings of today's architectures in dealing with data-intensive applications will be discussed. It will be shown that the speed at which data is growing has already outpaced today's computation architectures, which suffer from communication bottlenecks and energy inefficiency; hence the need for a new architecture. Finally, the talk will introduce a new architecture paradigm for big-data problems: it integrates storage and computation in the same physical location (using a crossbar topology) and relies on non-volatile resistive-switching technology based on memristors instead of CMOS technology. The huge potential of such an architecture in realizing orders-of-magnitude improvements will be illustrated by comparing it with state-of-the-art architectures (multi-core CPUs, GPUs, and FPGAs) for different data-intensive applications.
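The abstract itself contains no code, but the core idea behind a memristor crossbar is easy to sketch. The snippet below is a minimal, idealized NumPy simulation (not from the talk) of the canonical computation-in-memory primitive: an analog matrix-vector multiply performed inside the memory array, where each crosspoint's programmed conductance acts as a stored weight and Ohm's and Kirchhoff's laws do the multiply-accumulate. The function name `crossbar_mvm` and all numerical values are illustrative assumptions; real devices would add non-idealities (wire resistance, limited conductance levels, noise) that this sketch ignores.

```python
import numpy as np

def crossbar_mvm(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Idealized crossbar read-out: column currents for given row voltages.

    conductances -- (rows x cols) matrix of crosspoint conductances, in siemens
                    (the inverse of each memristor's programmed resistance)
    voltages     -- (rows,) vector of voltages applied to the rows, in volts
    Returns the (cols,) vector of column currents, in amperes.
    """
    # Each column current is the sum of V_i * G_ij over all rows i.
    # The multiply-accumulate happens in the analog domain, inside the
    # storage array itself, with no data shuttled to a separate CPU --
    # which is why the architecture sidesteps the memory-access bottleneck.
    return conductances.T @ voltages

# Hypothetical example: a 3x2 weight matrix mapped onto conductances.
G = np.array([[1e-4, 2e-4],
              [3e-4, 1e-4],
              [2e-4, 5e-4]])   # siemens
V = np.array([0.2, 0.1, 0.3])  # volts on the three rows
print(crossbar_mvm(G, V))      # column currents = the matrix-vector product
```

In a physical crossbar this entire product is produced in a single read step, regardless of matrix size, which is the source of the orders-of-magnitude gains the talk compares against multi-core CPUs, GPUs, and FPGAs.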