{"title":"Computation in Memory for Data-Intensive Applications: Beyond CMOS and beyond Von- Neumann","authors":"S. Hamdioui","doi":"10.1145/2764967.2771820","DOIUrl":null,"url":null,"abstract":"One of the most critical challenges for today's and future data-intensive and big-data problems (ranging from economics and business activities to public administration, from national security to many scientific research areas) is data storage and analysis. The primary goal is to increase the understanding of processes by extracting highly useful values hidden in the huge volumes of data. The increase of the data size has already surpassed the capabilities of today's computation architectures which suffer from the limited bandwidth (due to communication and memory-access bottlenecks), energy inefficiency and limited scalability (due to CMOS technology). This talk will first address the CMOS scaling and its impact on different aspects of IC and electronics; the major limitations the scaling is facing (such as leakage, yield, reliability, etc) will be shown and the need of a new technology will be motivated. Thereafter, an overview of computing systems, developed since the introduction of Stored program computers by John von Neumann in the forties, will be given. Shortcomings of today's architectures to deal with data-intensive applications will be discussed. It will be shown that the speed at which data is growing has already surpassed the capabilities of today's computation architectures suffering from communication bottleneck and energy inefficiency; hence the need for a new architecture. Finally, the talk will introduce a new architecture paradigm for big data problems; it is based on the integration of the storage and computation in the same physical location (using a cross-bar topology) and the use of non-volatile resistive-switching technology, based on memristors, instead of CMOS technology. The huge potential of such architecture in realizing order of magnitude improvement will be illustrated by comparing it with the state-of-the art architectures (multi-core, GPUs, FPGAs) for different data-intensive applications.","PeriodicalId":110157,"journal":{"name":"Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems","volume":"115 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2764967.2771820","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
One of the most critical challenges for today's and future data-intensive and big-data problems (ranging from economics and business activities to public administration, from national security to many scientific research areas) is data storage and analysis. The primary goal is to increase the understanding of processes by extracting the highly useful insights hidden in huge volumes of data. Data growth has already outpaced the capabilities of today's computing architectures, which suffer from limited bandwidth (due to communication and memory-access bottlenecks), energy inefficiency, and limited scalability (due to CMOS technology). This talk will first address CMOS scaling and its impact on different aspects of ICs and electronics; the major limitations scaling faces (such as leakage, yield, and reliability) will be shown, and the need for a new technology will be motivated. Thereafter, an overview of computing systems developed since the introduction of stored-program computers by John von Neumann in the 1940s will be given, and the shortcomings of today's architectures in dealing with data-intensive applications will be discussed: the speed at which data is growing has already surpassed what architectures limited by communication bottlenecks and energy inefficiency can deliver; hence the need for a new architecture. Finally, the talk will introduce a new architectural paradigm for big-data problems. It is based on integrating storage and computation in the same physical location (using a crossbar topology) and on non-volatile resistive-switching technology based on memristors, instead of CMOS technology. The huge potential of such an architecture to realize order-of-magnitude improvements will be illustrated by comparing it with state-of-the-art architectures (multi-core CPUs, GPUs, FPGAs) for different data-intensive applications.
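To make the computation-in-memory idea concrete: in a memristor crossbar, each crosspoint stores a conductance, and applying voltages to the lines produces, by Ohm's and Kirchhoff's laws, output currents that are exactly a matrix-vector product computed where the data resides. The following is a minimal sketch of that ideal read-out, not code from the talk; the function and variable names (`crossbar_mvm`, `G`, `v`) and the parameter ranges are illustrative assumptions.

```python
# Sketch of analog matrix-vector multiplication in an ideal memristor
# crossbar: crosspoint (i, j) stores conductance G[i, j]; driving the
# columns with voltages v[j] yields row currents i[i] = sum_j G[i, j] * v[j],
# so the multiply-accumulate happens inside the memory array itself.
import numpy as np

def crossbar_mvm(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
    """Ideal crossbar read-out: output currents I = G @ V.

    conductances -- (rows, cols) memristor conductances in siemens
    voltages     -- (cols,) applied read voltages in volts
    returns      -- (rows,) summed bit-line currents in amperes
    """
    return conductances @ voltages

# Example: a 3x4 crossbar storing a weight matrix as device conductances.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(3, 4))  # conductances (S), illustrative range
v = rng.uniform(0.0, 0.2, size=4)         # read voltages (V), illustrative range
print(crossbar_mvm(G, v))                 # analog dot products, read as currents (A)
```

Because the operands never leave the array, such a read-out avoids the memory-access bottleneck the abstract identifies in von Neumann machines; real devices would add non-idealities (wire resistance, device variation) omitted from this sketch.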