Moving to memoryland: in-memory computation for existing applications

P. Trancoso

Proceedings of the 12th ACM International Conference on Computing Frontiers, May 6, 2015

DOI: 10.1145/2742854.2742874 (https://doi.org/10.1145/2742854.2742874)
Citations: 12
Abstract
Migrating computation to memory was proposed long ago as a way to overcome the memory bandwidth and latency bottleneck, as well as to increase computation parallelism. While the concept has been applied in several research projects, it is only recently that the technological hurdles have been solved and products are arriving on the market. Although in most cases we need to concentrate on developing new algorithms and porting applications to new models in order to fully exploit the potential of these products, we still want to execute existing applications efficiently. Therefore, in this work we focus on analyzing the in-memory computation characteristics of existing applications, in order to evaluate how they could be moved to "Memoryland". We present a tool that analyzes the locality of the memory accesses of the different routines in an application. Running this tool on different applications shows that while certain applications seem able to fit a small-granularity architecture (small memory-to-computation ratio), others have routines that require a large amount of data. We therefore believe that hierarchical in-memory processing architectures are a good fit for the demands of these different applications. In addition, the results show that for most applications we can limit the analysis to the routines that issue the most memory accesses.
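The analysis the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's actual tool: the per-routine profile format, the ratio threshold, and the coverage cutoff are all assumptions. It captures the two ideas in the abstract: classifying routines by their memory-to-computation ratio, and restricting attention to the routines that issue the most memory accesses.

```python
def classify_routines(profile, ratio_threshold=0.5, coverage=0.9):
    """Classify routines for in-memory processing suitability.

    profile: dict mapping routine name -> (memory_accesses, compute_ops).
    Both the 0.5 ratio threshold and the 90% coverage cutoff are
    illustrative assumptions, not values from the paper.
    """
    # Rank routines by memory accesses and keep only the heaviest ones,
    # up to `coverage` of all accesses -- per the abstract, the analysis
    # can usually be limited to the routines issuing the most accesses.
    total = sum(mem for mem, _ in profile.values())
    ranked = sorted(profile.items(), key=lambda kv: kv[1][0], reverse=True)
    kept, seen = [], 0
    for name, (mem, ops) in ranked:
        kept.append((name, mem, ops))
        seen += mem
        if seen >= coverage * total:
            break
    # A small memory-to-computation ratio suggests the routine could fit
    # a small-granularity in-memory architecture; a large ratio suggests
    # it needs a level of the hierarchy holding more data.
    return {
        name: ("small-granularity"
               if mem / max(ops, 1) < ratio_threshold
               else "large-data")
        for name, mem, ops in kept
    }
```

For example, a compute-heavy routine with 1,000 accesses against 5,000 operations would be flagged as fitting a small-granularity architecture, while a streaming routine with 900 accesses against 100 operations would not; a routine contributing negligible accesses is dropped from the analysis entirely.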