Acceleration of CUDA programs for non-GPU users using cloud

Tejas Pisal, Sandip M. Walunj, A. Shrimali, Omprakash Gautam, Lalit P. Patil
{"title":"Acceleration of CUDA programs for non-GPU users using cloud","authors":"Tejas Pisal, Sandip M. Walunj, A. Shrimali, Omprakash Gautam, Lalit P. Patil","doi":"10.1109/ICGCIOT.2015.7380490","DOIUrl":null,"url":null,"abstract":"The use of Graphics processing unit (GPU) and cloud computing has increased at a higher rate. GPU provides high speed computational power for various applications and accelerates there executional speed by the parallel processing units. Maximum utilization of GPU is enabled by CUDA which is one of the parallel processing model. The power of GPU is effectively utilized by it. Cloud computing on the other hand provides remote nature to access the pool of various computational services on the network. If a network connection is in existence then cloud computing model can allow you to access computer resources and information from anywhere. Cloud computing provides a wide range of shared resources such as data storage space, network, processing power and specialized and specific corporate and user services. Cloud services allows an individual to access hardware and software from a remote location which is managed by a third party. The paper proposes a model which combines these two technologies: Processing CUDA programs on GPU and cloud computing. Due to which non-GPU user can access GPU services and resources remotely on cloud. In the model we combine the processing power of GPU and capabilities of cloud computing. It will also enhance the overall computing speed. The issues of cost, flexibility, scalability will be conquered. The system could also accelerate the overall execution speed of a single application by assigning multiple GPU's to it at a time.","PeriodicalId":400178,"journal":{"name":"2015 International Conference on Green Computing and Internet of Things (ICGCIoT)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Green Computing and Internet of Things (ICGCIoT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICGCIOT.2015.7380490","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The use of graphics processing units (GPUs) and cloud computing has grown rapidly. GPUs provide high-speed computational power for a wide range of applications and accelerate their execution through parallel processing units. CUDA, a parallel programming model, enables this power to be exploited effectively and the GPU to be utilized to its fullest. Cloud computing, on the other hand, offers remote access to a pool of computational services over the network: as long as a network connection exists, the cloud model lets users reach computing resources and information from anywhere. It provides a wide range of shared resources such as data storage, networking, processing power, and specialized corporate and user services, allowing individuals to access hardware and software hosted at a remote location and managed by a third party. This paper proposes a model that combines these two technologies, the execution of CUDA programs on GPUs and cloud computing, so that users without a GPU can access GPU services and resources remotely through the cloud. The model joins the processing power of the GPU with the capabilities of cloud computing, improving overall computing speed while addressing concerns of cost, flexibility, and scalability. The system can also accelerate the execution of a single application by assigning multiple GPUs to it at once.
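The abstract describes the model only at a high level. As a rough illustration, the sketch below shows the kind of CUDA program a non-GPU user might submit to such a cloud service, and how work could be split across however many GPUs the host assigns to the job. The vecAdd kernel and the chunking scheme are hypothetical examples, not taken from the paper; only the standard CUDA runtime calls (cudaGetDeviceCount, cudaSetDevice, cudaMalloc, cudaMemcpy) are real API.

```cuda
// Hypothetical sketch: a CUDA program that spreads a vector addition across
// all GPUs visible on the cloud host, mirroring the abstract's idea of
// assigning multiple GPUs to a single application.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 20;
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);   // GPUs made available by the cloud host
    if (deviceCount == 0) { printf("No GPU available\n"); return 1; }

    float *a = new float[N], *b = new float[N], *c = new float[N];
    for (int i = 0; i < N; ++i) { a[i] = float(i); b[i] = 2.0f * i; }

    int chunk = (N + deviceCount - 1) / deviceCount;
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);               // each assigned GPU processes one slice
        int offset = d * chunk;
        int count  = (offset + chunk <= N) ? chunk : N - offset;
        if (count <= 0) break;

        float *da, *db, *dc;
        size_t bytes = count * sizeof(float);
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, a + offset, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b + offset, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(count + 255) / 256, 256>>>(da, db, dc, count);

        cudaMemcpy(c + offset, dc, bytes, cudaMemcpyDeviceToHost);
        cudaFree(da); cudaFree(db); cudaFree(dc);
    }

    printf("c[0]=%f c[N-1]=%f\n", c[0], c[N - 1]);
    delete[] a; delete[] b; delete[] c;
    return 0;
}
```

In the proposed setting, a user without local GPU hardware would submit source like this to the cloud service, which compiles and runs it on remote GPUs and returns the output; the device count the program sees would simply reflect how many GPUs the service chose to allocate.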