Case for Dynamic Parallelisation using Learning Techniques

Karthik Gurunathan, K. Kartikey, T. Sudarshan, KN Divyaprabha
DOI: 10.1109/CSNT48778.2020.9115757
Published in: 2020 IEEE 9th International Conference on Communication Systems and Network Technologies (CSNT), April 2020
Citations: 0

Abstract

Case for Dynamic Parallelisation using Learning Techniques
Parallelisation involves dividing computational tasks statically or dynamically. Static analyses and studies of the evolution of compilation approaches show how different techniques are employed to distribute the computational load from the main CPU (Central Processing Unit) to associated GPUs (Graphical Processing Units) and other pre-defined sets of accelerators. This load sharing is often done before the hardware is deployed for its core computational task. Several learning techniques have evolved to optimise such load sharing. The purpose of this paper is to provide insight into how dynamic parallelisation can be accomplished. This work takes inspiration from current learning techniques in static systems, which continue to grow more scalable and more efficient and offer better memory access, and extends them to the field of dynamic load sharing, a fledgling field that has not yet used learning techniques to their fullest. As a precursor, existing static parallelisation techniques are surveyed to provide a compelling case for the above. Learning techniques help evolve a robust data-parallelism scheme that allows any parallelising tool to learn incrementally.
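The abstract's closing idea, a scheme that "allows any parallelising tool to learn incrementally", can be illustrated with a minimal sketch. The paper does not publish code; the snippet below is an assumption-laden toy in which an epsilon-greedy scheduler incrementally learns a running average runtime per device and routes tasks to the faster one. The device names, cost model, and all numbers are hypothetical stand-ins for real profiling measurements.

```python
import random

DEVICES = ["cpu", "gpu"]

def simulated_runtime(device, task_size):
    """Hypothetical cost model: the GPU has a launch overhead but
    amortises large tasks far better than the CPU."""
    if device == "gpu":
        return 5.0 + 0.01 * task_size   # fixed overhead, fast per item
    return 1.0 + 0.10 * task_size        # low overhead, slow per item

class IncrementalScheduler:
    """Learns a running-average runtime per device; no history stored."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.avg = {d: 0.0 for d in DEVICES}
        self.count = {d: 0 for d in DEVICES}

    def choose(self):
        # Explore occasionally (or until every device has been tried),
        # otherwise exploit the fastest device observed so far.
        if random.random() < self.epsilon or 0 in self.count.values():
            return random.choice(DEVICES)
        return min(DEVICES, key=lambda d: self.avg[d])

    def update(self, device, runtime):
        # Incremental mean update: avg += (x - avg) / n
        self.count[device] += 1
        self.avg[device] += (runtime - self.avg[device]) / self.count[device]

random.seed(0)
sched = IncrementalScheduler()
for _ in range(200):
    task_size = 500                      # large tasks favour the GPU here
    dev = sched.choose()
    sched.update(dev, simulated_runtime(dev, task_size))

print(sched.avg)
```

After a few exploratory dispatches the scheduler's averages converge on the true costs and nearly all subsequent tasks go to the GPU. A real dynamic parallelising tool would replace `simulated_runtime` with measured wall-clock times taken at run time, which is precisely what distinguishes this from the static, ahead-of-deployment load sharing the paper surveys.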