Case for Dynamic Parallelisation using Learning Techniques

Karthik Gurunathan, K. Kartikey, T. Sudarshan, KN Divyaprabha

2020 IEEE 9th International Conference on Communication Systems and Network Technologies (CSNT), April 2020. DOI: 10.1109/CSNT48778.2020.9115757 (https://doi.org/10.1109/CSNT48778.2020.9115757)
Parallelisation involves dividing computational tasks either statically or dynamically. Static analyses and studies on the evolution of compilation approaches show how different techniques are employed to distribute the computational load from the main CPU (Central Processing Unit) to associated GPUs (Graphical Processing Units) and other pre-defined sets of accelerators. This load sharing is typically decided before the hardware is deployed for its core computational task. Several learning techniques have evolved to optimise such load sharing. The purpose of this paper is to provide insight into how dynamic parallelisation can be accomplished. This work takes inspiration from current learning techniques in static systems, which continue to grow more scalable and efficient and to offer better memory access, and extends them to dynamic load sharing, a fledgling field that has not yet used learning techniques to their fullest. As a precursor, existing static parallelisation techniques are surveyed to provide a compelling case for the above. Learning techniques help evolve a robust data parallelism scheme that allows any parallelising tool to learn incrementally.
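To make the abstract's central idea concrete, the sketch below shows one plausible shape such a system could take; it is an illustrative assumption, not the authors' implementation. A dispatcher treats CPU-vs-GPU placement as an epsilon-greedy contextual bandit: task sizes are bucketed, an incremental running mean of observed runtime is kept per (bucket, device), and each new task is routed to the device currently predicted fastest. The device names, the task-size feature, the bucket boundaries, and the simulated cost curves are all hypothetical.

```python
# Illustrative sketch only: a dispatcher that learns dynamic CPU/GPU load
# sharing online. Devices, features, and cost curves are assumptions made
# for this example, not taken from the paper.
import random
from collections import defaultdict

BUCKETS = [10, 50, 100, 200, float("inf")]  # assumed task-size buckets


def bucket(size):
    """Map a task size onto the index of the first bucket that fits it."""
    return next(i for i, bound in enumerate(BUCKETS) if size <= bound)


class LearningDispatcher:
    """Epsilon-greedy bandit mapping (size bucket) -> fastest device."""

    def __init__(self, devices, explore=0.1):
        self.devices = devices
        self.explore = explore
        self.mean = defaultdict(float)  # running mean runtime per (bucket, device)
        self.count = defaultdict(int)   # samples seen per (bucket, device)

    def choose(self, size):
        b = bucket(size)
        untried = [d for d in self.devices if self.count[(b, d)] == 0]
        if untried or random.random() < self.explore:
            # Explore: try devices we have little or no data for.
            return random.choice(untried or self.devices)
        # Exploit: pick the device with the lowest predicted runtime.
        return min(self.devices, key=lambda d: self.mean[(b, d)])

    def record(self, device, size, runtime):
        # Incremental mean update: the scheme keeps learning as tasks arrive,
        # matching the abstract's notion of incremental learning.
        key = (bucket(size), device)
        self.count[key] += 1
        self.mean[key] += (runtime - self.mean[key]) / self.count[key]


def simulated_runtime(device, size):
    """Toy cost model: the GPU pays a launch overhead but scales better."""
    if device == "gpu":
        return 5.0 + 0.01 * size + random.gauss(0, 0.5)
    return 0.5 + 0.10 * size + random.gauss(0, 0.5)


dispatcher = LearningDispatcher(["cpu", "gpu"])
for _ in range(2000):
    size = random.uniform(1, 200)
    device = dispatcher.choose(size)
    dispatcher.record(device, size, simulated_runtime(device, size))

# After enough observations, small tasks should route to the CPU and
# large ones to the GPU, with no hand-written placement rules.
print("size  10 ->", dispatcher.choose(10))
print("size 150 ->", dispatcher.choose(150))
```

The design choice worth noting is that the placement policy is never specified by hand: it emerges from observed runtimes, so the same dispatcher adapts if the hardware mix or workload changes, which is the advantage the abstract claims for learned dynamic load sharing over statically decided distribution.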