Furion: alleviating overheads for deep learning framework on single machine (work-in-progress)

L. Jin, Chao Wang, Lei Gong, Chongchong Xu, Yahui Hu, Luchao Tan, Xuehai Zhou

International Conference on Hardware/Software Codesign and System Synthesis, published 2018-09-30. DOI: 10.5555/3283568.3283582
Abstract
Deep learning has been successful at solving many kinds of tasks, and hardware accelerators with high performance and parallelism have become the mainstream way to implement deep neural networks. To increase hardware utilization, multiple applications often share the same compute resource. However, different applications may use different deep learning frameworks and occupy different amounts of resources. Without a scheduling platform that is compatible with different frameworks, resource competition leads to longer response times, out-of-memory failures, and other errors. When the system's resources cannot satisfy all applications at the same time, application switching overhead becomes excessive unless a reasonable resource management strategy is in place. In this paper, we propose Furion, a middleware that alleviates overheads for deep learning frameworks on a single machine. Furion schedules tasks, overlaps the execution of different computing resources, and batches unknown inputs to increase hardware accelerator utilization. It dynamically manages memory usage for each application to alleviate the overhead of application switching and to enable complex models to run on a low-end GPU. Our experiments show that Furion achieves a 2.2x-2.7x speedup on a GTX 1060.
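The abstract does not describe Furion's implementation in detail, but the "batching unknown inputs" idea can be illustrated with a minimal dynamic-batching sketch: requests that arrive at unpredictable times are held briefly and dispatched to the accelerator together, trading a small queuing delay for higher utilization. The names used here (`BatchingQueue`, `run_on_gpu`, `max_wait_ms`) are hypothetical and this is not Furion's actual API.

```python
import queue
import threading
import time

# Illustrative sketch only: a dynamic-batching queue that groups inference
# requests with unknown arrival times before running them on an accelerator.
# Not Furion's implementation; run_on_gpu is a placeholder for any callable
# that executes one batch.

class BatchingQueue:
    def __init__(self, run_on_gpu, max_batch=32, max_wait_ms=5):
        self.requests = queue.Queue()
        self.run_on_gpu = run_on_gpu        # callable that executes one batch
        self.max_batch = max_batch          # upper bound on batch size
        self.max_wait = max_wait_ms / 1000.0
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, inp):
        """Enqueue one input and block until its output is ready."""
        holder = {"input": inp, "done": threading.Event(), "output": None}
        self.requests.put(holder)
        holder["done"].wait()
        return holder["output"]

    def _worker(self):
        while True:
            batch = [self.requests.get()]   # block for the first request
            deadline = time.time() + self.max_wait
            # Collect more requests until the batch is full or the wait expires.
            while len(batch) < self.max_batch:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            outputs = self.run_on_gpu([r["input"] for r in batch])
            for r, out in zip(batch, outputs):
                r["output"] = out
                r["done"].set()


# Example usage with a stand-in batch function (doubles each input):
# bq = BatchingQueue(run_on_gpu=lambda xs: [x * 2 for x in xs])
# print(bq.submit(21))  # -> 42
```

In such a design, each application thread calls `submit` independently, and the per-request latency cost of batching is bounded by `max_wait_ms`, which is the kind of trade-off a middleware layer like Furion would tune to raise accelerator utilization.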