{"title":"具有约束并行性的任务并行规划","authors":"Tsung-Wei Huang, L. Hwang","doi":"10.1109/HPEC55821.2022.9926348","DOIUrl":null,"url":null,"abstract":"Task graph programming model (TGPM) has become central to a wide range of scientific computing applications because it enables top-down optimization of parallelism that governs the macro-scale performance. Existing TGPMs focus on expressing tasks and dependencies of a workload and leave the scheduling details to a library runtime. While maximizing the task concurrency is a typical scheduling goal, many applications require task parallelism to be constrained during the graph execution. Examples are limiting the number of worker threads in a subgraph or relating a conflict between two tasks. However, mainstream TGPMs have largely ignored this important feature of constrained parallelism in a task graph. Users have no choice but to implement a separate and often sophisticated scheduling solution that is neither generalizable nor scalable. In this paper, we propose a semaphore programming model and a scheduling method both of which can be easily integrated into an existing TGPM to support constrained parallelism. We have demonstrated the effectiveness and efficiency of our approach in real applications. As an example, our semaphore model speeds up an industrial circuit placement workload up to 28%.","PeriodicalId":200071,"journal":{"name":"2022 IEEE High Performance Extreme Computing Conference (HPEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Task-Parallel Programming with Constrained Parallelism\",\"authors\":\"Tsung-Wei Huang, L. Hwang\",\"doi\":\"10.1109/HPEC55821.2022.9926348\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Task graph programming model (TGPM) has become central to a wide range of scientific computing applications because it enables top-down optimization of parallelism that governs the macro-scale performance. Existing TGPMs focus on expressing tasks and dependencies of a workload and leave the scheduling details to a library runtime. While maximizing the task concurrency is a typical scheduling goal, many applications require task parallelism to be constrained during the graph execution. Examples are limiting the number of worker threads in a subgraph or relating a conflict between two tasks. However, mainstream TGPMs have largely ignored this important feature of constrained parallelism in a task graph. Users have no choice but to implement a separate and often sophisticated scheduling solution that is neither generalizable nor scalable. In this paper, we propose a semaphore programming model and a scheduling method both of which can be easily integrated into an existing TGPM to support constrained parallelism. We have demonstrated the effectiveness and efficiency of our approach in real applications. 
As an example, our semaphore model speeds up an industrial circuit placement workload up to 28%.\",\"PeriodicalId\":200071,\"journal\":{\"name\":\"2022 IEEE High Performance Extreme Computing Conference (HPEC)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE High Performance Extreme Computing Conference (HPEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPEC55821.2022.9926348\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE High Performance Extreme Computing Conference (HPEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPEC55821.2022.9926348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Task-Parallel Programming with Constrained Parallelism
The task graph programming model (TGPM) has become central to a wide range of scientific computing applications because it enables top-down optimization of the parallelism that governs macro-scale performance. Existing TGPMs focus on expressing the tasks and dependencies of a workload and leave the scheduling details to a library runtime. While maximizing task concurrency is a typical scheduling goal, many applications require task parallelism to be constrained during graph execution. Examples include limiting the number of worker threads in a subgraph or expressing a conflict between two tasks. However, mainstream TGPMs have largely ignored this important feature of constrained parallelism in a task graph. Users have no choice but to implement a separate and often sophisticated scheduling solution that is neither generalizable nor scalable. In this paper, we propose a semaphore programming model and a scheduling method, both of which can be easily integrated into an existing TGPM to support constrained parallelism. We have demonstrated the effectiveness and efficiency of our approach in real applications. As an example, our semaphore model speeds up an industrial circuit placement workload by up to 28%.
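To make the idea of constrained parallelism concrete, the sketch below shows how a semaphore attached to tasks can cap the concurrency of a group of otherwise-independent tasks. It assumes a Taskflow-style C++ interface (tf::Semaphore with Task::acquire/Task::release, as found in Taskflow 3.x releases prior to 3.7); the exact programming model proposed in the paper may differ, and the task bodies and counts here are purely illustrative.

```cpp
// Minimal sketch: serialize a group of independent tasks with a semaphore.
// Assumes a Taskflow-3.x-style API (tf::Semaphore, Task::acquire/release);
// this is an illustration, not the paper's exact interface.
#include <taskflow/taskflow.hpp>
#include <cstdio>

int main() {
  tf::Executor executor(4);    // 4 worker threads available
  tf::Taskflow taskflow;

  tf::Semaphore semaphore(1);  // initial count 1: at most one holder at a time

  // Five independent tasks that would otherwise run fully in parallel.
  for (int i = 0; i < 5; ++i) {
    tf::Task task = taskflow.emplace([i]() {
      std::printf("critical task %d\n", i);
    });
    // Each task acquires the semaphore before running and releases it when
    // done, so the scheduler never runs more than one of them concurrently.
    task.acquire(semaphore);
    task.release(semaphore);
  }

  executor.run(taskflow).wait();
  return 0;
}
```

Setting the semaphore's initial count to k instead caps the group at k concurrent tasks (e.g., limiting the worker threads active in a subgraph), while attaching a one-count semaphore to exactly two tasks expresses a pairwise conflict without introducing an artificial dependency edge between them.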