Static Prediction of Parallel Computation Graphs (Abstract)
Stefan K. Muller
Proceedings of the 2023 ACM Workshop on Highlights of Parallel Computing
Published: 2023-07-18 · DOI: 10.1145/3597635.3598026
Abstract
Many results in the theory of parallel scheduling, dating back to Brent's Theorem, are expressed in terms of the parallel dependency structure of a program as represented by a Directed Acyclic Graph (DAG). In the world of parallel and concurrent program analysis, such DAG models are also used to study deadlock, data races, and priority inversions, to name just a few examples. In all of these cases, it tends to be convenient to think of the DAG as a model of the program itself: we might say, for example, that the time to run a parallel program on P processors depends on the work and span of the program's DAG. This assumes that the DAG is a static, predictable property of the program. In reality, however, a DAG typically models the runtime relationships between threads during a particular execution of a program. To obtain the DAG, one might simulate an execution (or all possible executions) using some form of cost semantics, a dynamic semantics that produces the DAG as it executes the program. In fine-grained parallel programs, such as those written with fork/join, spawn/sync, async/finish, or futures, these DAGs tend to be especially dynamic and dependent on the particulars of a given execution. For example, a divide-and-conquer algorithm implemented using fork/join parallelism may divide a number of times that depends on the input size, and a program written with futures can choose whether or not to wait on threads depending on conditions available only at runtime. Such programs are best represented by a (possibly infinite) family of DAGs, representing all possible executions of the program.
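To make the abstract's point concrete, here is an illustrative sketch (not from the paper): a toy "cost semantics" that unfolds a fork/join divide-and-conquer computation and reports the work (total nodes) and span (critical-path length) of the DAG that this particular execution would produce. The function name `psum` and the unit-cost accounting are assumptions chosen for illustration; note how the DAG, and hence work and span, depend on the input size, which is exactly why a single static DAG cannot model the program.

```python
def psum(lo, hi):
    """Skeleton of a parallel sum over [lo, hi).

    Rather than computing a sum, it plays the role of a cost semantics:
    it returns (work, span) of the DAG this execution unfolds into,
    charging one unit per leaf and one unit per fork/join node.
    """
    if hi - lo <= 1:
        return 1, 1                      # leaf: one unit of work, one on the path
    mid = (lo + hi) // 2
    wl, sl = psum(lo, mid)               # left forked subtask
    wr, sr = psum(mid, hi)               # right forked subtask
    # At the join: work is additive, span follows the longer branch.
    return wl + wr + 1, max(sl, sr) + 1

for n in (4, 16, 64):
    w, s = psum(0, n)
    print(f"n={n}: work={w}, span={s}")
```

For n = 16 this yields work 31 and span 5; by Brent's Theorem the running time on P processors is then bounded by roughly W/P + S. Different inputs yield different DAGs, so statically predicting these quantities requires reasoning about the whole family of possible executions.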