{"title":"集群计算系统的编译器控制并行无关调度方法","authors":"K. Nikolova, M. Sowa","doi":"10.1109/HPCSA.2002.1019153","DOIUrl":null,"url":null,"abstract":"We propose a hybrid parallelism-independent scheduling method, predominantly performed at compile time, which generates a machine code efficiently executable on any number of workstations or PCs in a cluster computing environment. Our scheduling algorithm called the dynamical level parallelism-independent scheduling algorithm (DLPIS) is applicable for distributed computer systems because additionally to the task scheduling, we perform message communication scheduling. It provides an explicit task synchronization mechanism guiding the task allocation and data dependency solution at run time at reduced overhead. Furthermore, we provide a mechanism allowing the self-adaptation of the machine code to the degree of parallelism of the system at run-time. Therefore our scheduling method supports the variable number of processors in the users' computing systems and the adaptive parallelism, which may occur in distributed computing systems due to computer or link failure.","PeriodicalId":111862,"journal":{"name":"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Compiler-controlled parallelism-independent scheduling method for cluster computing systems\",\"authors\":\"K. Nikolova, M. Sowa\",\"doi\":\"10.1109/HPCSA.2002.1019153\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a hybrid parallelism-independent scheduling method, predominantly performed at compile time, which generates a machine code efficiently executable on any number of workstations or PCs in a cluster computing environment. Our scheduling algorithm called the dynamical level parallelism-independent scheduling algorithm (DLPIS) is applicable for distributed computer systems because additionally to the task scheduling, we perform message communication scheduling. It provides an explicit task synchronization mechanism guiding the task allocation and data dependency solution at run time at reduced overhead. Furthermore, we provide a mechanism allowing the self-adaptation of the machine code to the degree of parallelism of the system at run-time. 
Therefore our scheduling method supports the variable number of processors in the users' computing systems and the adaptive parallelism, which may occur in distributed computing systems due to computer or link failure.\",\"PeriodicalId\":111862,\"journal\":{\"name\":\"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2002-06-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HPCSA.2002.1019153\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 16th Annual International Symposium on High Performance Computing Systems and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HPCSA.2002.1019153","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Compiler-controlled parallelism-independent scheduling method for cluster computing systems
We propose a hybrid parallelism-independent scheduling method, performed predominantly at compile time, which generates machine code that executes efficiently on any number of workstations or PCs in a cluster computing environment. Our scheduling algorithm, the dynamic-level parallelism-independent scheduling algorithm (DLPIS), is well suited to distributed computer systems because, in addition to task scheduling, it performs message-communication scheduling. It provides an explicit task synchronization mechanism that guides task allocation and resolves data dependencies at run time with reduced overhead. Furthermore, we provide a mechanism that lets the machine code adapt itself to the degree of parallelism of the system at run time. Our scheduling method therefore supports a variable number of processors in the user's computing system, as well as the adaptive parallelism that may arise in distributed computing systems due to computer or link failures.
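To make the scheduling idea concrete, below is a minimal Python sketch of dynamic-level list scheduling over a task DAG. It is an illustration under assumptions, not the paper's DLPIS implementation: the diamond task graph, the cost numbers, the uniform communication penalty, and the dynamic-level formula (static level minus earliest possible start time, in the spirit of Sih and Lee's DLS heuristic) are all ours, and the sketch omits the paper's compile-time message scheduling and self-adapting machine code.

```python
# Minimal sketch of dynamic-level list scheduling over a task DAG.
# Illustrative only: the graph, costs, and dynamic-level formula are
# assumptions, not the authors' exact DLPIS algorithm.

from collections import defaultdict

def static_levels(tasks, succ, cost):
    """Longest compute-cost path from each task to an exit task."""
    level = {}
    def sl(t):
        if t not in level:
            level[t] = cost[t] + max((sl(s) for s in succ[t]), default=0)
        return level[t]
    for t in tasks:
        sl(t)
    return level

def schedule(tasks, succ, cost, comm, n_procs):
    """Greedily map tasks onto n_procs identical processors by dynamic level."""
    pred = defaultdict(list)
    for t, ss in succ.items():
        for s in ss:
            pred[s].append(t)
    sl = static_levels(tasks, succ, cost)
    proc_free = [0.0] * n_procs          # time each processor becomes free
    finish, placed = {}, {}              # task -> finish time / processor
    ready = {t for t in tasks if not pred[t]}
    order = []
    while ready:
        best = None
        for t in ready:
            for p in range(n_procs):
                # data from a predecessor on another processor pays `comm`
                data_ready = max((finish[q] + (0 if placed[q] == p else comm)
                                  for q in pred[t]), default=0.0)
                est = max(proc_free[p], data_ready)
                dl = sl[t] - est         # dynamic level of (task, processor)
                if best is None or dl > best[0]:
                    best = (dl, est, t, p)
        _, est, t, p = best
        finish[t] = est + cost[t]
        placed[t] = p
        proc_free[p] = finish[t]
        ready.discard(t)
        order.append((t, p, est))
        for s in succ[t]:                # a successor becomes ready once
            if all(q in finish for q in pred[s]):
                ready.add(s)
    return order

if __name__ == "__main__":
    # Diamond-shaped DAG: a feeds b and c, which both feed d.
    succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    cost = {"a": 2, "b": 3, "c": 1, "d": 2}
    for n in (1, 2, 4):
        print(n, schedule(list(cost), succ, cost, comm=1, n_procs=n))
```

Running the same task graph with 1, 2, and 4 processors illustrates the parallelism-independence property the abstract describes: the schedule adapts to whatever processor count is available, without regenerating the program.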