{"title":"Real-time Scheduling of I/O Transfers for Massively Parallel Processor Arrays","authors":"Dominik Walter, Michael Witterauf, J. Teich","doi":"10.1109/MEMOCODE51338.2020.9315179","DOIUrl":null,"url":null,"abstract":"A fundamental problem of massively parallel accelerator architectures is the management of typically small peripheral I/O buffers that decouple the accelerator from an external memory. Very often, these buffers cannot store the entire input and output data of one execution and must be updated, i.e., filled or drained, frequently. Moreover, if a processor array performs either a read on an empty bank or a write on a full bank, it must interrupt its execution immediately until the corresponding data transfer between the accelerator and an external memory has been carried out. As a consequence, the timing predictability of the array execution might be impaired. Therefore, a precise analysis of a schedule for all data transfers is inevitable. Moreover, as it is prohibitive to store all data transfers entirely within the accelerator itself, we must determine and schedule all necessary data transfers dynamically at runtime. In this paper, we present an approach to characterize all necessary data transfers and to issue them in time so that the peripheral I/O buffers never run full or empty. Here, it is shown first that a deadline for each data transfer can be derived from a given loop schedule resulting in a traditional task scheduling problem. Unfortunately, however, standard real-time scheduling techniques such as earliest deadline first (EDF) cannot be applied here, as each data transfer must not be interrupted and even existing non-preemptive variants of EDF are known to be prone to timing anomalies. As a solution, we present a strictly non-work-conserving variant of EDF together with an efficient schedulability test for periodic loop executions. In an experimental section, the scheduling approach is applied to a randomly generated set of loop programs observing that our algorithm is able to feasibly schedule 95% of the theoretically schedulable problem instances. Altogether, we provide a fully timing-predictable buffer management for massively parallel processor arrays that avoids any I/O related stalls of a processor array by construction.","PeriodicalId":212741,"journal":{"name":"2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MEMOCODE51338.2020.9315179","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
A fundamental problem of massively parallel accelerator architectures is the management of the typically small peripheral I/O buffers that decouple the accelerator from an external memory. Very often, these buffers cannot store the entire input and output data of one execution and must be updated, i.e., filled or drained, frequently. Moreover, if the processor array performs either a read on an empty bank or a write on a full bank, it must interrupt its execution immediately until the corresponding data transfer between the accelerator and the external memory has been carried out. As a consequence, the timing predictability of the array execution might be impaired. Therefore, a precise analysis of a schedule for all data transfers is indispensable. Moreover, as it is prohibitive to store all data transfers entirely within the accelerator itself, all necessary data transfers must be determined and scheduled dynamically at runtime. In this paper, we present an approach to characterize all necessary data transfers and to issue them in time so that the peripheral I/O buffers never run full or empty. We first show that a deadline for each data transfer can be derived from a given loop schedule, resulting in a traditional task scheduling problem. Unfortunately, standard real-time scheduling techniques such as earliest deadline first (EDF) cannot be applied here, as each data transfer must not be interrupted, and even existing non-preemptive variants of EDF are known to be prone to timing anomalies. As a solution, we present a strictly non-work-conserving variant of EDF together with an efficient schedulability test for periodic loop executions. In an experimental section, the scheduling approach is applied to a randomly generated set of loop programs, where we observe that our algorithm feasibly schedules 95% of the theoretically schedulable problem instances. Altogether, we provide fully timing-predictable buffer management for massively parallel processor arrays that avoids any I/O-related stalls of the processor array by construction.
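The abstract only sketches the scheduling idea at a high level. As a rough illustration of what a strictly non-work-conserving EDF dispatcher for non-preemptible I/O transfers could look like, the following Python sketch models each transfer by a release time, a duration, and a deadline derived from the loop schedule, and deliberately inserts idle time whenever starting the earliest-deadline ready transfer could push another transfer past its deadline. The `Transfer` class, the `edf_non_work_conserving` function, and the one-step look-ahead feasibility check are hypothetical simplifications for illustration, not the algorithm or the schedulability test from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Transfer:
    """One non-preemptible I/O transfer (hypothetical model, not the paper's formalization)."""
    release: int   # earliest time the transfer may start
    duration: int  # time the transfer occupies the memory channel
    deadline: int  # latest allowed completion time, derived from the loop schedule

def edf_non_work_conserving(transfers: List[Transfer],
                            horizon: int) -> Optional[List[Tuple[int, Transfer]]]:
    """Greedy, strictly non-work-conserving EDF dispatcher (illustrative sketch only).

    Whenever the memory channel is idle, the ready transfer with the earliest
    deadline is considered, but it is started only if running it to completion
    cannot push any other known transfer past its deadline (checked here with a
    crude one-transfer look-ahead). Otherwise the channel is deliberately left
    idle even though work is pending -- the non-work-conserving step.
    Returns a list of (start_time, transfer) pairs, or None if no feasible
    schedule is found by this greedy policy.
    """
    pending = sorted(transfers, key=lambda tr: tr.deadline)
    schedule: List[Tuple[int, Transfer]] = []
    t = 0
    while pending and t <= horizon:
        ready = [tr for tr in pending if tr.release <= t]
        if not ready:
            t += 1
            continue
        candidate = ready[0]              # earliest deadline among ready transfers
        finish = t + candidate.duration
        if finish > candidate.deadline:
            return None                   # delaying further can only make this worse
        # Crude look-ahead: would occupying the channel until 'finish' force some
        # other transfer (considered in isolation) past its deadline?
        blocks_other = any(
            max(finish, other.release) + other.duration > other.deadline
            for other in pending if other is not candidate
        )
        if blocks_other:
            t += 1                        # insert idle time on purpose
        else:
            schedule.append((t, candidate))
            pending.remove(candidate)
            t = finish
    return schedule if not pending else None

# Tiny usage example with made-up numbers:
if __name__ == "__main__":
    demo = [Transfer(release=0, duration=4, deadline=10),
            Transfer(release=2, duration=3, deadline=6)]
    print(edf_non_work_conserving(demo, horizon=20))
```

In the small example, a work-conserving non-preemptive EDF would start the long transfer at time 0 and thereby force the tighter transfer past its deadline; the deliberate idling until time 2 is exactly the kind of non-work-conserving behavior the abstract argues for.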