Work-in-Progress: Evaluation Framework for Self-Suspending Schedulability Tests
Mario Günzel, Harun Teper, Kuan-Hsun Chen, Georg von der Brüggen, Jian-Jia Chen
2021 IEEE Real-Time Systems Symposium (RTSS), December 2021
DOI: 10.1109/rtss52674.2021.00058
Citations: 1
Abstract
Numerical simulations often play an important role when evaluating and comparing the performance of schedulability tests, as they make it possible to empirically demonstrate their applicability on synthesized task sets under various configurations. To provide a fair comparison of various schedulability tests, von der Brüggen et al. presented the first version of an evaluation framework for self-suspending task sets. In this work-in-progress, we further enhance the framework with features that ease its use, e.g., Python 3 support, an improved GUI, multiprocessing, Gurobi optimization, and external task evaluation. In addition, we integrate the state-of-the-art tests we are aware of into the framework. Moreover, the documentation is improved significantly to simplify its application in further research and development. To the best of our knowledge, the framework contains all suspension-aware schedulability tests for uniprocessor systems, and we aim to keep it up to date.
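To illustrate the kind of evaluation such a framework automates, the following is a minimal, self-contained Python sketch (not the framework's actual API): it synthesizes self-suspending task sets with UUniFast and compares the acceptance ratios of two classic suspension-handling analyses for fixed-priority uniprocessor scheduling, a suspension-oblivious response-time test and a jitter-based suspension-aware test. All function names, the single-suspension-interval task model (C, S, T), implicit deadlines, and rate-monotonic priorities are assumptions made for this sketch.

```python
# Illustrative sketch only; the SSSEvaluation framework's real interfaces may differ.
import math
import random

def uunifast(n, total_util):
    """UUniFast: draw n task utilizations that sum to total_util."""
    utils, remaining = [], total_util
    for i in range(1, n):
        next_remaining = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

def synthesize(n, total_util, susp_factor=0.3):
    """Create n self-suspending tasks (C, S, T), sorted by period (RM priority order)."""
    tasks = []
    for u in uunifast(n, total_util):
        period = random.uniform(10, 1000)
        wcet = u * period
        susp = susp_factor * random.random() * (period - wcet)  # total suspension length
        tasks.append((wcet, susp, period))
    return sorted(tasks, key=lambda t: t[2])

def rta(tasks, k, hp_interference):
    """Fixed-point response-time analysis for task index k under RM priorities.
    hp_interference(hp_task, response) is the demand of one higher-priority task."""
    wcet, susp, period = tasks[k]
    response = wcet + susp
    while response <= period:
        demand = wcet + susp + sum(hp_interference(tasks[j], response) for j in range(k))
        if demand <= response:
            return True   # fixed point reached within the implicit deadline
        response = demand
    return False          # response time exceeds the period

def suspension_oblivious(hp_task, response):
    # Model each higher-priority suspension as extra execution time: ceil(R/T_j) * (C_j + S_j).
    wcet, susp, period = hp_task
    return math.ceil(response / period) * (wcet + susp)

def jitter_based(hp_task, response):
    # Model each higher-priority suspension as release jitter: ceil((R + S_j)/T_j) * C_j.
    wcet, susp, period = hp_task
    return math.ceil((response + susp) / period) * wcet

def schedulable(tasks, hp_interference):
    """A task set passes the test if every task's response time stays within its period."""
    return all(rta(tasks, k, hp_interference) for k in range(len(tasks)))

if __name__ == "__main__":
    random.seed(1)
    trials = 200
    for util in (0.5, 0.6, 0.7, 0.8):
        accepted = {"oblivious": 0, "jitter": 0}
        for _ in range(trials):
            tasks = synthesize(n=5, total_util=util)
            accepted["oblivious"] += schedulable(tasks, suspension_oblivious)
            accepted["jitter"] += schedulable(tasks, jitter_based)
        print(f"U={util:.1f}: acceptance ratio "
              f"oblivious={accepted['oblivious'] / trials:.2f} "
              f"jitter={accepted['jitter'] / trials:.2f}")
```

Plotting such acceptance ratios over total utilization for many tests and task-set configurations is exactly the kind of comparison the framework is meant to standardize; the jitter-based test never rejects a task set that the suspension-oblivious test accepts, so its curve dominates in this sketch.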