Work-in-Progress: Evaluation Framework for Self-Suspending Schedulability Tests

Mario Günzel, Harun Teper, Kuan-Hsun Chen, Georg von der Brüggen, Jian-Jia Chen
{"title":"Work-in-Progress: Evaluation Framework for Self-Suspending Schedulability Tests","authors":"Mario Gunzel, Harun Teper, Kuan-Hsun Chen, Georg von der Bruggen, Jian-Jia Chen","doi":"10.1109/rtss52674.2021.00058","DOIUrl":null,"url":null,"abstract":"Numerical simulations often play an important role when evaluating and comparing the performance of schedulability tests, as they allow to empirically demonstrate their applicability using synthesized task sets under various configurations. In order to provide a fair comparison of various schedulability tests, von der Brüggen et al. presented the first version of an evaluation framework for self-suspending task sets. In this work-in-progress, we further enhance the framework by providing more features to ease the use, e.g., Python 3 support, an improved GUI, multiprocessing, Gurobi optimization, and external task evaluation. In addition, we integrate the state-of-the-arts we are aware of into the framework. Moreover, the documentation is improved significantly to simplify the application in further research and development. To the best of our knowledge, the framework contains all suspension-aware schedulability tests for uniprocessor systems and we aim to keep it up-to-date.","PeriodicalId":102789,"journal":{"name":"2021 IEEE Real-Time Systems Symposium (RTSS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Real-Time Systems Symposium (RTSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/rtss52674.2021.00058","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Numerical simulations often play an important role when evaluating and comparing the performance of schedulability tests, as they make it possible to demonstrate their applicability empirically on synthesized task sets under various configurations. To enable a fair comparison of different schedulability tests, von der Brüggen et al. presented the first version of an evaluation framework for self-suspending task sets. In this work-in-progress, we further enhance the framework by adding features that ease its use, e.g., Python 3 support, an improved GUI, multiprocessing, Gurobi optimization, and external task evaluation. In addition, we integrate all state-of-the-art tests we are aware of into the framework. Moreover, the documentation is improved significantly to simplify its use in further research and development. To the best of our knowledge, the framework contains all suspension-aware schedulability tests for uniprocessor systems, and we aim to keep it up-to-date.
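Suspension-aware tests are commonly compared against the classic suspension-oblivious baseline, which simply treats each task's self-suspension time as additional execution time and then applies standard fixed-priority response-time analysis. As an illustration only (a minimal sketch, not code from the framework, with a hypothetical function name and implicit-deadline sporadic tasks assumed), such a baseline test might look like:

```python
import math

def rta_suspension_oblivious(tasks):
    """Suspension-oblivious fixed-priority response-time analysis.

    Each task is a tuple (C, S, T): worst-case execution time C,
    maximum self-suspension time S, and period T (implicit deadline
    D = T). The list is assumed sorted by decreasing priority.
    Suspension is over-approximated as computation, i.e., each task
    is analyzed as if its execution time were C + S.
    Returns True iff every task's response time stays within T.
    """
    for i, (C, S, T) in enumerate(tasks):
        R = C + S  # initial guess: the task's own inflated demand
        while True:
            # Interference from all higher-priority tasks, each
            # inflated by its own suspension time.
            interference = sum(
                math.ceil(R / Tj) * (Cj + Sj)
                for (Cj, Sj, Tj) in tasks[:i]
            )
            R_new = C + S + interference
            if R_new > T:
                return False  # deadline miss under this test
            if R_new == R:
                break  # fixed point reached
            R = R_new
    return True
```

For example, the set `[(1, 0, 4), (1, 1, 6), (2, 0, 12)]` is accepted (the lowest-priority task converges to a response time of 6 ≤ 12), whereas `[(2, 1, 4), (2, 0, 5)]` is rejected. Because suspension is counted as execution, the test is safe but pessimistic, which is exactly why suspension-aware tests such as those collected in the framework can dominate it.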