{"title":"Certificates and witnesses for multi-objective queries in Markov decision processes","authors":"Christel Baier, Calvin Chau, Sascha Klüppelholz","doi":"10.1016/j.peva.2025.102482","DOIUrl":null,"url":null,"abstract":"<div><div>Probabilistic model checking is a technique for formally verifying the correctness of probabilistic systems w.r.t. given specifications. Typically, a model checking procedure outputs whether a specification is satisfied or not, but does not provide additional insights on the correctness of the result, thereby diminishing the trustworthiness and understandability of the verification process. In this work, we consider certifying verification algorithms that also provide an independently checkable certificate and witness in addition to the verification result. The certificate can be used to easily validate the correctness of the result and the witness provides useful diagnostic information, e.g. for debugging purposes. More specifically, we study certificates and witnesses for specifications in the form of <em>multi-objective</em> queries in Markov decision processes. We first consider multi-objective reachability and invariant queries and then extend our techniques to mean-payoff expectation and mean-payoff percentile queries. Thereby, we generalize previous works on certificates and witnesses for single reachability and invariant constraints. In essence, we derive certifying verification algorithms from known linear programming techniques and show that witnesses, both in the form of schedulers and subsystems, can be obtained from the certificates. 
As a proof-of-concept, we report on an implementation of our certifying verification algorithms and present experimental results, demonstrating the applicability on moderately-sized case studies.</div></div>","PeriodicalId":19964,"journal":{"name":"Performance Evaluation","volume":"168 ","pages":"Article 102482"},"PeriodicalIF":1.0000,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Performance Evaluation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0166531625000161","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citation count: 0
Abstract
Probabilistic model checking is a technique for formally verifying the correctness of probabilistic systems with respect to given specifications. Typically, a model checking procedure outputs whether a specification is satisfied or not, but does not provide additional insight into the correctness of the result, thereby diminishing the trustworthiness and understandability of the verification process. In this work, we consider certifying verification algorithms that provide, in addition to the verification result, an independently checkable certificate and a witness. The certificate can be used to easily validate the correctness of the result, and the witness provides useful diagnostic information, e.g., for debugging purposes. More specifically, we study certificates and witnesses for specifications in the form of multi-objective queries in Markov decision processes. We first consider multi-objective reachability and invariant queries and then extend our techniques to mean-payoff expectation and mean-payoff percentile queries. In doing so, we generalize previous work on certificates and witnesses for single reachability and invariant constraints. In essence, we derive certifying verification algorithms from known linear programming techniques and show that witnesses, both in the form of schedulers and subsystems, can be obtained from the certificates. As a proof of concept, we report on an implementation of our certifying verification algorithms and present experimental results, demonstrating their applicability to moderately sized case studies.
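To make the idea of an independently checkable certificate concrete, the single-constraint case the abstract says is generalized can be sketched as follows. This is a toy Python checker for a lower bound on minimal reachability probability, not the paper's implementation: the MDP, state names, and the `check_certificate` helper are invented for illustration, and the multi-objective and mean-payoff queries treated in the paper require richer linear-programming certificates.

```python
# Hedged sketch: validating a certificate for Pr^min(init |= <>goal) >= threshold.
# Assumes the MDP has been preprocessed so that, under every scheduler, a run
# almost surely reaches `goal` or the absorbing `fail` state (the standard
# reduced form used for certificates of this kind).

EPS = 1e-9  # numerical tolerance

# Transitions: state -> action -> list of (successor, probability).
mdp = {
    "s0": {
        "a": [("goal", 0.5), ("fail", 0.5)],
        "b": [("goal", 0.9), ("fail", 0.1)],
    },
}
goal, fail = "goal", "fail"

def check_certificate(mdp, z, init, threshold):
    """Check that z is an inductive lower bound on minimal reachability.

    z must satisfy, for every non-goal state s and EVERY action a:
        z[s] <= sum_{s'} P(s, a, s') * z[s']
    together with z[goal] = 1, z[fail] = 0, and z[init] >= threshold.
    Any such z bounds the minimal reachability probability from below,
    so the query Pr^min(init |= <>goal) >= threshold is certified.
    """
    if abs(z[goal] - 1.0) > EPS or abs(z[fail]) > EPS:
        return False
    for s, actions in mdp.items():
        for succs in actions.values():
            expected = sum(p * z[t] for t, p in succs)
            if z[s] > expected + EPS:  # inequality violated for some action
                return False
    return z[init] >= threshold - EPS

# Worst-case action "a" reaches the goal with probability 0.5, so the
# vector below certifies Pr^min(s0 |= <>goal) >= 0.5.
z = {"s0": 0.5, goal: 1.0, fail: 0.0}
print(check_certificate(mdp, z, "s0", 0.5))                            # True
print(check_certificate(mdp, {"s0": 0.6, goal: 1.0, fail: 0.0},
                        "s0", 0.6))                                    # False
```

Note that checking the certificate only requires evaluating linear inequalities, which is far simpler than the verification itself; this asymmetry is what makes the result independently validatable.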
About the journal:
Performance Evaluation is a leading journal in the area of modeling, measurement, and evaluation of performance aspects of computing and communication systems. As such, it aims to present a balanced and complete view of the performance evaluation profession. The journal is therefore interested in papers that focus on one or more of the following dimensions:
-Define new performance evaluation tools, including measurement and monitoring tools as well as modeling and analytic techniques
-Provide new insights into the performance of computing and communication systems
-Introduce new application areas where performance evaluation tools can play an important role, and identify creative new uses for performance evaluation tools.
More specifically, common application areas of interest include the performance of:
-Resource allocation and control methods and algorithms (e.g. routing and flow control in networks, bandwidth allocation, processor scheduling, memory management)
-System architecture, design and implementation
-Cognitive radio
-VANETs
-Social networks and media
-Energy efficient ICT
-Energy harvesting
-Data centers
-Data-centric networks
-System reliability
-System tuning and capacity planning
-Wireless and sensor networks
-Autonomic and self-organizing systems
-Embedded systems
-Network science