Analysis of second-order Markov reward models
G. Horváth, Sándor Rácz, M. Telek
International Conference on Dependable Systems and Networks, 2004
Published: 2004-06-28
DOI: 10.1109/DSN.2004.1311955 (https://doi.org/10.1109/DSN.2004.1311955)
Citations: 10
Abstract
This paper considers the analysis of second-order Markov reward models. In these systems the reward accumulation during state sojourns is not deterministic but follows a Brownian motion with state-dependent drift and variance parameters. We give the differential equations that describe the density function and the moments of the accumulated reward, and show the similarities to the first-order (ordinary) case. A randomization-based numerical method is also presented that is numerically stable, has an error bound to control the precision, and allows the efficient analysis of large models. The computational cost of the proposed procedure is practically the same as that of the analysis of first-order reward models, while the modeling power of second-order models is clearly greater.
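To make the model concrete, here is a minimal Monte Carlo sketch (not the paper's randomization-based method, which is analytic) of how reward accumulates in such a system: a continuous-time Markov chain is simulated, and during each sojourn the reward increment is drawn as a Gaussian with the state's drift and variance scaled by the sojourn length, which is exact for Brownian motion over an interval. The generator matrix, drift and variance vectors, and function name below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate_accumulated_reward(Q, drift, var, t_end, rng):
    """One Monte Carlo sample of the accumulated reward of a
    second-order Markov reward model over [0, t_end].

    Q     -- CTMC generator matrix (rows sum to zero); illustrative input
    drift -- per-state drift of the reward's Brownian motion
    var   -- per-state variance parameter of the Brownian motion

    During a sojourn of length dt in state i, the reward increment is
    drawn from Normal(drift[i]*dt, var[i]*dt) -- exact for Brownian
    motion with those parameters over an interval of length dt.
    """
    n = len(Q)
    state, t, reward = 0, 0.0, 0.0
    while t < t_end:
        rate = -Q[state][state]
        remaining = t_end - t
        # Exponential sojourn time; an absorbing state (rate 0) stays put.
        sojourn = rng.expovariate(rate) if rate > 0.0 else remaining
        dt = min(sojourn, remaining)
        # Second-order accumulation: Gaussian increment, not rate * dt.
        reward += rng.gauss(drift[state] * dt, math.sqrt(var[state] * dt))
        t += dt
        if sojourn < remaining:
            # Jump: pick the next state proportional to off-diagonal rates.
            u = rng.random() * rate
            acc = 0.0
            for j in range(n):
                if j != state:
                    acc += Q[state][j]
                    if u <= acc:
                        state = j
                        break
    return reward
```

Setting every entry of `var` to zero recovers a first-order (ordinary) reward model, where the increment during a sojourn is exactly `drift[i] * dt`; this illustrates the abstract's point that the second-order model strictly generalizes the first-order one.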