Statistical Learning of Markov Chains of Programs
Emilio Incerto, Annalisa Napolitano, M. Tribastone
2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 17 November 2020
DOI: 10.1109/MASCOTS50786.2020.9285947 (https://doi.org/10.1109/MASCOTS50786.2020.9285947)
Markov chains are a useful model for the quantitative analysis of extra-functional properties of software systems such as performance, reliability, and energy consumption. However, building Markov models of software systems remains a difficult task. Here we present a statistical method that learns a Markov chain directly from a program, by means of execution runs with inputs sampled from given probability distributions. Our technique is based on learning algorithms for so-called variable-length Markov chains, which allow us to capture data dependencies along execution paths by encoding part of the program history into each state of the chain. Our domain-specific adaptation exploits structural information about the program through its control-flow graph. Using a prototype implementation, we show that this approach represents a significant improvement over state-of-the-art general-purpose learning algorithms, providing accurate models on a number of benchmark programs.
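To make the idea of a variable-length Markov chain concrete, the following is a minimal illustrative sketch, not the authors' algorithm: it estimates next-state probabilities from execution traces (sequences of control-flow-graph node identifiers), conditioning on variable-length suffixes of the history up to a fixed maximum order. The function names, the context-counting scheme, and the cap on context length are assumptions made for illustration only.

```python
# Hypothetical sketch: learning a variable-length Markov chain from
# program execution traces, where each trace is a sequence of
# control-flow-graph node identifiers. Not the paper's implementation.
from collections import defaultdict

def learn_vlmc(traces, max_order=3):
    """Estimate next-state distributions conditioned on variable-length
    suffixes (contexts) of the execution history, up to max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for i, state in enumerate(trace):
            # Count the transition under every context length up to max_order.
            for k in range(min(i, max_order) + 1):
                context = tuple(trace[i - k:i])
                counts[context][state] += 1
    # Normalize counts into conditional probability distributions.
    model = {}
    for context, successors in counts.items():
        total = sum(successors.values())
        model[context] = {s: c / total for s, c in successors.items()}
    return model

def predict(model, history, max_order=3):
    """Predict the next state using the longest known suffix of the history."""
    for k in range(min(len(history), max_order), -1, -1):
        context = tuple(history[len(history) - k:])
        if context in model:
            return model[context]
    return {}

# Example: traces of basic-block IDs collected from runs with sampled inputs.
traces = [["entry", "loop", "loop", "exit"],
          ["entry", "loop", "exit"]]
model = learn_vlmc(traces, max_order=2)
print(predict(model, ["entry", "loop"], max_order=2))  # {'loop': 0.5, 'exit': 0.5}
```

In this toy example, conditioning on the two-state context ("entry", "loop") distinguishes histories that a first-order chain would conflate, which is the kind of path-dependent behavior the paper's variable-length encoding is designed to capture.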