Title: A Reinforcement-Learning Style Algorithm for Black Box Automata
Authors: Itay Cohen, Roi Fogler, D. Peled
Venue: 2022 20th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE)
Publication date: 2022-10-13
DOI: 10.1109/MEMOCODE57689.2022.9954382 (https://doi.org/10.1109/MEMOCODE57689.2022.9954382)
Citations: 0
Abstract
The analysis of hardware and software systems is often applied to a model of a system rather than to the system itself, and obtaining a faithful model can be a complex task. For learning the regular (finite automaton) structure of a black box system, Angluin's $L^{*}$ algorithm and its successors employ membership and equivalence queries. The regular positive-negative inference (RPNI) family of algorithms uses a weaker capability: it learns from collected observations, with no control over the choice of inputs. We suggest and study an alternative approach to learning, based on calculating utility values, obtained as a discounted sum of rewards, in the style of reinforcement learning. The utility values are used to classify the observed input prefixes into different states, from which the learned automaton structure is then constructed. We show cases where this classification alone is not enough to separate the prefixes, and remedy the situation by exploring deeper than the current prefix: checking consistency between descendants of the current prefix that are reached by the same sequence of inputs. We show the connection of this algorithm with the RPNI algorithm and compare the two approaches experimentally.
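The abstract's core idea, assigning each observed prefix a utility (a discounted sum of rewards), grouping prefixes with close utilities into candidate states, and then separating wrongly merged prefixes with a deeper consistency check on their descendants, can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the function names, the discount factor `GAMMA`, the tolerance `EPSILON`, and the toy traces are all assumptions made for the example.

```python
GAMMA = 0.5      # discount factor (illustrative choice)
EPSILON = 0.05   # tolerance for merging prefixes into one state (illustrative)

def utility(prefix, traces, gamma=GAMMA):
    """Discounted sum of rewards observed on proper extensions of `prefix`.

    `traces` maps an input word (a tuple of symbols) to its observed
    reward, e.g. 1 if the black box accepted the word and 0 otherwise.
    The average over matching extensions keeps utilities comparable
    between prefixes with different numbers of observations.
    """
    total, count = 0.0, 0
    for word, reward in traces.items():
        if word[:len(prefix)] == prefix and len(word) > len(prefix):
            total += (gamma ** (len(word) - len(prefix))) * reward
            count += 1
    return total / count if count else 0.0

def classify(prefixes, traces, eps=EPSILON):
    """Group prefixes whose utilities lie within `eps` of a representative.

    Returns a list of (representative_utility, member_prefixes) pairs,
    each pair standing for one candidate state of the learned automaton.
    """
    states = []
    for p in sorted(prefixes, key=len):
        u = utility(p, traces)
        for rep_u, members in states:
            if abs(u - rep_u) <= eps:
                members.append(p)
                break
        else:
            states.append((u, [p]))
    return states

def consistent(p, q, traces):
    """Deeper check: two prefixes may represent the same state only if
    every pair of descendants reached by the same input suffix carries
    the same observed reward."""
    for word, reward in traces.items():
        if word[:len(p)] == p:
            other = q + word[len(p):]
            if other in traces and traces[other] != reward:
                return False
    return True

# Toy observations over the alphabet {a, b}: reward 1 = accepted.
traces = {
    ('a',): 1, ('b',): 0,
    ('a', 'a'): 1, ('a', 'b'): 0,
    ('b', 'a'): 0, ('b', 'b'): 1,
}
```

On these traces, `('a',)` and `('b',)` receive the same utility (0.25 with the parameters above) and are therefore merged by `classify`, even though they clearly behave differently; `consistent(('a',), ('b',), traces)` detects the mismatch on their descendants and returns `False`. This mirrors the situation described in the abstract, where utility values alone are insufficient and descendant consistency is used to separate the prefixes.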