{"title":"Model-free guiding of Boolean control networks: Reinforcement learning and adversarial optimization","authors":"Shenglin Zhang, Yan Wang, Xiang Liu, Zhicheng Ji","doi":"10.1016/j.ins.2025.122576","DOIUrl":null,"url":null,"abstract":"<div><div>Guiding Boolean control networks (BCNs) toward desired states via control strategies is essential in practical applications. However, the model-driven paradigm faces limitations in adaptability and flexibility due to system complexity and uncertainty. This paper introduces a novel framework based on generative adversarial networks (GANs) that combines reinforcement learning (RL) and Markov decision process (FBCN_RM) to formulate control strategies over time series. Introducing maximum likelihood estimation (MLE) to handle incomplete state sequences and employing Policy Gradient (PG) for reward assessment to estimate the potential maximum reward between complex conditions and agents. During the adversarial process, control strategy is generated by GANs with state nodes inferred from model-based approaches. Furthermore, a novel interpretable prior knowledge is introduced to achieve higher accuracy and generalization in building near-truest strategy. Finally, the effectiveness of our approach is validated through two examples.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"721 ","pages":"Article 122576"},"PeriodicalIF":6.8000,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0020025525007091","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
Guiding Boolean control networks (BCNs) toward desired states via control strategies is essential in practical applications. However, the model-driven paradigm is limited in adaptability and flexibility by system complexity and uncertainty. This paper introduces a novel framework (FBCN_RM) based on generative adversarial networks (GANs) that combines reinforcement learning (RL) with a Markov decision process to formulate control strategies over time series. Maximum likelihood estimation (MLE) is introduced to handle incomplete state sequences, and Policy Gradient (PG) is employed for reward assessment, estimating the potential maximum reward between complex conditions and agents. During the adversarial process, the control strategy is generated by GANs, with state nodes inferred from model-based approaches. Furthermore, novel interpretable prior knowledge is introduced to achieve higher accuracy and generalization in constructing a strategy close to the true one. Finally, the effectiveness of the approach is validated through two examples.
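The abstract describes steering a BCN to a desired state by learning a control policy rather than deriving it from an explicit model. As a rough illustration of that idea only, the sketch below steers a toy 3-node BCN toward a target state with a REINFORCE-style policy gradient; the network update rules, target state, reward, and hyperparameters are all assumptions for illustration, not the paper's FBCN_RM framework (the GAN and MLE components are omitted).

```python
# Hypothetical sketch: model-free steering of a toy Boolean control network
# (BCN) via a REINFORCE-style policy gradient. Everything below (the 3-node
# update rules, target state, reward, hyperparameters) is illustrative and
# NOT the paper's FBCN_RM method.
import numpy as np

rng = np.random.default_rng(0)

def bcn_step(x, u):
    """One transition of a toy 3-node BCN with a 1-bit control input u."""
    x1, x2, x3 = x
    return (x2 & u, x1 | x3, (not x1) & x2)  # illustrative update rules

TARGET = (0, 1, 0)          # assumed desired state
N_STATES = 8                # 2^3 Boolean states
theta = np.zeros(N_STATES)  # per-state logit for P(u = 1 | state)

def idx(x):
    return x[0] * 4 + x[1] * 2 + x[2]

def policy(x):
    p = 1.0 / (1.0 + np.exp(-theta[idx(x)]))  # sigmoid over the state logit
    return int(rng.random() < p), p

def rollout(horizon=10):
    """Roll the BCN forward under the current policy; reward 1 on reaching TARGET."""
    x = tuple(int(v) for v in rng.integers(0, 2, size=3))
    traj, reward = [], 0.0
    for _ in range(horizon):
        u, p = policy(x)
        traj.append((idx(x), u, p))
        x = tuple(int(v) for v in bcn_step(x, u))
        if x == TARGET:
            reward = 1.0
            break
    return traj, reward

# REINFORCE update: theta_s += lr * R * d/dtheta_s log Bernoulli(u; p) = lr * R * (u - p)
lr = 0.5
for episode in range(2000):
    traj, R = rollout()
    for s, u, p in traj:
        theta[s] += lr * R * (u - p)

success = sum(rollout()[1] for _ in range(200)) / 200
print(f"empirical success rate after training: {success:.2f}")
```

The per-state Bernoulli policy keeps the example tabular and transparent; the paper's model-free setting would replace the known `bcn_step` with observed transitions and, per the abstract, generate the strategy adversarially via GANs.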
Journal Introduction
Information Sciences (Informatics and Computer Science, Intelligent Systems, Applications) is an esteemed international journal that focuses on publishing original and creative research findings in the field of information sciences. We also feature a limited number of timely tutorial and surveying contributions.
Our journal aims to cater to a diverse audience, including researchers, developers, managers, strategic planners, graduate students, and anyone interested in staying up-to-date with cutting-edge research in information science, knowledge engineering, and intelligent systems. While readers are expected to share a common interest in information science, they come from varying backgrounds such as engineering, mathematics, statistics, physics, computer science, cell biology, molecular biology, management science, cognitive science, neurobiology, behavioral sciences, and biochemistry.