Reinforcement Learning Data-Acquiring for Causal Inference of Regulatory Networks.

Mohammad Alali, Mahdi Imani
{"title":"监管网络因果推断的强化学习数据获取。","authors":"Mohammad Alali,&nbsp;Mahdi Imani","doi":"10.23919/acc55779.2023.10155867","DOIUrl":null,"url":null,"abstract":"<p><p>Gene regulatory networks (GRNs) consist of multiple interacting genes whose activities govern various cellular processes. The limitations in genomics data and the complexity of the interactions between components often pose huge uncertainties in the models of these biological systems. Meanwhile, inferring/estimating the interactions between components of the GRNs using data acquired from the normal condition of these biological systems is a challenging or, in some cases, an impossible task. Perturbation is a well-known genomics approach that aims to excite targeted components to gather useful data from these systems. This paper models GRNs using the Boolean network with perturbation, where the network uncertainty appears in terms of unknown interactions between genes. Unlike the existing heuristics and greedy data-acquiring methods, this paper provides an optimal Bayesian formulation of the data-acquiring process in the reinforcement learning context, where the actions are perturbations, and the reward measures step-wise improvement in the inference accuracy. We develop a semi-gradient reinforcement learning method with function approximation for learning near-optimal data-acquiring policy. The obtained policy yields near-exact Bayesian optimality with respect to the entire uncertainty in the regulatory network model, and allows learning the policy offline through planning. We demonstrate the performance of the proposed framework using the well-known p53-Mdm2 negative feedback loop gene regulatory network.</p>","PeriodicalId":74510,"journal":{"name":"Proceedings of the ... American Control Conference. American Control Conference","volume":"2023 ","pages":"3957-3964"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382224/pdf/nihms-1914206.pdf","citationCount":"2","resultStr":"{\"title\":\"Reinforcement Learning Data-Acquiring for Causal Inference of Regulatory Networks.\",\"authors\":\"Mohammad Alali,&nbsp;Mahdi Imani\",\"doi\":\"10.23919/acc55779.2023.10155867\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Gene regulatory networks (GRNs) consist of multiple interacting genes whose activities govern various cellular processes. The limitations in genomics data and the complexity of the interactions between components often pose huge uncertainties in the models of these biological systems. Meanwhile, inferring/estimating the interactions between components of the GRNs using data acquired from the normal condition of these biological systems is a challenging or, in some cases, an impossible task. Perturbation is a well-known genomics approach that aims to excite targeted components to gather useful data from these systems. This paper models GRNs using the Boolean network with perturbation, where the network uncertainty appears in terms of unknown interactions between genes. Unlike the existing heuristics and greedy data-acquiring methods, this paper provides an optimal Bayesian formulation of the data-acquiring process in the reinforcement learning context, where the actions are perturbations, and the reward measures step-wise improvement in the inference accuracy. 
We develop a semi-gradient reinforcement learning method with function approximation for learning near-optimal data-acquiring policy. The obtained policy yields near-exact Bayesian optimality with respect to the entire uncertainty in the regulatory network model, and allows learning the policy offline through planning. We demonstrate the performance of the proposed framework using the well-known p53-Mdm2 negative feedback loop gene regulatory network.</p>\",\"PeriodicalId\":74510,\"journal\":{\"name\":\"Proceedings of the ... American Control Conference. American Control Conference\",\"volume\":\"2023 \",\"pages\":\"3957-3964\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382224/pdf/nihms-1914206.pdf\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... American Control Conference. American Control Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/acc55779.2023.10155867\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/7/3 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... American Control Conference. American Control Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/acc55779.2023.10155867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/7/3 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 2

Abstract


Gene regulatory networks (GRNs) consist of multiple interacting genes whose activities govern various cellular processes. Limitations in genomics data and the complexity of the interactions between components often introduce large uncertainties into models of these biological systems. Moreover, inferring the interactions between components of a GRN from data acquired under the normal condition of the system is challenging and, in some cases, impossible. Perturbation is a well-known genomics approach that excites targeted components in order to gather informative data from these systems. This paper models GRNs using the Boolean network with perturbation, where the network uncertainty appears as unknown interactions between genes. Unlike existing heuristic and greedy data-acquiring methods, this paper provides an optimal Bayesian formulation of the data-acquiring process in the reinforcement learning context, where the actions are perturbations and the reward measures the step-wise improvement in inference accuracy. We develop a semi-gradient reinforcement learning method with function approximation for learning a near-optimal data-acquiring policy. The obtained policy yields near-exact Bayesian optimality with respect to the entire uncertainty in the regulatory network model, and allows the policy to be learned offline through planning. We demonstrate the performance of the proposed framework on the well-known p53-Mdm2 negative-feedback-loop gene regulatory network.
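
To make the setup concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the two ingredients the abstract describes: a Boolean network with perturbation (BNp) whose gene-gene interaction rules are uncertain, tracked as a Bayesian posterior over a small set of candidate models, and semi-gradient temporal-difference learning with linear function approximation for choosing which gene to perturb. The candidate-model representation, the feature map, and the entropy-reduction reward are all assumptions standing in for the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GENES = 3        # small network, on the scale of the p53-Mdm2 loop
P_FLIP = 0.01      # per-gene random perturbation probability in the BNp model
N_MODELS = 4       # candidate interaction models (the uncertainty class)

# Each candidate model is a Boolean update rule, stored as a truth table
# mapping each of the 2^N_GENES states to a next state.
models = [rng.integers(0, 2, size=(2 ** N_GENES, N_GENES)) for _ in range(N_MODELS)]
belief = np.full(N_MODELS, 1.0 / N_MODELS)  # posterior over candidate models
true_model = models[0]                      # hidden ground truth for simulation


def state_index(x):
    return int("".join(str(int(b)) for b in x), 2)


def bnp_step(model, x, action):
    """One BNp transition: rule update, rare random flips, forced perturbation."""
    nxt = model[state_index(x)].copy()
    nxt = np.where(rng.random(N_GENES) < P_FLIP, 1 - nxt, nxt)
    nxt[action] = 1 - nxt[action]  # the chosen perturbation flips one gene
    return nxt


def update_belief(belief, x_prev, action, x_next):
    """Bayesian update of the posterior over models from one observed transition."""
    likes = np.empty(N_MODELS)
    for m, model in enumerate(models):
        pred = model[state_index(x_prev)].copy()
        pred[action] = 1 - pred[action]
        agree = pred == x_next
        # Each gene matches its deterministic prediction unless a rare flip occurred.
        likes[m] = np.prod(np.where(agree, 1 - P_FLIP, P_FLIP))
    post = belief * likes
    return post / post.sum()


def features(belief, action):
    """Illustrative linear feature map: the belief vector, tiled per action."""
    phi = np.zeros(N_MODELS * N_GENES)
    phi[action * N_MODELS:(action + 1) * N_MODELS] = belief
    return phi


def entropy(b):
    return -np.sum(b * np.log(b + 1e-12))


w = np.zeros(N_MODELS * N_GENES)  # weights of the linear value approximation
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

x = rng.integers(0, 2, size=N_GENES)
for t in range(2000):
    # Epsilon-greedy over Q(belief, a) = w . phi(belief, a).
    if rng.random() < EPS:
        a = int(rng.integers(N_GENES))
    else:
        a = int(np.argmax([w @ features(belief, ai) for ai in range(N_GENES)]))
    x_next = bnp_step(true_model, x, a)
    new_belief = update_belief(belief, x, a, x_next)
    # Reward proxy: step-wise reduction in posterior entropy, standing in for
    # the paper's step-wise improvement in inference accuracy.
    r = entropy(belief) - entropy(new_belief)
    # Semi-gradient TD update: the bootstrapped target is treated as a constant.
    q_next = max(w @ features(new_belief, ai) for ai in range(N_GENES))
    td_error = r + GAMMA * q_next - w @ features(belief, a)
    w += ALPHA * td_error * features(belief, a)
    belief, x = new_belief, x_next

print("posterior over candidate models:", np.round(belief, 3))
```

The entropy-reduction reward is one concrete proxy for "step-wise improvement in inference accuracy," and "semi-gradient" refers to the TD update above, in which the bootstrapped target is not differentiated with respect to the weights.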
