Integrating inverse reinforcement learning into data-driven mechanistic computational models: a novel paradigm to decode cancer cell heterogeneity.

Impact Factor: 2.3
Frontiers in Systems Biology · Pub Date: 2024-03-08 · eCollection Date: 2024-01-01 · DOI: 10.3389/fsysb.2024.1333760
Authors: Patrick C Kinnunen, Kenneth K Y Ho, Siddhartha Srivastava, Chengyang Huang, Wanggang Shen, Krishna Garikipati, Gary D Luker, Nikola Banovic, Xun Huan, Jennifer J Linderman, Kathryn E Luker
Volume 4, article 1333760 · Publication type: Journal Article (Perspective) · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12342033/pdf/
Citations: 0

Abstract

Cellular heterogeneity is a ubiquitous aspect of biology and a major obstacle to successful cancer treatment. Several techniques have emerged to quantify heterogeneity in live cells along axes including cellular migration, morphology, growth, and signaling. Crucially, these studies reveal that cellular heterogeneity is not a result of randomness or a failure in cellular control systems, but instead is a predictable aspect of multicellular systems. We hypothesize that individual cells in complex tissues can behave as reward-maximizing agents and that differences in reward perception can explain heterogeneity. In this perspective, we introduce inverse reinforcement learning as a novel approach for analyzing cellular heterogeneity. We briefly detail experimental approaches for measuring cellular heterogeneity over time and how these experiments can generate datasets consisting of cellular states and actions. Next, we show how inverse reinforcement learning can be applied to these datasets to infer how individual cells choose different actions based on heterogeneous states. Finally, we introduce potential applications of inverse reinforcement learning to three cell biology problems. Overall, we expect inverse reinforcement learning to reveal why cells behave heterogeneously and enable identification of novel treatments based on this new understanding.
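The pipeline the abstract outlines (observe cellular states and actions, then infer the reward function each cell appears to maximize) can be sketched in a minimal toy example. The sketch below assumes myopic, Boltzmann-rational agents, under which inverse reinforcement learning reduces to maximum-likelihood fitting of a softmax choice model; the two state features (ligand level, crowding), the three actions (stay, migrate, divide), and all weight values are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem: each cell observes a 2-feature state
# (local ligand level, local crowding) and picks one of 3 actions.
n_actions, n_feats = 3, 2
true_w = np.array([[ 1.0, -1.0],   # stay:    rewarded by ligand, penalized by crowding
                   [-1.0,  2.0],   # migrate: rewarded for fleeing crowded regions
                   [ 2.0,  0.5]])  # divide

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Generate a state-action dataset from a Boltzmann-rational policy:
# p(a | s) ∝ exp(reward(s, a)), with reward(s, a) = w_a · s.
states = rng.normal(size=(2000, n_feats))
probs = softmax(states @ true_w.T)
actions = np.array([rng.choice(n_actions, p=p) for p in probs])

# Under the myopic Boltzmann-rationality assumption, IRL is
# maximum-likelihood estimation of the softmax choice model; the
# log-likelihood is concave, so plain gradient ascent suffices.
w = np.zeros((n_actions, n_feats))
for _ in range(2000):
    p = softmax(states @ w.T)            # (N, A) policy implied by current w
    onehot = np.eye(n_actions)[actions]  # (N, A) observed actions
    w += 2.0 * (onehot - p).T @ states / len(states)

# Rewards are identifiable only up to a per-state constant, so compare
# action-centered weights.
w_c = w - w.mean(axis=0)
t_c = true_w - true_w.mean(axis=0)
print("recovered:", np.round(w_c, 2))
print("true:     ", np.round(t_c, 2))
```

With enough observed state-action pairs, the recovered (centered) weights approach the true ones, which is the sense in which heterogeneous behavior can be traced back to differences in inferred reward. A full sequential treatment (e.g., maximum-entropy IRL over trajectories) replaces the myopic choice model with a soft value iteration, but the fitting loop has the same structure.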
