Improving drug discovery with a hybrid deep generative model using reinforcement learning trained on a Bayesian docking approximation

Youjin Xiong, Yiqing Wang, Yisheng Wang, Chenmei Li, Peng Yusong, Junyu Wu, Yiqing Wang, Lingyun Gu, Christopher J. Butch
{"title":"Improving drug discovery with a hybrid deep generative model using reinforcement learning trained on a Bayesian docking approximation","authors":"Youjin Xiong,&nbsp;Yiqing Wang,&nbsp;Yisheng Wang,&nbsp;Chenmei Li,&nbsp;Peng Yusong,&nbsp;Junyu Wu,&nbsp;Yiqing Wang,&nbsp;Lingyun Gu,&nbsp;Christopher J. Butch","doi":"10.1007/s10822-023-00523-3","DOIUrl":null,"url":null,"abstract":"<div><p>Generative approaches to molecular design are an area of intense study in recent years as a method to generate new pharmaceuticals with desired properties. Often though, these types of efforts are constrained by limited experimental activity data, resulting in either models that generate molecules with poor performance or models that are overfit and produce close analogs of known molecules. In this paper, we reduce this data dependency for the generation of new chemotypes by incorporating docking scores of known and de novo molecules to expand the applicability domain of the reward function and diversify the compounds generated during reinforcement learning. Our approach employs a deep generative model initially trained using a combination of limited known drug activity and an approximate docking score provided by a second machine learned Bayes regression model, with final evaluation of high scoring compounds by a full docking simulation. This strategy results in molecules with docking scores improved by 10–20% compared to molecules of similar size, while being 130 × faster than a docking only approach on a typical GPU workstation. We also show that the increased docking scores correlate with (1) docking poses with interactions similar to known inhibitors and (2) result in higher MM-GBSA binding energies comparable to the energies of known DDR1 inhibitors, demonstrating that the Bayesian model contains sufficient information for the network to learn to efficiently interact with the binding pocket during reinforcement learning. This outcome shows that the combination of the learned latent molecular representation along with the feature-based docking regression is sufficient for reinforcement learning to infer the relationship between the molecules and the receptor binding site, which suggest that our method can be a powerful tool for the discovery of new chemotypes with potential therapeutic applications.</p></div>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10822-023-00523-3.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"99","ListUrlMain":"https://link.springer.com/article/10.1007/s10822-023-00523-3","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}

Abstract

Generative approaches to molecular design have been an area of intense study in recent years as a method of generating new pharmaceuticals with desired properties. Often, though, such efforts are constrained by limited experimental activity data, resulting in models that either generate molecules with poor performance or are overfit and produce close analogs of known molecules. In this paper, we reduce this data dependency for the generation of new chemotypes by incorporating docking scores of known and de novo molecules to expand the applicability domain of the reward function and diversify the compounds generated during reinforcement learning. Our approach employs a deep generative model initially trained using a combination of limited known drug activity and an approximate docking score provided by a second, machine-learned Bayesian regression model, with final evaluation of high-scoring compounds by a full docking simulation. This strategy yields molecules whose docking scores are improved by 10–20% compared to molecules of similar size, while being 130× faster than a docking-only approach on a typical GPU workstation. We also show that the increased docking scores correlate with (1) docking poses whose interactions resemble those of known inhibitors and (2) higher MM-GBSA binding energies comparable to those of known DDR1 inhibitors, demonstrating that the Bayesian model contains sufficient information for the network to learn to interact efficiently with the binding pocket during reinforcement learning. This outcome shows that the combination of the learned latent molecular representation and the feature-based docking regression is sufficient for reinforcement learning to infer the relationship between the molecules and the receptor binding site, which suggests that our method can be a powerful tool for the discovery of new chemotypes with potential therapeutic applications.
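The surrogate-reward idea described above can be illustrated with a minimal sketch (not the authors' code): a Bayesian regression is fit on docking scores of fingerprinted molecules and then used as a fast, approximate reward signal during reinforcement learning, with only high-scoring candidates passed on to full docking. The fingerprint choice, reward weights, and helper names below are illustrative assumptions, using RDKit and scikit-learn.

```python
# Minimal sketch of a Bayesian docking-score surrogate used as an RL reward.
# Assumptions (not from the paper): Morgan fingerprints as features, BayesianRidge
# as the regressor, and a simple weighted sum for the reward.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import BayesianRidge


def featurize(smiles: str, n_bits: int = 2048) -> np.ndarray:
    """Morgan fingerprint as a dense bit vector (one possible feature choice)."""
    mol = Chem.MolFromSmiles(smiles)  # assumes a valid SMILES string
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(list(fp), dtype=float)


def train_docking_surrogate(smiles_list, docking_scores) -> BayesianRidge:
    """Fit the Bayesian regression that stands in for full docking."""
    X = np.stack([featurize(s) for s in smiles_list])
    model = BayesianRidge()
    model.fit(X, np.asarray(docking_scores, dtype=float))
    return model


def rl_reward(smiles: str, surrogate: BayesianRidge,
              activity_score: float = 0.0,
              w_dock: float = 1.0, w_act: float = 1.0) -> float:
    """Reward combining a known-activity signal with the predicted docking score.

    Docking scores are more negative for better binders, so the prediction is
    negated before being added to the reward (a sign convention assumed here).
    """
    if Chem.MolFromSmiles(smiles) is None:
        return 0.0  # invalid molecules earn no reward
    pred_dock = surrogate.predict(featurize(smiles).reshape(1, -1))[0]
    return w_act * activity_score - w_dock * pred_dock
```

In the workflow the abstract describes, molecules that score well against a surrogate of this kind during reinforcement learning are subsequently re-scored with a full docking simulation (and MM-GBSA), so the surrogate only needs to be accurate enough to steer generation toward the binding pocket.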

