Transferability of MACE Graph Neural Network for Range Corrected Δ-Machine Learning Potential QM/MM Applications.

IF 2.8 · CAS Zone 2 (Chemistry) · JCR Q3 · CHEMISTRY, PHYSICAL
Timothy J Giese, Jinzhe Zeng, Darrin M York
DOI: 10.1021/acs.jpcb.5c02006
Journal: The Journal of Physical Chemistry B
Published: 2025-05-26 (Journal Article)
Citations: 0

Abstract


We previously introduced a "range corrected" Δ-machine learning potential (ΔMLP) that used deep neural networks to improve the accuracy of combined quantum mechanical/molecular mechanical (QM/MM) simulations by correcting both the internal QM and QM/MM interaction energies and forces [J. Chem. Theory Comput. 2021, 17, 6993-7009]. The present work extends this approach to include graph neural networks. Specifically, the approach is applied to the MACE message passing neural network architecture, and a series of AM1/d + MACE models are trained to reproduce PBE0/6-31G* QM/MM energies and forces of model phosphoryl transesterification reactions. Several models are designed to test the transferability of AM1/d + MACE by varying the amount of training data and calculating free energy surfaces of reactions that were not included in the parameter refinement. The transferability is compared to AM1/d + DP models that use the DeepPot-SE (DP) deep neural network architecture. The AM1/d + MACE models are found to reproduce the target free energy surfaces even in instances where the AM1/d + DP models exhibit inaccuracies. We train "end-state" models that include data only from the reactant and product states of the 6 reactions. Unlike the uncorrected AM1/d profiles, the AM1/d + MACE method correctly reproduces a stable pentacoordinated phosphorus intermediate even though the training did not include structures with a similar bonding pattern. Furthermore, the message passing mechanism hyperparameters defining the MACE network are varied to explore their effect on the model's accuracy and performance. The AM1/d + MACE simulations are 28% slower than AM1/d QM/MM when the ΔMLP correction is performed on a graphics processing unit. Our results suggest that the MACE architecture may lead to ΔMLP models with improved transferability.
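The Δ-learning scheme the abstract describes evaluates a cheap low-level QM/MM Hamiltonian (AM1/d) and adds a machine-learned correction (MACE or DeepPot-SE) trained to reproduce high-level (PBE0/6-31G*) energies and forces. A minimal conceptual sketch of that combination is below; all function names and the toy 1D potentials are hypothetical illustrations, not the authors' actual code or API.

```python
# Conceptual sketch of a Delta-machine learning potential (ΔMLP) evaluation:
# total = low-level QM/MM result + ML-predicted correction toward the
# high-level target. Function names and toy potentials are hypothetical.

def delta_mlp_energy_and_forces(coords, low_level, correction):
    """Combine a fast low-level energy/force evaluation (e.g. AM1/d QM/MM)
    with an ML correction (e.g. a MACE model trained on high-level minus
    low-level differences) to approximate high-level energies and forces."""
    e_low, f_low = low_level(coords)      # cheap semiempirical QM/MM
    e_corr, f_corr = correction(coords)   # GNN-predicted Δ(energy), Δ(forces)
    return e_low + e_corr, [fl + fc for fl, fc in zip(f_low, f_corr)]

# Toy stand-ins so the sketch runs: a harmonic "low level" and a small
# anharmonic correction, each returning (energy, force per coordinate).
def toy_low_level(x):
    return sum(0.5 * xi**2 for xi in x), [-xi for xi in x]

def toy_correction(x):
    return sum(0.1 * xi**3 for xi in x), [-0.3 * xi**2 for xi in x]

e, f = delta_mlp_energy_and_forces([1.0, 2.0], toy_low_level, toy_correction)
```

Because the correction is additive in both energy and forces, the ΔMLP can be bolted onto an existing QM/MM molecular dynamics loop without modifying the low-level code, which is how the reported ~28% overhead on a GPU arises.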

Source journal metrics: CiteScore 5.80 · Self-citation rate 9.10% · Annual publications 965 · Review time 1.6 months
Journal description: An essential criterion for acceptance of research articles in the journal is that they provide new physical insight. Please refer to the New Physical Insights virtual issue on what constitutes new physical insight. Manuscripts that are essentially reporting data or applications of data are, in general, not suitable for publication in JPC B.