Title: Transferability of MACE Graph Neural Network for Range Corrected Δ-Machine Learning Potential QM/MM Applications
Authors: Timothy J. Giese, Jinzhe Zeng, Darrin M. York
Journal: The Journal of Physical Chemistry B (Q3, Chemistry, Physical; IF 2.8)
DOI: 10.1021/acs.jpcb.5c02006
Publication date: 2025-05-26
Publication type: Journal Article
Citations: 0
Abstract
We previously introduced a "range corrected" Δ-machine learning potential (ΔMLP) that used deep neural networks to improve the accuracy of combined quantum mechanical/molecular mechanical (QM/MM) simulations by correcting both the internal QM and QM/MM interaction energies and forces [J. Chem. Theory Comput. 2021, 17, 6993-7009]. The present work extends this approach to graph neural networks. Specifically, the approach is applied to the MACE message-passing neural network architecture, and a series of AM1/d + MACE models are trained to reproduce PBE0/6-31G* QM/MM energies and forces of model phosphoryl transesterification reactions. Several models are designed to test the transferability of AM1/d + MACE by varying the amount of training data and calculating free energy surfaces of reactions that were not included in the parameter refinement. The transferability is compared to that of AM1/d + DP models that use the DeepPot-SE (DP) deep neural network architecture. The AM1/d + MACE models are found to reproduce the target free energy surfaces even in instances where the AM1/d + DP models exhibit inaccuracies. We train "end-state" models that include data only from the reactant and product states of the six reactions. Unlike the uncorrected AM1/d profiles, the AM1/d + MACE method correctly reproduces a stable pentacoordinated phosphorus intermediate even though the training did not include structures with a similar bonding pattern. Furthermore, the message-passing hyperparameters defining the MACE network are varied to explore their effect on the model's accuracy and performance. The AM1/d + MACE simulations are 28% slower than AM1/d QM/MM when the ΔMLP correction is performed on a graphics processing unit. Our results suggest that the MACE architecture may lead to ΔMLP models with improved transferability.
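The Δ-learning scheme described above can be sketched conceptually: a cheap low-level QM/MM energy and force are combined with an ML-predicted correction trained on the difference to a high-level target (here, PBE0/6-31G*). The sketch below is purely illustrative; the function names and the toy harmonic "correction" stand in for the trained MACE or DeepPot-SE network and are not the authors' actual API.

```python
import numpy as np

def delta_mlp(e_low, f_low, ml_correction, coords):
    """Combine low-level (e.g., AM1/d) energy/forces with an ML Delta correction.

    The total is E = E_low + dE_ML and F = F_low + dF_ML, following the
    general Delta-MLP idea in the abstract (illustrative sketch only).
    """
    de, df = ml_correction(coords)   # ML-predicted energy/force corrections
    return e_low + de, f_low + df

# Toy stand-in for a trained network: a harmonic bias toward a reference
# geometry, so the correction has a consistent energy/force pair (F = -dE/dx).
ref = np.zeros((3, 3))

def toy_correction(coords, k=0.5):
    disp = coords - ref
    de = 0.5 * k * np.sum(disp ** 2)
    df = -k * disp
    return de, df

coords = np.ones((3, 3))
e, f = delta_mlp(e_low=-10.0, f_low=np.zeros((3, 3)),
                 ml_correction=toy_correction, coords=coords)
```

In the actual method, the correction network also sees nearby MM atoms within a cutoff (the "range corrected" part), so both the internal QM and the QM/MM interaction terms are improved.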
Journal overview:
An essential criterion for acceptance of research articles in the journal is that they provide new physical insight. Please refer to the New Physical Insights virtual issue on what constitutes new physical insight. Manuscripts that are essentially reporting data or applications of data are, in general, not suitable for publication in JPC B.