Transferability of datasets between Machine-Learning Interatomic Potentials
Samuel P. Niblett, Panagiotis Kourtis, Ioan-Bogdan Magdău, Clare P. Grey, Gábor Csányi
arXiv - PHYS - Chemical Physics, published 2024-09-09
DOI: arxiv-2409.05590 (https://doi.org/arxiv-2409.05590)
Citations: 0
Abstract
With the emergence of Foundational Machine Learning Interatomic Potential (FMLIP) models trained on extensive datasets, transferring data between different ML architectures has become increasingly important. In this work, we examine the extent to which training data optimised for one machine-learning forcefield algorithm may be re-used to train different models, aiming to accelerate FMLIP fine-tuning and to reduce the need for costly iterative training. As a test case, we train models of an organic liquid mixture that is commonly used as a solvent in rechargeable battery electrolytes, making it an important target for reactive MLIP development. We assess model performance by analysing the properties of molecular dynamics trajectories, showing that this is a more stringent test than comparing prediction errors for fixed datasets. We consider several types of training data and several popular MLIPs, notably the recent MACE architecture, a message-passing neural network designed for high efficiency and smoothness. We demonstrate that simple training sets constructed without any ab initio dynamics are sufficient to produce stable models of molecular liquids. For simple neural-network architectures, further iterative training is required to capture thermodynamic and kinetic properties correctly, but MACE performs well with extremely limited datasets. We find that configurations designed by human intuition to correct systematic model deficiencies transfer effectively between algorithms, but active-learned data generated by one MLIP do not typically benefit a different algorithm. Finally, we show that any training data which improve a model's performance also improve its ability to generalise to similar unseen molecules. This suggests that trajectory failure modes are connected with chemical structure rather than being entirely system-specific.
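
As a rough illustration of the trajectory-based evaluation the abstract describes, the sketch below runs a short Langevin MD simulation with a MACE model as the calculator and monitors the potential energy along the trajectory. It assumes the `ase` and `mace-torch` packages; the file names (`solvent_box.xyz`, `mace_solvent.model`) and simulation settings are hypothetical placeholders, not the authors' actual workflow or data.

```python
# Minimal sketch: probe an MLIP by running dynamics rather than only
# scoring energy/force errors on a fixed test set.
from ase import units
from ase.io import read
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from mace.calculators import MACECalculator

# Hypothetical inputs: a periodic box of the liquid solvent mixture and a
# locally trained MACE model file.
atoms = read("solvent_box.xyz")
atoms.calc = MACECalculator(model_paths="mace_solvent.model", device="cpu")

# Thermostatted NVT dynamics at 300 K with a 1 fs timestep.
MaxwellBoltzmannDistribution(atoms, temperature_K=300)
dyn = Langevin(atoms, timestep=1.0 * units.fs,
               temperature_K=300, friction=0.01 / units.fs)

energies = []

def sample():
    # Record potential energy per atom; drift or divergence here signals an
    # unstable model that a fixed-test-set RMSE alone would not reveal.
    energies.append(atoms.get_potential_energy() / len(atoms))

dyn.attach(sample, interval=100)
dyn.run(50_000)  # 50 ps; unstable models typically fail much earlier
```

Thermodynamic and kinetic observables (densities, radial distribution functions, diffusion coefficients) computed from such trajectories can then be compared against reference simulations or experiment, which is a stricter test than static error metrics because it exposes instabilities and sampling errors in regions the fixed test set does not cover.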