Transferring predictions of formation energy across lattices of increasing size

Massimiliano Lupo Pasini, Mariia Karabin, M. Eisenbach
DOI: 10.1088/2632-2153/ad3d2c
Journal: Machine Learning: Science and Technology
Published: 2024-04-10 (Journal Article)
Citations: 0

Abstract

In this study, we show the transferability of graph convolutional neural network (GCNN) predictions of the formation energy of the nickel-platinum (NiPt) solid solution alloy across atomic structures of increasing size. The original dataset was generated with the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) using the second nearest-neighbor modified embedded-atom method (2NN MEAM) empirical interatomic potential. Geometry optimization was performed on initially randomly generated face-centered cubic (FCC) crystal structures, and the formation energy was calculated at each step of the geometry optimization, with configurations spanning the whole compositional range. Using data from various steps of the geometry optimization, we first trained our open-source, scalable implementation of GCNN, called HydraGNN, on a lattice of 256 atoms, which accounts well for the short-range interactions. We then used this model to predict the formation energy for lattices of 864 atoms and 2,048 atoms, which resulted in lower-than-expected accuracy due to the long-range interactions present in these larger lattices. We accounted for the long-range interactions by including a small amount of training data representative of those two larger sizes, whereupon the predictions of HydraGNN scaled linearly with the size of the lattice. Therefore, our strategy ensured scalability while significantly reducing the computational cost of training on larger lattice sizes.
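The transfer strategy described above can be illustrated with a deliberately simplified sketch. This is not HydraGNN or a GCNN; it is a toy per-atom model with a hypothetical energy function, meant only to show the qualitative idea: a model fitted on small (256-atom) configurations misses a size-dependent long-range term when extrapolated to 2,048 atoms, and a handful of large-lattice samples suffices to correct the per-atom prediction. All function names and numerical constants here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_formation_energy(n_atoms, x_pt):
    """Hypothetical ground truth: a per-atom mixing term plus a weak
    size-dependent correction standing in for long-range interactions."""
    per_atom = -0.05 * x_pt * (1.0 - x_pt)   # short-range mixing energy
    long_range = -1e-4 * np.log(n_atoms)     # grows with lattice size
    return n_atoms * (per_atom + long_range)

# "Training set": many small 256-atom configurations across the
# whole compositional range, as in the paper's setup.
x_small = rng.uniform(0.0, 1.0, 200)
y_small = toy_formation_energy(256, x_small)

# Fit a per-atom surrogate (stand-in for the model trained on 256 atoms).
coeffs = np.polyfit(x_small, y_small / 256, 2)

def predict(n_atoms, x_pt, correction=0.0):
    # Total energy scales linearly with atom count once the
    # per-atom energy is right.
    return n_atoms * (np.polyval(coeffs, x_pt) + correction)

# Naive transfer to 2,048 atoms misses the long-range term ...
x_large = rng.uniform(0.0, 1.0, 20)
y_large = toy_formation_energy(2048, x_large)
err_naive = np.mean(np.abs(predict(2048, x_large) - y_large))

# ... but a small number of large-lattice samples recovers a
# per-atom offset, mimicking the cheap fine-tuning step.
offset = np.mean(y_large[:5] / 2048 - np.polyval(coeffs, x_large[:5]))
err_tuned = np.mean(np.abs(predict(2048, x_large, correction=offset)
                           - y_large))

print(f"naive transfer MAE:    {err_naive:.4f}")
print(f"after small fine-tune: {err_tuned:.4f}")
```

The point of the sketch is the economics, not the model: almost all training data comes from the cheap small-lattice simulations, and only a few large-lattice samples are needed to restore accuracy at the bigger sizes.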