Data Quality in the Fitting of Approximate Models: A Computational Chemistry Perspective.

IF 5.7 | CAS Tier 1 (Chemistry) | JCR Q2 (Chemistry, Physical)
Bun Chan, William Dawson, Takahito Nakajima
{"title":"近似模型拟合中的数据质量:计算化学的视角。","authors":"Bun Chan, William Dawson, Takahito Nakajima","doi":"10.1021/acs.jctc.4c01063","DOIUrl":null,"url":null,"abstract":"<p><p>Empirical parametrization underpins many scientific methodologies including certain quantum-chemistry protocols [e.g., density functional theory (DFT), machine-learning (ML) models]. In some cases, the fitting requires a large amount of data, necessitating the use of data obtained using low-cost, and thus low-quality, means. Here we examine the effect of using low-quality data on the resulting method in the context of DFT methods. We use multiple G2/97 data sets of different qualities to fit the DFT-type methods. Encouragingly, this fitting can tolerate a relatively large proportion of low-quality fitting data, which may be attributed to the physical foundations of the DFT models and the use of a modest number of parameters. Further examination using \"ML-quality\" data shows that adding a large amount of low-quality data to a small number of high-quality ones may not offer tangible benefits. On the other hand, when the high-quality data is limited in scope, diversification by a modest amount of low-quality data improves the performance. Quantitatively, for parametrizing DFT (and perhaps also quantum-chemistry ML models), caution should be taken when more than 50% of the fitting set contains questionable data, and that the average error of the full set is more than 20 kJ mol<sup>-1</sup>. One may also follow the recently proposed transferability principles to ensure diversity in the fitting set.</p>","PeriodicalId":45,"journal":{"name":"Journal of Chemical Theory and Computation","volume":" ","pages":""},"PeriodicalIF":5.7000,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Data Quality in the Fitting of Approximate Models: A Computational Chemistry Perspective.\",\"authors\":\"Bun Chan, William Dawson, Takahito Nakajima\",\"doi\":\"10.1021/acs.jctc.4c01063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Empirical parametrization underpins many scientific methodologies including certain quantum-chemistry protocols [e.g., density functional theory (DFT), machine-learning (ML) models]. In some cases, the fitting requires a large amount of data, necessitating the use of data obtained using low-cost, and thus low-quality, means. Here we examine the effect of using low-quality data on the resulting method in the context of DFT methods. We use multiple G2/97 data sets of different qualities to fit the DFT-type methods. Encouragingly, this fitting can tolerate a relatively large proportion of low-quality fitting data, which may be attributed to the physical foundations of the DFT models and the use of a modest number of parameters. Further examination using \\\"ML-quality\\\" data shows that adding a large amount of low-quality data to a small number of high-quality ones may not offer tangible benefits. On the other hand, when the high-quality data is limited in scope, diversification by a modest amount of low-quality data improves the performance. Quantitatively, for parametrizing DFT (and perhaps also quantum-chemistry ML models), caution should be taken when more than 50% of the fitting set contains questionable data, and that the average error of the full set is more than 20 kJ mol<sup>-1</sup>. 
One may also follow the recently proposed transferability principles to ensure diversity in the fitting set.</p>\",\"PeriodicalId\":45,\"journal\":{\"name\":\"Journal of Chemical Theory and Computation\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":5.7000,\"publicationDate\":\"2024-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Chemical Theory and Computation\",\"FirstCategoryId\":\"92\",\"ListUrlMain\":\"https://doi.org/10.1021/acs.jctc.4c01063\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"CHEMISTRY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Chemical Theory and Computation","FirstCategoryId":"92","ListUrlMain":"https://doi.org/10.1021/acs.jctc.4c01063","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CHEMISTRY, PHYSICAL","Score":null,"Total":0}
Citations: 0

Abstract


Empirical parametrization underpins many scientific methodologies, including certain quantum-chemistry protocols [e.g., density functional theory (DFT), machine-learning (ML) models]. In some cases, the fitting requires a large amount of data, necessitating the use of data obtained by low-cost, and thus low-quality, means. Here we examine the effect of using low-quality data on the resulting method in the context of DFT methods. We use multiple G2/97 data sets of different qualities to fit the DFT-type methods. Encouragingly, this fitting can tolerate a relatively large proportion of low-quality fitting data, which may be attributed to the physical foundations of the DFT models and the use of a modest number of parameters. Further examination using "ML-quality" data shows that adding a large amount of low-quality data to a small number of high-quality data points may not offer tangible benefits. On the other hand, when the high-quality data is limited in scope, diversification with a modest amount of low-quality data improves the performance. Quantitatively, for parametrizing DFT (and perhaps also quantum-chemistry ML models), caution should be taken when more than 50% of the fitting set contains questionable data and the average error of the full set is more than 20 kJ mol⁻¹. One may also follow the recently proposed transferability principles to ensure diversity in the fitting set.
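
As a rough illustration (not code from the paper), the quantitative guideline in the abstract can be expressed as a simple screening check: flag a fitting set when more than 50% of its entries come from questionable sources and its mean absolute error against reliable reference values exceeds 20 kJ/mol. The sketch below is a minimal Python example; the function and variable names (screen_fitting_set, errors_kj_mol, is_questionable) are hypothetical and only serve to make the two thresholds concrete.

```python
import numpy as np

# Thresholds taken from the abstract's rule of thumb (illustrative only):
# be cautious when >50% of the fitting set is questionable AND the mean
# absolute error of the full set exceeds 20 kJ/mol.
MAX_QUESTIONABLE_FRACTION = 0.50
MAX_MEAN_ERROR_KJ_MOL = 20.0

def screen_fitting_set(errors_kj_mol, is_questionable):
    """Flag a fitting set that may degrade a parametrized DFT/ML model.

    errors_kj_mol   -- per-entry deviations of the fitting data from
                       reliable reference values, in kJ/mol
    is_questionable -- boolean array marking entries from low-quality sources
    """
    errors = np.abs(np.asarray(errors_kj_mol, dtype=float))
    flags = np.asarray(is_questionable, dtype=bool)

    questionable_fraction = flags.mean()   # fraction of low-quality entries
    mean_error = errors.mean()             # mean absolute error of full set

    caution = (questionable_fraction > MAX_QUESTIONABLE_FRACTION
               and mean_error > MAX_MEAN_ERROR_KJ_MOL)
    return {
        "questionable_fraction": questionable_fraction,
        "mean_error_kj_mol": mean_error,
        "caution": caution,
    }

# Example: a 10-entry set in which 6 entries come from a low-cost method.
report = screen_fitting_set(
    errors_kj_mol=[5, 8, 30, 25, 40, 12, 18, 35, 22, 28],
    is_questionable=[False, False, True, True, True,
                     False, True, True, False, True],
)
print(report)  # {'questionable_fraction': 0.6, 'mean_error_kj_mol': 22.3, 'caution': True}
```

The abstract states the two thresholds together, so this sketch only raises a caution when both are exceeded; treating them as independent warnings is an equally reasonable design choice.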

Source journal
Journal of Chemical Theory and Computation (Chemistry / Physics: Atomic, Molecular and Chemical Physics)
CiteScore: 9.90
Self-citation rate: 16.40%
Number of articles: 568
Review time: 1 month
Journal description: The Journal of Chemical Theory and Computation invites new and original contributions with the understanding that, if accepted, they will not be published elsewhere. Papers reporting new theories, methodology, and/or important applications in quantum electronic structure, molecular dynamics, and statistical mechanics are appropriate for submission to this Journal. Specific topics include advances in or applications of ab initio quantum mechanics, density functional theory, design and properties of new materials, surface science, Monte Carlo simulations, solvation models, QM/MM calculations, biomolecular structure prediction, and molecular dynamics in the broadest sense including gas-phase dynamics, ab initio dynamics, biomolecular dynamics, and protein folding. The Journal does not consider papers that are straightforward applications of known methods including DFT and molecular dynamics. The Journal favors submissions that include advances in theory or methodology with applications to compelling problems.