{"title":"Identifying MOFs for electrochemical energy storage via density functional theory and machine learning","authors":"Tian Sun, Zhenxiang Wang, Liang Zeng, Guang Feng","doi":"10.1038/s41524-025-01590-w","DOIUrl":"https://doi.org/10.1038/s41524-025-01590-w","url":null,"abstract":"<p>Electrochemical energy storage (EES) systems demand electrode materials with high power density, energy density, and long cycle life. Metal-organic frameworks (MOFs) are promising electrode materials, while new MOFs with high conductivity, high stability, and abundant redox-reactive sites are demanded to meet the growing needs of EES. Density Functional Theory (DFT) could calculate these properties of MOFs and provide atomic-level insights into the mechanisms, based on which machine learning (ML) can screen MOFs for EES efficiently. In this review, we first review the exploration of mechanisms based on DFT calculations. We focus on the conductivity, stability, and reactivity of MOFs in EES systems. Then, we review the steps to apply ML in screening MOFs. Establishing datasets of MOFs, extracting features from MOF structure, and applying ML in screening MOFs are discussed. Finally, the review proposes the future avenue of DFT and ML to make up the gaps in the knowledge of MOFs.</p><figure></figure>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"21 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient modelling of anharmonicity and quantum effects in PdCuH2 with machine learning potentials","authors":"Francesco Belli, Eva Zurek","doi":"10.1038/s41524-025-01553-1","DOIUrl":"https://doi.org/10.1038/s41524-025-01553-1","url":null,"abstract":"<p>Quantum nuclear effects and anharmonicity impact a wide range of functional materials and their properties. One of the most powerful techniques to model these effects is the Stochastic Self-Consistent Harmonic Approximation (SSCHA). Unfortunately, the SSCHA is extremely computationally expensive, prohibiting its routine use. We propose a protocol that pairs machine learning interatomic potentials, which can be tailored for the system at hand via active learning, with the SSCHA. Our method leverages an upscaling procedure that allows for the treatment of supercells of up to thousands of atoms with practically minimal computational effort. The protocol is applied to PdCuH<sub><i>x</i></sub> (<i>x</i> = 0−2) compounds, chosen because previous experimental studies have reported superconducting critical temperatures, <i>T</i><sub>c</sub>s, as high as 17 K at ambient pressures in an unknown hydrogenated PdCu phase. We identify a <i>P</i>4/<i>m</i><i>m</i><i>m</i> PdCuH<sub>2</sub> structure, which is shown to be dynamically stable only upon the inclusion of quantum fluctuations, as being a key contributor to the measured superconductivity. For this system, our methodology is able to reduce the computational expense for the SSCHA calculations by ~96%. 
The proposed protocol opens the door towards the routine inclusion of quantum nuclear motion and anharmonicity in materials discovery.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"33 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143758331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
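The active-learning pairing of machine-learned potentials with the SSCHA described above hinges on one concrete step: choosing which configurations to label with expensive ab initio calculations. A minimal sketch of that uncertainty-driven selection, using a committee of toy linear "potentials" and feature-vector stand-ins for structures; all names here (`committee_variance`, `select_for_labeling`) are illustrative, not the paper's actual implementation.

```python
def committee_variance(models, structure):
    """Disagreement (variance) of an ensemble of toy linear 'potentials'."""
    preds = [sum(w * x for w, x in zip(m, structure)) for m in models]
    mean = sum(preds) / len(preds)
    return sum((p - mean) ** 2 for p in preds) / len(preds)

def select_for_labeling(models, pool, k):
    """Active-learning step: pick the k candidate structures with the
    largest committee disagreement for expensive ab initio labeling."""
    return sorted(pool, key=lambda s: committee_variance(models, s), reverse=True)[:k]

# Three committee members with slightly different weights; "structures" are
# stand-in feature vectors rather than real atomic configurations.
models = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.2]]
pool = [[0.0, 0.0], [1.0, 1.0], [5.0, -5.0], [0.1, 0.2]]
picked = select_for_labeling(models, pool, 1)
print(picked)  # -> [[5.0, -5.0]]: the structure the committee disagrees on most
```

In a real workflow the selected structures would be labeled with DFT and fed back into MLIP retraining, iterating until the committee agrees on the SSCHA sampling region.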
{"title":"APEX: an automated cloud-native material property explorer","authors":"Zhuoyuan Li, Tongqi Wen, Yuzhi Zhang, Xinzijian Liu, Chengqian Zhang, A. S. L. Subrahmanyam Pattamatta, Xiaoguo Gong, Beilin Ye, Han Wang, Linfeng Zhang, David J. Srolovitz","doi":"10.1038/s41524-025-01580-y","DOIUrl":"https://doi.org/10.1038/s41524-025-01580-y","url":null,"abstract":"<p>The ability to rapidly evaluate materials properties through atomistic simulation approaches is the foundation of many new artificial intelligence-based approaches to materials identification and design. This depends on the availability of accurate descriptions of atomic bonding and an efficient means for determining materials properties. We present an efficient, robust platform for calculating materials properties from a wide-range of atomic bonding descriptions, i.e., APEX, the Alloy Property Explorer. APEX enables the rapid evolution of interatomic potential development and optimization, which is of particular importance in fine-tuning new classes of general AI-based foundation models for applications in materials science and engineering. APEX is an open-source, extendable, cloud-native platform for material property calculations using a range of atomistic simulation methodologies that effectively manages diverse computational resources and is built upon user-friendly features including automatic results visualization, a web-based platform and a NoSQL database client. It is designed for expert and non-specialist users, lowering the barrier to entry for interdisciplinary research within an “AI for Materials” framework. 
We describe the foundation and use of APEX, as well as provide two examples of its application to properties of titanium and 179 metals and alloys for a wide-range of bonding descriptions.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"31 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143758333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application-oriented design of machine learning paradigms for battery science","authors":"Ying Wang","doi":"10.1038/s41524-025-01575-9","DOIUrl":"https://doi.org/10.1038/s41524-025-01575-9","url":null,"abstract":"<p>In the development of battery science, machine learning (ML) has been widely employed to predict material properties, monitor morphological variations, learn the underlying physical rules and simplify the material-discovery processes. However, the widespread adoption of ML in battery research has encountered limitations, such as the incomplete and unfocused databases, the low model accuracy and the difficulty in realizing experimental validation. It is significant to construct the dataset containing specific-domain knowledge with suitable ML models for battery research from the application-oriented perspective. We outline five key challenges in the field and highlight potential research directions that can unlock the full potential of ML in advancing battery technologies.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"75 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143758334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ab initio dynamical mean field theory with natural orbitals renormalization group impurity solver","authors":"Jia-Ming Wang, Jing-Xuan Wang, Rong-Qiang He, Li Huang, Zhong-Yi Lu","doi":"10.1038/s41524-025-01586-6","DOIUrl":"https://doi.org/10.1038/s41524-025-01586-6","url":null,"abstract":"<p>In this study, we introduce a novel implementation of density functional theory integrated with single-site dynamical mean-field theory to investigate the complex properties of strongly correlated materials. This ab initio many-body computational toolkit, termed <span>Zen</span>, utilizes the <span>VASP</span> and <span>Quantum ESPRESSO</span> codes to perform first-principles calculations and generate band structures for realistic materials. The challenges associated with correlated electron systems are addressed through two distinct yet complementary quantum impurity solvers: the natural orbitals renormalization group solver for zero temperature and the hybridization expansion continuous-time quantum Monte Carlo solver for finite temperatures. To validate the performance of this toolkit, we examine three representative cases: correlated metal SrVO<sub>3</sub>, unconventional superconductor La<sub>3</sub>Ni<sub>2</sub>O<sub>7</sub>, and Mott insulator MnO. The calculated results exhibit excellent agreement with previously available experimental and theoretical findings. 
Thus, it is suggested that the <span>Zen</span> toolkit is proficient in accurately describing the electronic structures of <i>d</i>-electron correlated materials.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"102 4 Pt 1 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hydrogen diffusion in magnesium using machine learning potentials: a comparative study","authors":"Andrea Angeletti, Luca Leoni, Dario Massa, Luca Pasquini, Stefanos Papanikolaou, Cesare Franchini","doi":"10.1038/s41524-025-01555-z","DOIUrl":"https://doi.org/10.1038/s41524-025-01555-z","url":null,"abstract":"<p>Understanding and accurately predicting hydrogen diffusion in materials is challenging due to the complex interactions between hydrogen defects and the crystal lattice. These interactions span large length and time scales, making them difficult to address with standard ab-initio techniques. This work addresses this challenge by employing accelerated machine learning (ML) molecular dynamics simulations through active learning. We conduct a comparative study of different ML-based interatomic potential schemes, including VASP, MACE, and CHGNet, utilizing various training strategies such as on-the-fly learning, pre-trained universal models, and fine-tuning. By considering different temperatures and concentration regimes, we obtain hydrogen diffusion coefficients and activation energy values which align remarkably well with experimental results, underlining the efficacy and accuracy of ML-assisted methodologies in the context of diffusive dynamics. Particularly, our procedure significantly reduces the computational effort associated with traditional transition state calculations or ad-hoc designed interatomic potentials. The results highlight the limitations of pre-trained universal solutions for defective materials and how they can be improved by fine-tuning. 
Specifically, fine-tuning the models on a database produced during on-the-fly training of VASP ML force-field allows the retrieval of DFT-level accuracy at a fraction of the computational cost.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"58 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
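The diffusion coefficients and activation energies reported above are typically extracted from MD trajectories via the Einstein relation and an Arrhenius fit. A minimal sketch of those two post-processing steps on synthetic data; the function names and toy numbers are illustrative, not taken from the paper.

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def diffusion_coefficient(msd, times, dim=3):
    """Einstein relation MSD(t) = 2*dim*D*t: least-squares slope through the origin."""
    slope = sum(t * m for t, m in zip(times, msd)) / sum(t * t for t in times)
    return slope / (2 * dim)

def activation_energy(temperatures, diffusivities):
    """Arrhenius fit ln D = ln D0 - Ea/(kB*T); returns Ea (eV) from the slope."""
    xs = [1.0 / T for T in temperatures]
    ys = [math.log(d) for d in diffusivities]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
        (x - xbar) ** 2 for x in xs
    )
    return -slope * KB_EV

# Synthetic check: data generated from an exact Arrhenius law with Ea = 0.2 eV.
temps = [500.0, 700.0, 900.0]
diffs = [1e-6 * math.exp(-0.2 / (KB_EV * T)) for T in temps]
print(round(activation_energy(temps, diffs), 6))  # -> 0.2
```

With real MD data, the MSD would come from unwrapped hydrogen trajectories averaged over atoms and time origins, and D(T) from independent runs at each temperature.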
{"title":"Fine-tuning large language models for domain adaptation: exploration of training strategies, scaling, model merging and synergistic capabilities","authors":"Wei Lu, Rachel K. Luu, Markus J. Buehler","doi":"10.1038/s41524-025-01564-y","DOIUrl":"https://doi.org/10.1038/s41524-025-01564-y","url":null,"abstract":"<p>The advancement of Large Language Models (LLMs) for domain applications in fields such as materials science and engineering depends on the development of fine-tuning strategies that adapt models for specialized, technical capabilities. In this work, we explore the effects of Continued Pretraining (CPT), Supervised Fine-Tuning (SFT), and various preference-based optimization approaches, including Direct Preference Optimization (DPO) and Odds Ratio Preference Optimization (ORPO), on fine-tuned LLM performance. Our analysis shows how these strategies influence model outcomes and reveals that the merging of multiple fine-tuned models can lead to the emergence of capabilities that surpass the individual contributions of the parent models. We find that model merging is not merely a process of aggregation, but a transformative method that can drive substantial advancements in model capabilities characterized by highly nonlinear interactions between model parameters, resulting in new functionalities that neither parent model could achieve alone, leading to improved performance in domain-specific assessments. We study critical factors that influence the success of model merging, such as the diversity between parent models and the fine-tuning techniques employed. The insights underscore the potential of strategic model merging to unlock novel capabilities in LLMs, offering an effective tool for advancing AI systems to meet complex challenges. Experiments with different model architectures are presented, including the Llama 3.1 8B and Mistral 7B family of models, where similar behaviors are observed. 
Exploring whether the results hold also for much smaller models, we use a tiny LLM with 1.7 billion parameters and show that very small LLMs do not necessarily feature emergent capabilities under model merging, suggesting that model scaling may be a key component. In open-ended yet consistent chat conversations between a human and AI models, our assessment reveals detailed insights into how different model variants perform, and shows that the smallest model achieves a high intelligence score across key criteria including reasoning depth, creativity, clarity, and quantitative precision. Other experiments include the development of image generation prompts that seek to reason over disparate biological material design concepts, to create new microstructures, architectural concepts, and urban design based on biological materials-inspired construction principles. We conclude with a series of questions about scaling and emergence that could be addressed in future research.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"183 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143734275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
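The model merging studied above operates in weight space. The simplest concrete recipe is linear interpolation of parent checkpoints ("model soup" style averaging); a minimal sketch with parameters represented as flat float lists, where the dictionary keys and toy values are hypothetical, and the paper's actual merging procedures may differ in detail.

```python
def merge_models(state_dicts, weights=None):
    """Linearly interpolate parameter dictionaries in weight space.

    Each state dict maps a parameter name to a flat list of floats.
    This is a 'model soup' style average, not necessarily the exact
    merge used in the work summarized above.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    assert abs(sum(weights) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    merged = {}
    for name in state_dicts[0]:
        n_params = len(state_dicts[0][name])
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts))
            for i in range(n_params)
        ]
    return merged

# Two hypothetical fine-tuned checkpoints, merged with equal weights.
model_a = {"layer.w": [1.0, 2.0], "layer.b": [0.0]}
model_b = {"layer.w": [3.0, 4.0], "layer.b": [1.0]}
merged = merge_models([model_a, model_b])
print(merged)  # -> {'layer.w': [2.0, 3.0], 'layer.b': [0.5]}
```

The nonlinear emergence the authors describe is a property of how the merged weights behave at inference, not of the averaging itself, which is purely linear.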
{"title":"Learning to predict rare events: the case of abnormal grain growth","authors":"Houliang Zhou, Benjamin Zalatan, Joan Stanescu, Martin P. Harmer, Jeffrey M. Rickman, Lifang He, Christopher J. Marvel, Brian Y. Chen","doi":"10.1038/s41524-025-01530-8","DOIUrl":"https://doi.org/10.1038/s41524-025-01530-8","url":null,"abstract":"<p>Abnormal grain growth (AGG) in polycrystalline microstructures, characterized by the rapid and disproportionate enlargement of a few “abnormal” grains relative to their surroundings, can lead to dramatic, often deleterious changes in the mechanical properties of materials, such as strength and toughness. Thus, the prediction and control of AGG is key to realizing robust mesoscale materials design. Unfortunately, it is challenging to predict these rare events far in advance of their onset because, at early stages, there is little to distinguish incipient abnormal grains from “normal” grains. To overcome this difficulty, we propose two machine learning approaches for predicting whether a grain will become abnormal in the future. These methods analyze grain properties derived from the spatio-temporal evolution of grain characteristics, grain-grain interactions, and a network-based analysis of these relationships. The first, PAL (<b>P</b>redicting <b>A</b>bnormality with <b>L</b>STM), analyzes grain features using a long short-term memory (LSTM) network, and the second, PAGL (<b>P</b>redicting <b>A</b>bnormality with <b>G</b>CRN and <b>L</b>STM), supplements the LSTM with a graph-based convolutional recurrent network (GCRN). We validated these methods on three distinct material scenarios with differing grain properties, observing that PAL and PAGL achieve high sensitivity and precision and, critically, that they are able to predict future abnormality long before it occurs. 
Finally, we consider the application of the deep learning models developed here to the prediction of rare events in different contexts.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"29 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143712834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
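For rare events like AGG, plain accuracy is misleading (a classifier predicting "normal" for every grain scores highly), which is why the abstract above reports sensitivity and precision. A minimal sketch of those two metrics for an imbalanced grain-labeling task; the function name and toy labels are illustrative.

```python
def sensitivity_precision(y_true, y_pred):
    """Sensitivity (recall) and precision for a rare positive class,
    e.g. label 1 = grain becomes abnormal, label 0 = stays normal."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, precision

# 2 abnormal grains out of 10; both are caught, one normal grain is a false alarm.
labels = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
preds = [0, 1, 1, 0, 0, 0, 1, 0, 0, 0]
print(sensitivity_precision(labels, preds))  # -> (1.0, 0.6666666666666666)
```

Note that an all-negative predictor on these labels would score 80% accuracy yet zero sensitivity, which is exactly the failure mode these metrics expose.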
{"title":"Unintuitive alloy strengthening by addition of weaker elements","authors":"Dharmendra Pant, Dilpuneet S. Aidhy","doi":"10.1038/s41524-025-01576-8","DOIUrl":"https://doi.org/10.1038/s41524-025-01576-8","url":null,"abstract":"<p>A positive correlation between strength and elastic modulus is generally observed in metallic alloys, where the addition of a stronger element such as Mo, W, or Cr increases both the strength and elastic modulus. Our density functional theory (DFT) calculations explain an opposite experimentally measured trend, i.e., the addition of a weaker element such as Ti, Hf, or Zr enhances the yield strength in specific high entropy alloys (HEAs). We show that the underlying mechanism is the lower bond stiffness of the weaker element, which causes larger local lattice distortion (LLD). Higher lattice distortion pins the movement of dislocations, causing solid solution strengthening, thereby raising the strength in body-centered cubic (BCC) refractory HEAs. We show this unintuitive behavior in Ti-based HEAs, i.e., Ti<sub>x</sub>MoNbTaW, and compare it with the conventional behavior in Mo<sub>x</sub>NbTiV<sub>0.3</sub>Zr.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"35 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143712836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Latent Ewald summation for machine learning of long-range interactions","authors":"Bingqing Cheng","doi":"10.1038/s41524-025-01577-7","DOIUrl":"https://doi.org/10.1038/s41524-025-01577-7","url":null,"abstract":"<p>Machine learning interatomic potentials (MLIPs) often neglect long-range interactions, such as electrostatic and dispersion forces. In this work, we introduce a straightforward and efficient method to account for long-range interactions by learning a hidden variable from local atomic descriptors and applying an Ewald summation to this variable. We demonstrate that in systems including charged and polar molecular dimers, bulk water, and water-vapor interface, standard short-ranged MLIPs can lead to unphysical predictions even when employing message passing. The long-range models effectively eliminate these artifacts, with only about twice the computational cost of short-range MLIPs.</p>","PeriodicalId":19342,"journal":{"name":"npj Computational Materials","volume":"41 1","pages":""},"PeriodicalIF":9.7,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143703062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}