{"title":"催化中吸附构型的多模态语言和图式学习","authors":"Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani","doi":"10.1038/s42256-024-00930-7","DOIUrl":null,"url":null,"abstract":"Adsorption energy is a reactivity descriptor that must be accurately predicted for effective machine learning application in catalyst screening. This process involves finding the lowest energy among different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle with accurately predicting the energy of adsorption configurations. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8%, redirecting the model’s attention towards adsorption configuration. Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions. This demonstrates a potential use case of language models in energy prediction without detailed geometric information. Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1501-1511"},"PeriodicalIF":18.8000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal language and graph learning of adsorption configuration in catalysis\",\"authors\":\"Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani\",\"doi\":\"10.1038/s42256-024-00930-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adsorption energy is a reactivity descriptor that must be accurately predicted for effective machine learning application in catalyst screening. This process involves finding the lowest energy among different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle with accurately predicting the energy of adsorption configurations. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8%, redirecting the model’s attention towards adsorption configuration. 
Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions. This demonstrates a potential use case of language models in energy prediction without detailed geometric information. Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.\",\"PeriodicalId\":48533,\"journal\":{\"name\":\"Nature Machine Intelligence\",\"volume\":\"6 12\",\"pages\":\"1501-1511\"},\"PeriodicalIF\":18.8000,\"publicationDate\":\"2024-11-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nature Machine Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.nature.com/articles/s42256-024-00930-7\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.nature.com/articles/s42256-024-00930-7","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Multimodal language and graph learning of adsorption configuration in catalysis
Adsorption energy is a reactivity descriptor that must be accurately predicted for effective machine-learning applications in catalyst screening. This process involves finding the lowest energy among the different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle to accurately predict the energy of adsorption configurations. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8% and redirects the model's attention towards the adsorption configuration. Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions, demonstrating a potential use case for language models in energy prediction without detailed geometric information.

Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method aligns a language model's latent space with graph neural networks through graph-assisted pretraining.
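To make the screening objective concrete: the adsorption energy reported for a surface is the minimum predicted energy over a set of candidate adsorption configurations. A minimal sketch, with hypothetical site labels and energies:

```python
# Minimal sketch: the adsorption energy is the minimum over candidate
# configurations. Site labels and energies below are hypothetical.
predicted_energies = {   # eV, one entry per relaxed configuration
    "ontop_site": -0.42,
    "bridge_site": -0.39,
    "hollow_site": -0.47,
}
best_config = min(predicted_energies, key=predicted_energies.get)
adsorption_energy = predicted_energies[best_config]
print(best_config, adsorption_energy)  # hollow_site -0.47
```

Because the candidates often differ by only a few tens of millielectronvolts, small per-configuration prediction errors can change which configuration is selected, which is why the abstract stresses accuracy at the configuration level.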
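The graph-assisted pretraining step can be pictured as a CLIP-style contrastive alignment between the language model's embedding of a configuration's text description and a frozen graph neural network's embedding of the same structure. The sketch below is an illustration under stated assumptions, not the authors' released code; the module names, dimensions and the symmetric InfoNCE loss are all assumptions.

```python
# Hypothetical sketch of graph-assisted pretraining: align a language model's
# latent space with a frozen graph neural network via a symmetric contrastive
# (InfoNCE) loss. Module names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    """Projects text and graph embeddings into a shared space for alignment."""

    def __init__(self, text_dim=768, graph_dim=256, shared_dim=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.graph_proj = nn.Linear(graph_dim, shared_dim)
        # Learnable temperature, stored in log space so it stays positive.
        self.log_temp = nn.Parameter(torch.log(torch.tensor(0.07)))

    def forward(self, text_emb, graph_emb):
        # L2-normalise so the dot product is a cosine similarity.
        t = F.normalize(self.text_proj(text_emb), dim=-1)
        g = F.normalize(self.graph_proj(graph_emb), dim=-1)
        # Matching (text, graph) pairs sit on the diagonal of the logit matrix.
        logits = t @ g.T / self.log_temp.exp()
        targets = torch.arange(t.size(0), device=t.device)
        # Symmetric InfoNCE: text-to-graph plus graph-to-text cross-entropy.
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.T, targets)) / 2

# Toy usage: stand-ins for a language model's pooled embedding and a frozen
# GNN embedding of the same batch of 32 adsorption configurations.
head = AlignmentHead()
text_emb = torch.randn(32, 768)
graph_emb = torch.randn(32, 256)
loss = head(text_emb, graph_emb)
loss.backward()
```

Minimising this loss pulls the text embedding of each configuration towards the graph embedding of the same configuration and away from the others in the batch, which is one plausible way to realise the latent-space alignment the abstract describes before fine-tuning on energy labels.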
Journal introduction:
Nature Machine Intelligence publishes original research and reviews across machine learning, robotics and AI. Its focus extends beyond these fields to their impact on other scientific disciplines and on society and industry. The journal sees wide-ranging possibilities for machine intelligence to augment human capabilities and knowledge in domains such as scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation and agriculture, while acknowledging the ethical, social and legal concerns that arise from the rapid pace of these advances.
To foster interdisciplinary discussion of these far-reaching implications, Nature Machine Intelligence provides a platform for dialogue through Comments, News Features, News & Views articles and Correspondence, encouraging a comprehensive examination of these subjects.
Like all Nature-branded journals, Nature Machine Intelligence is guided by a team of skilled editors and adheres to a fair and rigorous peer-review process, with high standards of copy-editing and production, swift publication and editorial independence.