Empowering Generalist Material Intelligence with Large Language Models

IF 27.4 · CAS Tier 1 (Materials Science) · JCR Q1 (Chemistry, Multidisciplinary)
Wenhao Yuan, Guangyao Chen, Zhilong Wang, Fengqi You
{"title":"用大型语言模型增强通才材料智能。","authors":"Wenhao Yuan,Guangyao Chen,Zhilong Wang,Fengqi You","doi":"10.1002/adma.202502771","DOIUrl":null,"url":null,"abstract":"Large language models (LLMs) are steering the development of generalist materials intelligence (GMI), a unified framework integrating conceptual reasoning, computational modeling, and experimental validation. Central to this framework is the agent-in-the-loop paradigm, where LLM-based agents function as dynamic orchestrators, synthesizing multimodal knowledge, specialized models, and experimental robotics to enable fully autonomous discovery. Drawing from a comprehensive review of LLMs' transformative impact across representative applications in materials science, including data extraction, property prediction, structure generation, synthesis planning, and self-driven labs, this study underscores how LLMs are revolutionizing traditional tasks, catalyzing the agent-in-the-loop paradigm, and bridging the ontology-concept-computation-experiment continuum. Then the unique challenges of scaling up LLM adoption are discussed, particularly those arising from the misalignment of foundation LLMs with materials-specific knowledge, emphasizing the need to enhance adaptability, efficiency, sustainability, interpretability, and trustworthiness in the pursuit of GMI. Nonetheless, it is important to recognize that LLMs are not universally efficient. Their substantial resource demands and inconsistent performance call for careful deployment based on demonstrated task suitability. To address these realities, actionable strategies and a progressive roadmap for equitably and democratically implementing materials-aware LLMs in real-world practices are proposed.","PeriodicalId":114,"journal":{"name":"Advanced Materials","volume":"37 1","pages":"e2502771"},"PeriodicalIF":27.4000,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Empowering Generalist Material Intelligence with Large Language Models.\",\"authors\":\"Wenhao Yuan,Guangyao Chen,Zhilong Wang,Fengqi You\",\"doi\":\"10.1002/adma.202502771\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large language models (LLMs) are steering the development of generalist materials intelligence (GMI), a unified framework integrating conceptual reasoning, computational modeling, and experimental validation. Central to this framework is the agent-in-the-loop paradigm, where LLM-based agents function as dynamic orchestrators, synthesizing multimodal knowledge, specialized models, and experimental robotics to enable fully autonomous discovery. Drawing from a comprehensive review of LLMs' transformative impact across representative applications in materials science, including data extraction, property prediction, structure generation, synthesis planning, and self-driven labs, this study underscores how LLMs are revolutionizing traditional tasks, catalyzing the agent-in-the-loop paradigm, and bridging the ontology-concept-computation-experiment continuum. Then the unique challenges of scaling up LLM adoption are discussed, particularly those arising from the misalignment of foundation LLMs with materials-specific knowledge, emphasizing the need to enhance adaptability, efficiency, sustainability, interpretability, and trustworthiness in the pursuit of GMI. Nonetheless, it is important to recognize that LLMs are not universally efficient. 
Their substantial resource demands and inconsistent performance call for careful deployment based on demonstrated task suitability. To address these realities, actionable strategies and a progressive roadmap for equitably and democratically implementing materials-aware LLMs in real-world practices are proposed.\",\"PeriodicalId\":114,\"journal\":{\"name\":\"Advanced Materials\",\"volume\":\"37 1\",\"pages\":\"e2502771\"},\"PeriodicalIF\":27.4000,\"publicationDate\":\"2025-05-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced Materials\",\"FirstCategoryId\":\"88\",\"ListUrlMain\":\"https://doi.org/10.1002/adma.202502771\",\"RegionNum\":1,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Materials","FirstCategoryId":"88","ListUrlMain":"https://doi.org/10.1002/adma.202502771","RegionNum":1,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Large language models (LLMs) are steering the development of generalist materials intelligence (GMI), a unified framework integrating conceptual reasoning, computational modeling, and experimental validation. Central to this framework is the agent-in-the-loop paradigm, where LLM-based agents function as dynamic orchestrators, synthesizing multimodal knowledge, specialized models, and experimental robotics to enable fully autonomous discovery. Drawing from a comprehensive review of LLMs' transformative impact across representative applications in materials science, including data extraction, property prediction, structure generation, synthesis planning, and self-driven labs, this study underscores how LLMs are revolutionizing traditional tasks, catalyzing the agent-in-the-loop paradigm, and bridging the ontology-concept-computation-experiment continuum. Then the unique challenges of scaling up LLM adoption are discussed, particularly those arising from the misalignment of foundation LLMs with materials-specific knowledge, emphasizing the need to enhance adaptability, efficiency, sustainability, interpretability, and trustworthiness in the pursuit of GMI. Nonetheless, it is important to recognize that LLMs are not universally efficient. Their substantial resource demands and inconsistent performance call for careful deployment based on demonstrated task suitability. To address these realities, actionable strategies and a progressive roadmap for equitably and democratically implementing materials-aware LLMs in real-world practices are proposed.
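The agent-in-the-loop pattern the abstract describes can be made concrete with a small sketch. The following Python is a hypothetical, minimal illustration only: every class, method, and value is invented for exposition and does not come from the paper or any real materials library. The placeholder bodies stand in for an LLM-based proposer, a specialized surrogate property model, and a self-driving-lab experiment.

```python
# Hypothetical sketch of the agent-in-the-loop idea described above:
# an LLM-based agent orchestrates specialized tools (structure proposer,
# property predictor, robotic lab) and closes the discovery loop.
# All names and values are invented for illustration. Requires Python 3.10+.

from dataclasses import dataclass


@dataclass
class Candidate:
    formula: str
    predicted_property: float | None = None
    measured_property: float | None = None


class AgentInTheLoop:
    """Toy orchestrator: propose -> predict -> validate -> refine."""

    def __init__(self, target: float, budget: int = 3):
        self.target = target              # desired property value
        self.budget = budget              # max experimental iterations
        self.history: list[Candidate] = []

    def propose(self) -> Candidate:
        # Stand-in for an LLM or generative model suggesting a structure,
        # conditioned on self.history (all prior results).
        return Candidate(formula=f"Mat-{len(self.history) + 1}")

    def predict(self, c: Candidate) -> Candidate:
        # Stand-in for a specialized surrogate model (e.g., a GNN).
        c.predicted_property = 0.8 * self.target  # placeholder value
        return c

    def run_experiment(self, c: Candidate) -> Candidate:
        # Stand-in for a self-driving-lab call (synthesis + measurement).
        c.measured_property = c.predicted_property  # placeholder
        return c

    def discover(self) -> Candidate:
        best = None
        for _ in range(self.budget):
            c = self.run_experiment(self.predict(self.propose()))
            self.history.append(c)  # feedback edge: closes the loop
            if best is None or abs(c.measured_property - self.target) < abs(
                best.measured_property - self.target
            ):
                best = c
        return best


if __name__ == "__main__":
    agent = AgentInTheLoop(target=1.5)
    print(agent.discover())
```

The essential design point in this sketch is the feedback edge: each (simulated) experimental result is appended to `history`, so the next proposal can be conditioned on everything learned so far, which is what distinguishes an agent-in-the-loop workflow from a one-shot prediction pipeline.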
Source journal
Advanced Materials (Engineering & Technology · Materials Science: General)
CiteScore: 43.00
Self-citation rate: 4.10%
Articles per year: 2,182
Review time: 2 months
Journal overview: Advanced Materials, one of the world's most prestigious journals and the foundation of the Advanced portfolio, has been the home of choice for best-in-class materials science for more than 30 years. Covering this fast-growing, interdisciplinary field, the journal considers and publishes the most important discoveries on any and all materials from materials scientists, chemists, physicists, and engineers, as well as health and life scientists, bringing readers the latest results and trends in modern materials-related research every week.