Comparison of LLMs in extracting synthesis conditions and generating Q&A datasets for metal–organic frameworks

IF 6.2 Q1 CHEMISTRY, MULTIDISCIPLINARY
Yuang Shi, Nakul Rampal, Chengbin Zhao, Dongrong Joe Fu, Christian Borgs, Jennifer T. Chayes and Omar M. Yaghi
{"title":"llm在提取金属有机骨架合成条件和生成问答数据集方面的比较[j]","authors":"Yuang Shi, Nakul Rampal, Chengbin Zhao, Dongrong Joe Fu, Christian Borgs, Jennifer T. Chayes and Omar M. Yaghi","doi":"10.1039/D5DD00081E","DOIUrl":null,"url":null,"abstract":"<p >Artificial intelligence, represented by large language models (LLMs), has demonstrated tremendous capabilities in natural language recognition and extraction. To further evaluate the performance of various LLMs in extracting information from academic papers, this study explores the application of LLMs in reticular chemistry, focusing on their effectiveness in generating Q&amp;A datasets and extracting synthesis conditions from scientific literature. The models evaluated include OpenAI's GPT-4 Turbo, Anthropic's Claude 3 Opus, and Google's Gemini 1.5 Pro. Key results indicate that Claude excelled in providing complete synthesis data, while Gemini outperformed others in accuracy, characterization-free compliance (obedience), and proactive structuring of responses. Although GPT-4 was less effective in quantitative metrics, it demonstrated strong logical reasoning and contextual inference capabilities. Overall, Gemini and Claude achieved the highest scores in accuracy, groundedness, and adherence to prompt requirements, making them suitable benchmarks for future studies. The findings reveal the potential of LLMs to aid in scientific research, particularly in the efficient construction of structured datasets, which can help train models, predict, and assist in the synthesis of new metal–organic frameworks (MOFs).</p>","PeriodicalId":72816,"journal":{"name":"Digital discovery","volume":" 10","pages":" 2676-2683"},"PeriodicalIF":6.2000,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://pubs.rsc.org/en/content/articlepdf/2025/dd/d5dd00081e?page=search","citationCount":"0","resultStr":"{\"title\":\"Comparison of LLMs in extracting synthesis conditions and generating Q&A datasets for metal–organic frameworks†\",\"authors\":\"Yuang Shi, Nakul Rampal, Chengbin Zhao, Dongrong Joe Fu, Christian Borgs, Jennifer T. Chayes and Omar M. Yaghi\",\"doi\":\"10.1039/D5DD00081E\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p >Artificial intelligence, represented by large language models (LLMs), has demonstrated tremendous capabilities in natural language recognition and extraction. To further evaluate the performance of various LLMs in extracting information from academic papers, this study explores the application of LLMs in reticular chemistry, focusing on their effectiveness in generating Q&amp;A datasets and extracting synthesis conditions from scientific literature. The models evaluated include OpenAI's GPT-4 Turbo, Anthropic's Claude 3 Opus, and Google's Gemini 1.5 Pro. Key results indicate that Claude excelled in providing complete synthesis data, while Gemini outperformed others in accuracy, characterization-free compliance (obedience), and proactive structuring of responses. Although GPT-4 was less effective in quantitative metrics, it demonstrated strong logical reasoning and contextual inference capabilities. Overall, Gemini and Claude achieved the highest scores in accuracy, groundedness, and adherence to prompt requirements, making them suitable benchmarks for future studies. 
The findings reveal the potential of LLMs to aid in scientific research, particularly in the efficient construction of structured datasets, which can help train models, predict, and assist in the synthesis of new metal–organic frameworks (MOFs).</p>\",\"PeriodicalId\":72816,\"journal\":{\"name\":\"Digital discovery\",\"volume\":\" 10\",\"pages\":\" 2676-2683\"},\"PeriodicalIF\":6.2000,\"publicationDate\":\"2025-05-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://pubs.rsc.org/en/content/articlepdf/2025/dd/d5dd00081e?page=search\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital discovery\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://pubs.rsc.org/en/content/articlelanding/2025/dd/d5dd00081e\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital discovery","FirstCategoryId":"1085","ListUrlMain":"https://pubs.rsc.org/en/content/articlelanding/2025/dd/d5dd00081e","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract



Artificial intelligence, represented by large language models (LLMs), has demonstrated tremendous capabilities in natural language recognition and extraction. To further evaluate the performance of various LLMs in extracting information from academic papers, this study explores the application of LLMs in reticular chemistry, focusing on their effectiveness in generating Q&A datasets and extracting synthesis conditions from scientific literature. The models evaluated include OpenAI's GPT-4 Turbo, Anthropic's Claude 3 Opus, and Google's Gemini 1.5 Pro. Key results indicate that Claude excelled in providing complete synthesis data, while Gemini outperformed others in accuracy, characterization-free compliance (obedience), and proactive structuring of responses. Although GPT-4 was less effective in quantitative metrics, it demonstrated strong logical reasoning and contextual inference capabilities. Overall, Gemini and Claude achieved the highest scores in accuracy, groundedness, and adherence to prompt requirements, making them suitable benchmarks for future studies. The findings reveal the potential of LLMs to aid in scientific research, particularly in the efficient construction of structured datasets, which can help train models, predict, and assist in the synthesis of new metal–organic frameworks (MOFs).
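To make the extraction task concrete, the sketch below shows one way such a pipeline could be wired up in Python. It is a minimal illustration under assumed names: the schema fields, the prompt wording, and the generic `call_llm` placeholder are not taken from the paper, which does not publish its prompts or schema here.

```python
# Illustrative sketch only: a generic harness for prompting an LLM to return
# MOF synthesis conditions as JSON. Field names and prompt wording are
# assumptions for illustration, not the schema or prompts used in the paper.
import json
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class SynthesisConditions:
    metal_source: Optional[str] = None
    organic_linker: Optional[str] = None
    solvent: Optional[str] = None
    temperature_c: Optional[float] = None
    time_h: Optional[float] = None


PROMPT_TEMPLATE = (
    "Extract the MOF synthesis conditions from the paragraph below. "
    "Reply with JSON only, using exactly these keys: metal_source, "
    "organic_linker, solvent, temperature_c, time_h. Use null for any "
    "value that the text does not state.\n\nParagraph:\n{paragraph}"
)


def extract_conditions(paragraph: str,
                       call_llm: Callable[[str], str]) -> SynthesisConditions:
    """Prompt an LLM for structured synthesis conditions and parse its reply.

    `call_llm` is a stand-in for whichever chat API is being compared
    (GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, ...): it takes a prompt
    string and returns the model's text reply.
    """
    reply = call_llm(PROMPT_TEMPLATE.format(paragraph=paragraph))
    # Assumes the model obeyed the JSON-only instruction; a real pipeline
    # would need to validate the reply and retry on malformed output.
    data = json.loads(reply)
    fields = SynthesisConditions.__annotations__
    return SynthesisConditions(**{k: data.get(k) for k in fields})
```

A Q&A dataset-generation step could reuse the same pattern with a different prompt; the accuracy, groundedness, and obedience metrics discussed in the abstract would then measure how often a model's replies respect the requested structure and stay within the source text.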

Source journal: Digital Discovery (CiteScore 2.80; self-citation rate 0.00%)