Automated literature research and review-generation method based on large language models.

IF 16.3 · CAS Zone 1 · Multidisciplinary journal · Q1 MULTIDISCIPLINARY SCIENCES
National Science Review · Pub Date: 2025-04-25 · eCollection Date: 2025-06-01 · DOI: 10.1093/nsr/nwaf169
Shican Wu, Xiao Ma, Dehui Luo, Lulu Li, Xiangcheng Shi, Xin Chang, Xiaoyun Lin, Ran Luo, Chunlei Pei, Changying Du, Zhi-Jian Zhao, Jinlong Gong
Citation count: 0

Abstract


Literature research, which is vital for scientific work, faces the challenge of surging information volumes that are exceeding researchers' processing capabilities. This paper describes an automated review-generation method based on large language models (LLMs) to overcome efficiency bottlenecks and reduce cognitive load. Our statistically validated evaluation framework demonstrates that the generated reviews match or exceed manual quality, offering broad applicability across research fields without requiring user domain knowledge. Applied to propane dehydrogenation catalysts, our method demonstrated two aspects: first, generating comprehensive reviews from 343 articles spanning 35 topics; and, second, evaluating data-mining capabilities by using 1041 articles for experimental catalyst property analysis. Through multilayered quality control, we effectively mitigated the hallucinations of LLMs, with expert verification confirming accuracy and citation integrity, while demonstrating hallucination risks reduced to <0.5% with 95% confidence. The released software application enables one-click review generation, enhancing research productivity and literature-recommendation efficiency while facilitating broader scientific explorations.
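The abstract's claim that hallucination risk is "reduced to <0.5% with 95% confidence" is the kind of statement that typically comes from an upper confidence bound on an error rate estimated from expert-checked samples. The paper's exact statistical procedure is not given in this abstract; the sketch below uses the classic "rule of three" (a common choice for zero observed errors), and the sample sizes are hypothetical, purely to illustrate how such a bound is computed.

```python
def rule_of_three_upper_bound(n_checked: int, n_errors: int = 0) -> float:
    """Approximate one-sided 95% upper confidence bound on an error rate,
    given n_errors errors found in n_checked independent checks.

    For n_errors == 0 this is the classic "rule of three": 3 / n.
    For small nonzero counts, a crude normal-approximation bound is used.
    """
    if n_checked <= 0:
        raise ValueError("n_checked must be positive")
    if n_errors == 0:
        return 3.0 / n_checked
    p = n_errors / n_checked
    # One-sided 95% normal-approximation upper bound (z = 1.645).
    return p + 1.645 * (p * (1 - p) / n_checked) ** 0.5

# If expert verification found zero hallucinated statements in 600
# checked statements (hypothetical number), the 95% upper bound on the
# hallucination rate is 3/600 = 0.5% -- matching the abstract's "<0.5%".
print(rule_of_three_upper_bound(600))  # 0.005
```

This shows why such claims require a substantial verification sample: to push the 95% upper bound below 0.5% with zero observed errors, at least 600 statements must be checked.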

Source journal: National Science Review
CiteScore: 24.10
Self-citation rate: 1.90%
Annual publications: 249
Review time: 13 weeks
About the journal: National Science Review (NSR; ISSN abbreviation: Natl. Sci. Rev.) is an English-language, peer-reviewed, multidisciplinary open-access scientific journal published by Oxford University Press under the auspices of the Chinese Academy of Sciences. According to Journal Citation Reports, its 2021 impact factor was 23.178. National Science Review publishes review articles and perspectives as well as original research in the form of brief communications and research articles.