Evaluating application of large language models to biomedical patent claim generation

Impact Factor: 2.2 | Q2, Information Science & Library Science
Feng-Chi Chen, Chia-Lin Pan, AIPlux Development Team
{"title":"Evaluating application of large language models to biomedical patent claim generation","authors":"Feng-Chi Chen ,&nbsp;Chia-Lin Pan ,&nbsp;AIPlux Development Team","doi":"10.1016/j.wpi.2025.102339","DOIUrl":null,"url":null,"abstract":"<div><div>Automatic patent claim generation is an emerging application of large language models (LLMs). However, the performances of general-purpose LLMs in this regard remain unclear. Here we empirically evaluate the effectiveness of four different LLMs (two from the LLaMA-2 family and two from the Mistral family) in generating biomedical patent claims. This allows comparisons between LLMs with different sizes and architectures. We show that these open-source LLMs fail to produce correctly styled patent claims despite their reported strengths in natural language tasks. Nevertheless, given selected training data and adequate fine-tuning, even relatively small LLMs can yield high-quality, correctly styled patent claims. Notably, one limitation of LLMs is that they lack the creativity and insights of human drafters. For such a professional task as claim drafting, LLMs should be considered as a digital assistant that requires human oversight.</div></div>","PeriodicalId":51794,"journal":{"name":"World Patent Information","volume":"80 ","pages":"Article 102339"},"PeriodicalIF":2.2000,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Patent Information","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0172219025000067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

Automatic patent claim generation is an emerging application of large language models (LLMs). However, the performance of general-purpose LLMs on this task remains unclear. Here we empirically evaluate the effectiveness of four different LLMs (two from the LLaMA-2 family and two from the Mistral family) in generating biomedical patent claims, allowing comparisons between LLMs of different sizes and architectures. We show that these open-source LLMs fail to produce correctly styled patent claims despite their reported strengths in natural language tasks. Nevertheless, given selected training data and adequate fine-tuning, even relatively small LLMs can yield high-quality, correctly styled patent claims. Notably, one limitation of LLMs is that they lack the creativity and insight of human drafters. For a professional task such as claim drafting, LLMs should therefore be regarded as digital assistants that require human oversight.
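The paper does not disclose its fine-tuning pipeline. As a rough illustration of the kind of setup the abstract describes (adapting a small open-source checkpoint to abstract-to-claims pairs), the sketch below fine-tunes an assumed Mistral-7B base model with LoRA adapters using Hugging Face transformers, peft, and datasets. The dataset file name, prompt format, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of supervised fine-tuning an open LLM for patent claim generation.
# Model name, dataset path, prompt format, and hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "mistralai/Mistral-7B-v0.1"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA keeps the number of trainable parameters small enough for a single GPU.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Hypothetical JSONL file of {"abstract": ..., "claims": ...} pairs
# mined from granted biomedical patents.
data = load_dataset("json", data_files="biomed_patents.jsonl")["train"]

def to_features(example):
    # Simple instruction-style prompt: abstract in, numbered claim set out.
    text = (f"### Abstract:\n{example['abstract']}\n\n"
            f"### Claims:\n{example['claims']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="claims-lora", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, claims would be generated by prompting the adapted model with the "### Abstract:" prefix and sampling the continuation; the paper's finding is that such outputs still need review by a human drafter.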
Source journal

World Patent Information
CiteScore: 3.50
Self-citation rate: 18.50%
Articles published per year: 40
Journal description

The aim of World Patent Information is to provide a worldwide forum for the exchange of information between people working professionally in the field of Industrial Property information and documentation and to promote the widest possible use of the associated literature. Regular features include: papers concerned with all aspects of Industrial Property information and documentation; new regulations pertinent to Industrial Property information and documentation; short reports on relevant meetings and conferences; bibliographies, together with book and literature reviews.