Do large language models understand patents? Enhancing patent classification through AI-generated summaries

IF 2.2 | Q2 | Information Science & Library Science
Naoya Yoshikawa, Ralf Krestel
{"title":"Do large language models understand patents? Enhancing patent classification through AI-generated summaries","authors":"Naoya Yoshikawa ,&nbsp;Ralf Krestel","doi":"10.1016/j.wpi.2025.102353","DOIUrl":null,"url":null,"abstract":"<div><div>Patent classification plays a crucial role in intellectual property management, but remains a challenging task due to the complexity of patent documents. This study explores a novel approach to enhance automatic patent classification by leveraging summaries generated by large language models (LLMs). Our approach involves using the GPT-3.5-turbo model to create concise summaries from different sections of patent texts, which are then used to fine-tune the RoBERTa and XLM-RoBERTa models for classification tasks. We conducted experiments on English and Japanese patent documents using two datasets: the well-established USPTO-70k and the newly developed JPO-70k, that we specifically created for this study.</div><div>Our findings show that models trained on AI-generated summaries – particularly those derived from patent claims or detailed descriptions – outperform models trained on original abstracts in both subclass-level multi-label classification and subgroup-level single-label classification. In particular, using detailed description summaries improved the micro-average F1 score for subclass-level classification by 2.9 points on the USPTO-70k and 3.0 points on the JPO-70k, compared to using original abstracts.</div><div>These results indicate that LLM-generated summaries effectively capture information relevant to patent classification from various sections of patent texts, offering a promising approach to enhance the accuracy and efficiency of patent classification across different languages.</div></div>","PeriodicalId":51794,"journal":{"name":"World Patent Information","volume":"81 ","pages":"Article 102353"},"PeriodicalIF":2.2000,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Patent Information","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0172219025000201","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

Patent classification plays a crucial role in intellectual property management but remains a challenging task due to the complexity of patent documents. This study explores a novel approach to enhancing automatic patent classification by leveraging summaries generated by large language models (LLMs). Our approach uses the GPT-3.5-turbo model to create concise summaries from different sections of patent texts, which are then used to fine-tune RoBERTa and XLM-RoBERTa models for classification tasks. We conducted experiments on English and Japanese patent documents using two datasets: the well-established USPTO-70k and the newly developed JPO-70k, which we created specifically for this study.
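To make the approach concrete, the sketch below shows the two stages in miniature: GPT-3.5-turbo summarizes one patent section through the OpenAI chat API, and a RoBERTa classifier is fine-tuned on such summaries against multi-hot subclass labels. The prompt wording, the toy claims text, the three-label target, and the hyperparameters are illustrative assumptions rather than the authors' exact setup; for the Japanese JPO-70k setting, xlm-roberta-base would stand in for roberta-base.

```python
# A minimal sketch of the two-stage pipeline, under these assumptions: the
# prompt wording, the three-label toy target, and the hyperparameters are
# illustrative and not the authors' exact configuration.
import torch
from openai import OpenAI
from transformers import AutoModelForSequenceClassification, AutoTokenizer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_section(section_text: str) -> str:
    """Stage 1: ask GPT-3.5-turbo for a concise summary of one patent section."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the following patent text in a few sentences."},
            {"role": "user", "content": section_text},
        ],
    )
    return response.choices[0].message.content


# Stage 2: fine-tune RoBERTa on the summary with multi-hot subclass labels.
# One toy claims section and three hypothetical subclasses are used here.
summary = summarize_section("1. A rechargeable battery comprising an electrode ...")
labels = torch.tensor([[1.0, 0.0, 1.0]])  # float multi-hot targets for BCE loss

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=labels.shape[1],
    problem_type="multi_label_classification",  # switches to BCEWithLogitsLoss
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
inputs = tokenizer(summary, truncation=True, max_length=256, return_tensors="pt")

model.train()
for _ in range(3):  # a few gradient steps on the single toy example
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```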
Our findings show that models trained on AI-generated summaries – particularly those derived from patent claims or detailed descriptions – outperform models trained on original abstracts in both subclass-level multi-label classification and subgroup-level single-label classification. In particular, using detailed description summaries improved the micro-average F1 score for subclass-level classification by 2.9 points on the USPTO-70k and 3.0 points on the JPO-70k, compared to using original abstracts.
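For context, the micro-averaged F1 used here pools true positives, false positives, and false negatives across all documents and subclass labels before computing precision and recall, so frequent subclasses carry more weight than rare ones. A minimal sketch with invented label matrices (not values from the paper):

```python
# Micro-averaged F1 for multi-label predictions: counts are pooled over all
# labels and documents before precision/recall are computed.
# The matrices below are invented toy values, not results from the paper.
import numpy as np
from sklearn.metrics import f1_score

# rows = patent documents, columns = subclass labels (multi-hot)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])

# TP=3, FP=0, FN=2  ->  precision=1.00, recall=0.60, micro-F1=0.75
print(f1_score(y_true, y_pred, average="micro"))
```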
These results indicate that LLM-generated summaries effectively capture information relevant to patent classification from various sections of patent texts, offering a promising approach to enhance the accuracy and efficiency of patent classification across different languages.
Source journal: World Patent Information (Information Science & Library Science)
CiteScore: 3.50 | Self-citation rate: 18.50% | Articles published: 40
Journal description: The aim of World Patent Information is to provide a worldwide forum for the exchange of information between people working professionally in the field of Industrial Property information and documentation and to promote the widest possible use of the associated literature. Regular features include: papers concerned with all aspects of Industrial Property information and documentation; new regulations pertinent to Industrial Property information and documentation; short reports on relevant meetings and conferences; bibliographies, together with book and literature reviews.