Lin Ren, Yongbin Liu, Chunping Ouyang, Ying Yu, Shuda Zhou, Yidong He, Yaping Wan
{"title":"DyLas: A dynamic label alignment strategy for large-scale multi-label text classification","authors":"Lin Ren, Yongbin Liu, Chunping Ouyang, Ying Yu, Shuda Zhou, Yidong He, Yaping Wan","doi":"10.1016/j.inffus.2025.103081","DOIUrl":null,"url":null,"abstract":"<div><div>Large-scale multi-label Text Classification (LMTC) is an advanced facet of NLP that entails assigning multiple labels to text documents from an extensive label space, often comprising thousands to millions of possible categories. This classification task is pivotal across various domains, including e-commerce product tagging, news categorization, medical code assignment, and legal document analysis, where accurate multi-label predictions drive search efficiency, recommendation systems, and regulatory compliance. However, LMTC poses significant challenges, the dynamic nature of label sets, which traditional supervised learning approaches find difficult to address due to their reliance on annotated data. In light of this challenge, this work introduces a novel approach leveraging Large Language Models (LLMs) for dynamic label alignment in LMTC tasks, based on counterfactual analysis, called DyLas (<u>Dy</u>namic <u>L</u>abel <u>A</u>lignment <u>S</u>trategy). Through a multi-step strategy, we aim to mitigate the issues arising from dynamic label sets. We evaluate the performance of LMTC on the 8 LLMs by 4 datasets and apply DyLas to 3 closed-source and 3 open-weight LLMs. Compared to the single-step approach, our method, DyLas, achieves improvements in almost all metrics across the datasets. Our method can also work well in dynamic label set environments. 
This work not only demonstrates the potential of LLMs to address complex classification challenges, but is also, to the best of our knowledge, the first to address dynamic label set challenges in LMTC tasks with LLMs without requiring additional model training.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"120 ","pages":"Article 103081"},"PeriodicalIF":14.7000,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156625352500154X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Large-scale Multi-label Text Classification (LMTC) is an advanced facet of NLP that entails assigning multiple labels to text documents from an extensive label space, often comprising thousands to millions of possible categories. This classification task is pivotal across various domains, including e-commerce product tagging, news categorization, medical code assignment, and legal document analysis, where accurate multi-label predictions drive search efficiency, recommendation systems, and regulatory compliance. However, LMTC poses a significant challenge: the dynamic nature of label sets, which traditional supervised learning approaches struggle to handle due to their reliance on annotated data. In light of this challenge, this work introduces a novel approach leveraging Large Language Models (LLMs) for dynamic label alignment in LMTC tasks, based on counterfactual analysis, called DyLas (Dynamic Label Alignment Strategy). Through a multi-step strategy, we aim to mitigate the issues arising from dynamic label sets. We evaluate the LMTC performance of 8 LLMs on 4 datasets and apply DyLas to 3 closed-source and 3 open-weight LLMs. Compared to the single-step approach, our method, DyLas, achieves improvements in almost all metrics across the datasets. Our method also works well in dynamic label set environments. This work not only demonstrates the potential of LLMs to address complex classification challenges, but is also, to the best of our knowledge, the first to address dynamic label set challenges in LMTC tasks with LLMs without requiring additional model training.
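The abstract describes the alignment problem only at a high level: an LLM emits free-text labels, and these must be mapped onto whatever label set is currently in force, which may have changed since the prompt was written. The sketch below is a hypothetical, minimal illustration of that label-alignment step using plain string similarity; it is not the paper's DyLas method, which relies on LLMs and counterfactual analysis rather than this stdlib heuristic.

```python
from difflib import SequenceMatcher

def align_labels(predicted, label_set, threshold=0.6):
    """Map each free-text label emitted by a model onto the closest
    entry in the current (possibly updated) label set; predictions
    with no sufficiently similar label are dropped."""
    aligned = []
    for pred in predicted:
        best, best_score = None, 0.0
        for label in label_set:
            # Ratio in [0, 1]: 1.0 means identical (case-insensitive)
            score = SequenceMatcher(None, pred.lower(), label.lower()).ratio()
            if score > best_score:
                best, best_score = label, score
        if best is not None and best_score >= threshold:
            aligned.append(best)
    return aligned

# The label set can change between calls without any retraining:
labels_v1 = ["Sports", "Politics", "Technology"]
labels_v2 = ["Sports News", "Political Analysis", "Tech & Gadgets", "Health"]
preds = ["sports", "technology news"]
print(align_labels(preds, labels_v1))  # → ['Sports', 'Technology']
print(align_labels(preds, labels_v2))
```

The point of the sketch is structural: because alignment happens at inference time against whichever label set is supplied, no annotated data or model retraining is needed when labels are added, renamed, or removed, which is the property the paper targets.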
Journal description:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, and multi-process information fusion, fostering collaboration among the diverse disciplines that drive its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.