{"title":"通过提示进行非对称短文聚类","authors":"Zhi Wang, Yi Zhu, Yun Li, Jipeng Qiang, Yunhao Yuan, Chaowei Zhang","doi":"10.1007/s00354-024-00244-7","DOIUrl":null,"url":null,"abstract":"<p>Short-text clustering, which has attracted much attention with the rapid development of social media in recent decades, is a great challenge due to the feature sparsity, high ambiguity, and massive quantity. Recently, pre-trained language models (PLMs)-based methods have achieved fairly good results on this task. However, two main problems still hang in the air: (1) the significant gap of objective forms in pretraining and fine-tuning, which restricts taking full advantage of knowledge in PLMs. (2) Most existing methods require a post-processing operation for clustering label learning, potentially leading to label estimation errors for different data distributions. To address these problems, in this paper, we propose an Asymmetric Short-Text Clustering via Prompt (short for ASTCP), the features learned with our ASTCP are denser and constricted for clustering. Specifically, a subset text of the corpus is first selected by an asymmetric prompt-tuning network, which aims to obtain predicted label as a clustering center. Then, by the propagation of predicted-label information, a fine-tuned model is designed for representation learning. Thus, a clustering module, such as K-means, is built to directly output clustering labels on top of these representations. Extensive experiments conducted on three datasets have demonstrated that our ASTCP can significantly and consistently outperform other SOTA clustering methods. 
The source code is available at https://github.com/zhuyi_yzu/ASTCP.</p>","PeriodicalId":54726,"journal":{"name":"New Generation Computing","volume":"34 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Asymmetric Short-Text Clustering via Prompt\",\"authors\":\"Zhi Wang, Yi Zhu, Yun Li, Jipeng Qiang, Yunhao Yuan, Chaowei Zhang\",\"doi\":\"10.1007/s00354-024-00244-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Short-text clustering, which has attracted much attention with the rapid development of social media in recent decades, is a great challenge due to the feature sparsity, high ambiguity, and massive quantity. Recently, pre-trained language models (PLMs)-based methods have achieved fairly good results on this task. However, two main problems still hang in the air: (1) the significant gap of objective forms in pretraining and fine-tuning, which restricts taking full advantage of knowledge in PLMs. (2) Most existing methods require a post-processing operation for clustering label learning, potentially leading to label estimation errors for different data distributions. To address these problems, in this paper, we propose an Asymmetric Short-Text Clustering via Prompt (short for ASTCP), the features learned with our ASTCP are denser and constricted for clustering. Specifically, a subset text of the corpus is first selected by an asymmetric prompt-tuning network, which aims to obtain predicted label as a clustering center. Then, by the propagation of predicted-label information, a fine-tuned model is designed for representation learning. Thus, a clustering module, such as K-means, is built to directly output clustering labels on top of these representations. 
Extensive experiments conducted on three datasets have demonstrated that our ASTCP can significantly and consistently outperform other SOTA clustering methods. The source code is available at https://github.com/zhuyi_yzu/ASTCP.</p>\",\"PeriodicalId\":54726,\"journal\":{\"name\":\"New Generation Computing\",\"volume\":\"34 1\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-02-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"New Generation Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s00354-024-00244-7\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"New Generation Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00354-024-00244-7","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Abstract

Short-text clustering has attracted much attention with the rapid development of social media in recent decades, yet it remains a great challenge due to feature sparsity, high ambiguity, and the massive quantity of texts. Recently, methods based on pre-trained language models (PLMs) have achieved fairly good results on this task. However, two main problems remain: (1) the significant gap between the objective forms of pretraining and fine-tuning, which restricts taking full advantage of the knowledge in PLMs; and (2) most existing methods require a post-processing operation for clustering-label learning, potentially leading to label estimation errors under different data distributions. To address these problems, we propose Asymmetric Short-Text Clustering via Prompt (ASTCP); the features learned with ASTCP are denser and more compact, which benefits clustering. Specifically, a subset of the corpus texts is first selected by an asymmetric prompt-tuning network, which aims to obtain predicted labels that serve as clustering centers. Then, by propagating the predicted-label information, a fine-tuned model is designed for representation learning. Finally, a clustering module such as K-means is built on top of these representations to directly output clustering labels. Extensive experiments on three datasets demonstrate that ASTCP significantly and consistently outperforms other SOTA clustering methods. The source code is available at https://github.com/zhuyi_yzu/ASTCP.
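The final step the abstract describes is running K-means directly on the learned representations. A minimal sketch of that step, with synthetic vectors standing in for ASTCP's fine-tuned features (the actual encoder and prompt-tuning network are not reproduced here):

```python
import numpy as np

# Minimal K-means sketch (not the authors' code): cluster labels are
# produced directly from feature vectors, as in ASTCP's final module.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# synthetic stand-ins for fine-tuned short-text embeddings: two blobs
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(loc=c, size=(50, 16)) for c in (-3.0, 3.0)])
labels = kmeans(X, k=2)
print(labels.shape)  # one cluster label per text: (100,)
```

In the paper's pipeline the gain comes from the representations themselves being denser and more compact, so that even this plain K-means step yields good cluster assignments.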
Journal Introduction:
The journal is specially intended to support the development of new computational and cognitive paradigms stemming from the cross-fertilization of various research fields. These fields include, but are not limited to, programming (logic, constraint, functional, object-oriented), distributed/parallel computing, knowledge-based systems, agent-oriented systems, and cognitive aspects of human embodied knowledge. It also encourages theoretical and/or practical papers concerning all types of learning, knowledge discovery, evolutionary mechanisms, human cognition and learning, and emergent systems that can lead to key technologies enabling us to build more complex and intelligent systems. The editorial board hopes that New Generation Computing will work as a catalyst among active researchers with broad interests by ensuring a smooth publication process.