{"title":"Human-interpretable clustering of short text using large language models.","authors":"Justin K Miller, Tristram J Alexander","doi":"10.1098/rsos.241692","DOIUrl":null,"url":null,"abstract":"<p><p>Clustering short text is a difficult problem, owing to the low word co-occurrence between short text documents. This work shows that large language models (LLMs) can overcome the limitations of traditional clustering approaches by generating embeddings that capture the semantic nuances of short text. In this study, clusters are found in the embedding space using Gaussian mixture modelling. The resulting clusters are found to be more distinctive and more human-interpretable than clusters produced using the popular methods of doc2vec and latent Dirichlet allocation. The success of the clustering approach is quantified using human reviewers and through the use of a generative LLM. The generative LLM shows good agreement with the human reviewers and is suggested as a means to bridge the 'validation gap' which often exists between cluster production and cluster interpretation. The comparison between LLM coding and human coding reveals intrinsic biases in each, challenging the conventional reliance on human coding as the definitive standard for cluster validation.</p>","PeriodicalId":21525,"journal":{"name":"Royal Society Open Science","volume":"12 1","pages":"241692"},"PeriodicalIF":2.9000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11750404/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Royal Society Open Science","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1098/rsos.241692","RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
Clustering short text is a difficult problem, owing to the low word co-occurrence between short text documents. This work shows that large language models (LLMs) can overcome the limitations of traditional clustering approaches by generating embeddings that capture the semantic nuances of short text. In this study, clusters are found in the embedding space using Gaussian mixture modelling. The resulting clusters are found to be more distinctive and more human-interpretable than clusters produced using the popular methods of doc2vec and latent Dirichlet allocation. The success of the clustering approach is quantified using human reviewers and through the use of a generative LLM. The generative LLM shows good agreement with the human reviewers and is suggested as a means to bridge the 'validation gap' which often exists between cluster production and cluster interpretation. The comparison between LLM coding and human coding reveals intrinsic biases in each, challenging the conventional reliance on human coding as the definitive standard for cluster validation.
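The sketch below illustrates the kind of pipeline the abstract describes: embed short texts with a pre-trained language model and fit a Gaussian mixture model in the embedding space. The choice of encoder (sentence-transformers' all-MiniLM-L6-v2), the number of mixture components, and the example texts are assumptions for illustration; the paper's actual embedding model and settings are not given in this abstract.

```python
# Minimal sketch: LLM embeddings + Gaussian mixture clustering of short text.
# Assumes the sentence-transformers and scikit-learn libraries; the specific
# encoder and hyperparameters are hypothetical, not taken from the paper.
from sentence_transformers import SentenceTransformer
from sklearn.mixture import GaussianMixture

short_texts = [
    "cheap flights to sydney",
    "last-minute hotel deals",
    "symptoms of seasonal flu",
    "home remedies for a sore throat",
]

# Embed each short text into a dense semantic vector.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice
embeddings = encoder.encode(short_texts)

# Fit a Gaussian mixture model in the embedding space and assign each text to a cluster.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(embeddings)

for text, label in zip(short_texts, labels):
    print(label, text)
```

Cluster interpretation and validation (by human reviewers or a generative LLM, as the abstract discusses) would be a separate step applied to the resulting groupings.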
Journal description:
Royal Society Open Science is an open journal publishing high-quality original research across the entire range of science and mathematics on the basis of objective peer review.
It allows the Society to publish all the high-quality work it receives without the usual restrictions on scope, length or impact.