{"title":"Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution","authors":"Milad Alshomary, Narutatsu Ri, Marianna Apidianaki, Ajay Patel, Smaranda Muresan, Kathleen McKeown","doi":"arxiv-2409.07072","DOIUrl":null,"url":null,"abstract":"Recent state-of-the-art authorship attribution methods learn authorship\nrepresentations of texts in a latent, non-interpretable space, hindering their\nusability in real-world applications. Our work proposes a novel approach to\ninterpreting these learned embeddings by identifying representative points in\nthe latent space and utilizing LLMs to generate informative natural language\ndescriptions of the writing style of each point. We evaluate the alignment of\nour interpretable space with the latent one and find that it achieves the best\nprediction agreement compared to other baselines. Additionally, we conduct a\nhuman evaluation to assess the quality of these style descriptions, validating\ntheir utility as explanations for the latent space. Finally, we investigate\nwhether human performance on the challenging AA task improves when aided by our\nsystem's explanations, finding an average improvement of around +20% in\naccuracy.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recent state-of-the-art authorship attribution (AA) methods learn authorship representations of texts in a latent, non-interpretable space, which hinders their usability in real-world applications. We propose a novel approach to interpreting these learned embeddings: we identify representative points in the latent space and use LLMs to generate informative natural language descriptions of the writing style each point captures. We evaluate how well our interpretable space aligns with the latent one and find that it achieves the highest prediction agreement among the compared baselines. We also conduct a human evaluation of the quality of these style descriptions, validating their utility as explanations of the latent space. Finally, we investigate whether human performance on the challenging AA task improves when aided by our system's explanations, and find an average accuracy improvement of around +20%.
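The abstract only outlines the pipeline, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes that representative points are obtained by k-means clustering over precomputed authorship embeddings, and that a style description is produced by prompting an LLM with the documents closest to each cluster center; the variable names (embeddings, texts, k) and the clustering choice are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): pick representative points in a latent
# authorship space via clustering, then build LLM prompts asking for a natural
# language description of the writing style around each point.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))        # stand-in for learned authorship embeddings
texts = [f"document {i}" for i in range(1000)]  # stand-in corpus, aligned with the embeddings

# 1) Identify representative points in the latent space (assumed: k-means centroids).
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# 2) For each representative point, gather the nearest documents and draft a prompt
#    that asks an LLM for an informative description of their shared writing style.
prompts = []
for c in range(k):
    dists = np.linalg.norm(embeddings - km.cluster_centers_[c], axis=1)
    nearest = np.argsort(dists)[:5]
    excerpt = "\n---\n".join(texts[i] for i in nearest)
    prompts.append(
        "Describe, in a few sentences, the writing style shared by the following texts:\n"
        + excerpt
    )

# Each prompts[c] would then be sent to an LLM of choice; the returned description
# serves as the interpretable label attached to representative point c.
```

A usage note: with real data, `embeddings` would come from the trained AA encoder and `texts` from its training corpus, and the resulting descriptions form the interpretable space whose agreement with the latent space the paper evaluates.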