{"title":"用于文体分析和可解释作者归属的潜空间解释法","authors":"Milad Alshomary, Narutatsu Ri, Marianna Apidianaki, Ajay Patel, Smaranda Muresan, Kathleen McKeown","doi":"arxiv-2409.07072","DOIUrl":null,"url":null,"abstract":"Recent state-of-the-art authorship attribution methods learn authorship\nrepresentations of texts in a latent, non-interpretable space, hindering their\nusability in real-world applications. Our work proposes a novel approach to\ninterpreting these learned embeddings by identifying representative points in\nthe latent space and utilizing LLMs to generate informative natural language\ndescriptions of the writing style of each point. We evaluate the alignment of\nour interpretable space with the latent one and find that it achieves the best\nprediction agreement compared to other baselines. Additionally, we conduct a\nhuman evaluation to assess the quality of these style descriptions, validating\ntheir utility as explanations for the latent space. Finally, we investigate\nwhether human performance on the challenging AA task improves when aided by our\nsystem's explanations, finding an average improvement of around +20% in\naccuracy.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution\",\"authors\":\"Milad Alshomary, Narutatsu Ri, Marianna Apidianaki, Ajay Patel, Smaranda Muresan, Kathleen McKeown\",\"doi\":\"arxiv-2409.07072\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent state-of-the-art authorship attribution methods learn authorship\\nrepresentations of texts in a latent, non-interpretable space, hindering their\\nusability in real-world applications. Our work proposes a novel approach to\\ninterpreting these learned embeddings by identifying representative points in\\nthe latent space and utilizing LLMs to generate informative natural language\\ndescriptions of the writing style of each point. We evaluate the alignment of\\nour interpretable space with the latent one and find that it achieves the best\\nprediction agreement compared to other baselines. Additionally, we conduct a\\nhuman evaluation to assess the quality of these style descriptions, validating\\ntheir utility as explanations for the latent space. 
Finally, we investigate\\nwhether human performance on the challenging AA task improves when aided by our\\nsystem's explanations, finding an average improvement of around +20% in\\naccuracy.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07072\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recent state-of-the-art authorship attribution (AA) methods learn authorship representations of texts in a latent, non-interpretable space, hindering their usability in real-world applications. Our work proposes a novel approach to interpreting these learned embeddings by identifying representative points in the latent space and utilizing LLMs to generate informative natural language descriptions of the writing style of each point. We evaluate the alignment of our interpretable space with the latent one and find that it achieves the best prediction agreement compared to other baselines. Additionally, we conduct a human evaluation to assess the quality of these style descriptions, validating their utility as explanations for the latent space. Finally, we investigate whether human performance on the challenging AA task improves when aided by our system's explanations, finding an average improvement of around +20% in accuracy.
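
The abstract describes the pipeline only at a high level. As a concrete illustration, the following Python sketch shows one plausible reading of it: cluster the learned style embeddings, treat the cluster centroids as representative points, and build an LLM prompt from each centroid's nearest texts. The embeddings here are random stand-ins, and the cluster count, neighbor count, and prompt wording are illustrative assumptions rather than details taken from the paper.

```python
# A minimal sketch of the pipeline described in the abstract, assuming
# k-means centroids serve as the "representative points" in the latent
# space. The embeddings are random stand-ins; the cluster count, the
# number of neighbors, and the prompt wording are illustrative
# assumptions, not details taken from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
texts = [f"document {i}" for i in range(200)]    # placeholder corpus
embeddings = rng.normal(size=(200, 64))          # stand-in for learned style embeddings

# Step 1: identify representative points in the latent space.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embeddings)

# Step 2: for each representative point, gather its nearest texts and
# build an LLM prompt asking for a natural-language style description.
for c, center in enumerate(kmeans.cluster_centers_):
    dists = np.linalg.norm(embeddings - center, axis=1)
    nearest = [texts[i] for i in np.argsort(dists)[:3]]
    prompt = (
        "Describe, in a few sentences, the writing style these excerpts share:\n"
        + "\n".join(f"- {t}" for t in nearest)
    )
    # In practice the prompt would be sent to an LLM and the returned
    # description attached to the centroid; printing keeps this runnable.
    print(f"centroid {c}:\n{prompt}\n")
```

Under this reading, checking the "prediction agreement" the abstract reports would amount to measuring how often nearest-centroid assignments in the interpretable space reproduce the latent model's decisions, though the paper's exact protocol may differ.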