Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing

IF 1.8 | CAS Tier 4 (Computer Science) | JCR Q3: COMPUTER SCIENCE, SOFTWARE ENGINEERING
Haoyu Li, Junpeng Wang, Yan-luan Zheng, Liang Wang, Wei Zhang, Han-Wei Shen
{"title":"利用潜在空间正则化和交互式语义探测压缩和解释单词嵌入","authors":"Haoyu Li, Junpeng Wang, Yan-luan Zheng, Liang Wang, Wei Zhang, Han-Wei Shen","doi":"10.1177/14738716221130338","DOIUrl":null,"url":null,"abstract":"Word embedding, a high-dimensional (HD) numerical representation of words generated by machine learning models, has been used for different natural language processing tasks, for example, translation between two languages. Recently, there has been an increasing trend of transforming the HD embeddings into a latent space (e.g. via autoencoders) for further tasks, exploiting various merits the latent representations could bring. To preserve the embeddings’ quality, these works often map the embeddings into an even higher-dimensional latent space, making the already complicated embeddings even less interpretable and consuming more storage space. In this work, we borrow the idea of β VAE to regularize the HD latent space. Our regularization implicitly condenses information from the HD latent space into a much lower-dimensional space, thus compressing the embeddings. We also show that each dimension of our regularized latent space is more semantically salient, and validate our assertion by interactively probing the encoding-level of user-proposed semantics in the dimensions. To the end, we design a visual analytics system to monitor the regularization process, explore the HD latent space, and interpret latent dimensions’ semantics. We validate the effectiveness of our embedding regularization and interpretation approach through both quantitative and qualitative evaluations.","PeriodicalId":50360,"journal":{"name":"Information Visualization","volume":"22 1","pages":"52 - 68"},"PeriodicalIF":1.8000,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Compressing and interpreting word embeddings with latent space regularization and interactive semantics probing\",\"authors\":\"Haoyu Li, Junpeng Wang, Yan-luan Zheng, Liang Wang, Wei Zhang, Han-Wei Shen\",\"doi\":\"10.1177/14738716221130338\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Word embedding, a high-dimensional (HD) numerical representation of words generated by machine learning models, has been used for different natural language processing tasks, for example, translation between two languages. Recently, there has been an increasing trend of transforming the HD embeddings into a latent space (e.g. via autoencoders) for further tasks, exploiting various merits the latent representations could bring. To preserve the embeddings’ quality, these works often map the embeddings into an even higher-dimensional latent space, making the already complicated embeddings even less interpretable and consuming more storage space. In this work, we borrow the idea of β VAE to regularize the HD latent space. Our regularization implicitly condenses information from the HD latent space into a much lower-dimensional space, thus compressing the embeddings. We also show that each dimension of our regularized latent space is more semantically salient, and validate our assertion by interactively probing the encoding-level of user-proposed semantics in the dimensions. To the end, we design a visual analytics system to monitor the regularization process, explore the HD latent space, and interpret latent dimensions’ semantics. 
We validate the effectiveness of our embedding regularization and interpretation approach through both quantitative and qualitative evaluations.\",\"PeriodicalId\":50360,\"journal\":{\"name\":\"Information Visualization\",\"volume\":\"22 1\",\"pages\":\"52 - 68\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2022-10-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Visualization\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1177/14738716221130338\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Visualization","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/14738716221130338","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 2

Abstract

Word embedding, a high-dimensional (HD) numerical representation of words generated by machine learning models, has been used for different natural language processing tasks, for example, translation between two languages. Recently, there has been an increasing trend of transforming the HD embeddings into a latent space (e.g. via autoencoders) for further tasks, exploiting the various merits the latent representations could bring. To preserve the embeddings' quality, these works often map the embeddings into an even higher-dimensional latent space, making the already complicated embeddings even less interpretable and consuming more storage space. In this work, we borrow the idea of β-VAE to regularize the HD latent space. Our regularization implicitly condenses information from the HD latent space into a much lower-dimensional space, thus compressing the embeddings. We also show that each dimension of our regularized latent space is more semantically salient, and validate our assertion by interactively probing the encoding level of user-proposed semantics in the dimensions. To this end, we design a visual analytics system to monitor the regularization process, explore the HD latent space, and interpret latent dimensions' semantics. We validate the effectiveness of our embedding regularization and interpretation approach through both quantitative and qualitative evaluations.
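To make the two ideas in the abstract concrete, below is a minimal sketch (in PyTorch, which the abstract does not specify) of a β-VAE-style autoencoder over word embeddings, followed by a simple probe of how strongly a user-proposed semantic is encoded in each latent dimension. The layer sizes, the latent width of 32, β = 4, and the random placeholder tensors are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a beta-VAE-style autoencoder that
# compresses 300-d word embeddings; the KL term, scaled by beta > 1, pressures
# the latent code so information concentrates in a few salient dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, embed_dim=300, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, recon, mu, logvar, beta=4.0):
    recon_loss = F.mse_loss(recon, x, reduction="sum")            # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return recon_loss + beta * kl

# One training step on a batch of (placeholder) pre-trained word embeddings.
model = BetaVAE()
embeddings = torch.randn(1024, 300)  # stand-in for real GloVe/word2vec vectors
recon, mu, logvar = model(embeddings)
loss = beta_vae_loss(embeddings, recon, mu, logvar)
loss.backward()

# Illustrative semantic probing: compare the mean latent codes of two
# user-chosen word groups to see which dimensions encode the proposed semantic.
group_a = torch.randn(50, 300)  # placeholder embeddings for one word group
group_b = torch.randn(50, 300)  # placeholder embeddings for the contrasting group
with torch.no_grad():
    _, mu_a, _ = model(group_a)
    _, mu_b, _ = model(group_b)
saliency = (mu_a.mean(0) - mu_b.mean(0)).abs()  # per-dimension encoding level
print(saliency.topk(5))  # latent dimensions most responsive to the probed semantic
```

In practice the placeholder tensors would be replaced by pre-trained word vectors, and the per-dimension saliency scores are the kind of signal an interactive probing interface, such as the visual analytics system described here, could surface to users.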
Source journal: Information Visualization (COMPUTER SCIENCE, SOFTWARE ENGINEERING)
CiteScore: 5.40
Self-citation rate: 0.00%
Annual publications: 16
Review time: >12 weeks
Journal description: Information Visualization is essential reading for researchers and practitioners of information visualization and is of interest to computer scientists and data analysts working on related specialisms. This journal is an international, peer-reviewed journal publishing articles on fundamental research and applications of information visualization. The journal acts as a dedicated forum for the theories, methodologies, techniques and evaluations of information visualization and its applications. The journal is a core vehicle for developing a generic research agenda for the field by identifying and developing the unique and significant aspects of information visualization. Emphasis is placed on interdisciplinary material and on the close connection between theory and practice. This journal is a member of the Committee on Publication Ethics (COPE).