{"title":"通过视觉关系学习和视觉偏好学习的可视化推荐","authors":"Daomin Ji, Hui Luo, Z. Bao","doi":"10.1109/ICDE55515.2023.00145","DOIUrl":null,"url":null,"abstract":"Visualization recommendation (VisRec) is to automatically generate the most relevant visualization for a table of interest to a user. In this paper, we present a novel machine learning-based VisRec method, VisFormer, which solves VisRec in three stages: 1) Table representation learning, which is to learn accurate column-level representations for a table. To achieve it, we resort to Transformer, a powerful language model that can learn accurate word embeddings by modeling context. Specifically, we propose a hierarchical Transformer-based architecture to learn expressive column representations by capturing two types of context, intra-column context and cross-column context; 2) Visual Relation Learning, which is to capture column relations. To achieve it, we regard each visualization as a relation tuple with a special relation, visual relation, between the columns. Then for each visual relation, we use a neural network to evaluate the corresponding visualizations; 3) Visual Preference Learning, which is to extract visual preference features that can affect users’ decision from a visualization. To achieve so, we use a Convolution Neural Network to extract such features and explore how to use them to refine the recommendation results. We conduct experiments to compare with three state-of-the-art ML-based methods on a large real-world dataset, Plotly community feed. The experimental results show that compared with the most competitive baseline, the relative improvements of VisFormer on Recall@1, Recall@2, and Recall@3 are 8.8%, 20.6%, and 21.0%, respectively.","PeriodicalId":434744,"journal":{"name":"2023 IEEE 39th International Conference on Data Engineering (ICDE)","volume":"174 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Visualization Recommendation Through Visual Relation Learning and Visual Preference Learning\",\"authors\":\"Daomin Ji, Hui Luo, Z. Bao\",\"doi\":\"10.1109/ICDE55515.2023.00145\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Visualization recommendation (VisRec) is to automatically generate the most relevant visualization for a table of interest to a user. In this paper, we present a novel machine learning-based VisRec method, VisFormer, which solves VisRec in three stages: 1) Table representation learning, which is to learn accurate column-level representations for a table. To achieve it, we resort to Transformer, a powerful language model that can learn accurate word embeddings by modeling context. Specifically, we propose a hierarchical Transformer-based architecture to learn expressive column representations by capturing two types of context, intra-column context and cross-column context; 2) Visual Relation Learning, which is to capture column relations. To achieve it, we regard each visualization as a relation tuple with a special relation, visual relation, between the columns. Then for each visual relation, we use a neural network to evaluate the corresponding visualizations; 3) Visual Preference Learning, which is to extract visual preference features that can affect users’ decision from a visualization. To achieve so, we use a Convolution Neural Network to extract such features and explore how to use them to refine the recommendation results. 
We conduct experiments to compare with three state-of-the-art ML-based methods on a large real-world dataset, Plotly community feed. The experimental results show that compared with the most competitive baseline, the relative improvements of VisFormer on Recall@1, Recall@2, and Recall@3 are 8.8%, 20.6%, and 21.0%, respectively.\",\"PeriodicalId\":434744,\"journal\":{\"name\":\"2023 IEEE 39th International Conference on Data Engineering (ICDE)\",\"volume\":\"174 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 39th International Conference on Data Engineering (ICDE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDE55515.2023.00145\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 39th International Conference on Data Engineering (ICDE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDE55515.2023.00145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Visualization Recommendation Through Visual Relation Learning and Visual Preference Learning
Visualization recommendation (VisRec) aims to automatically generate the most relevant visualization for a table of interest to a user. In this paper, we present VisFormer, a novel machine learning-based VisRec method that solves VisRec in three stages. 1) Table representation learning, which learns accurate column-level representations of a table. To achieve this, we resort to the Transformer, a powerful language model that learns accurate word embeddings by modeling context. Specifically, we propose a hierarchical Transformer-based architecture that learns expressive column representations by capturing two types of context: intra-column context and cross-column context. 2) Visual relation learning, which captures column relations. To achieve this, we regard each visualization as a relation tuple with a special relation, the visual relation, between its columns. Then, for each visual relation, we use a neural network to evaluate the corresponding visualizations. 3) Visual preference learning, which extracts from a visualization the visual preference features that can affect users' decisions. To achieve this, we use a Convolutional Neural Network to extract such features and explore how to use them to refine the recommendation results. We conduct experiments on a large real-world dataset, the Plotly community feed, comparing VisFormer with three state-of-the-art ML-based methods. The experimental results show that, compared with the most competitive baseline, the relative improvements of VisFormer on Recall@1, Recall@2, and Recall@3 are 8.8%, 20.6%, and 21.0%, respectively.
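To make the stage-1 idea more concrete, the snippet below is a minimal, hypothetical PyTorch sketch of a hierarchical Transformer column encoder that first contextualizes tokens within each column (intra-column context) and then lets the pooled column vectors attend to each other (cross-column context). The class name, tokenization scheme, pooling choice, and all dimensions are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of a hierarchical column encoder; not the paper's implementation.
import torch
import torch.nn as nn

class HierarchicalColumnEncoder(nn.Module):
    def __init__(self, vocab_size=30522, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        intra_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        cross_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Intra-column context: attention over the tokens of a single column.
        self.intra_column = nn.TransformerEncoder(intra_layer, num_layers)
        # Cross-column context: attention over the pooled embeddings of all columns.
        self.cross_column = nn.TransformerEncoder(cross_layer, num_layers)

    def forward(self, token_ids):
        # token_ids: (num_columns, num_tokens) token ids from each column's header/cells
        x = self.embed(token_ids)            # (cols, tokens, d_model)
        x = self.intra_column(x)             # contextualize tokens within each column
        col_repr = x.mean(dim=1)             # pool to one vector per column
        col_repr = self.cross_column(col_repr.unsqueeze(0)).squeeze(0)  # (cols, d_model)
        return col_repr

# Toy usage: a table with 3 columns, each tokenized to 16 token ids.
encoder = HierarchicalColumnEncoder()
cols = encoder(torch.randint(0, 30522, (3, 16)))
print(cols.shape)  # torch.Size([3, 128])
```

The resulting per-column vectors would be the inputs that the later stages (visual relation scoring and preference-based refinement) consume; how those stages are wired is described only at the level of the abstract here.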