AutoCaption: An Approach to Generate Natural Language Description from Visualization Automatically

Can Liu, Liwenhan Xie, Yun Han, Datong Wei, Xiaoru Yuan

2020 IEEE Pacific Visualization Symposium (PacificVis), June 2020. DOI: 10.1109/PacificVis48177.2020.1043
{"title":"自动标题:一种从可视化中自动生成自然语言描述的方法","authors":"Can Liu, Liwenhan Xie, Yun Han, Datong Wei, Xiaoru Yuan","doi":"10.1109/PacificVis48177.2020.1043","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a novel approach to generate captions for visualization charts automatically. In the proposed method, visual marks and visual channels, together with the associated text information in the original charts, are first extracted and identified with a multilayer perceptron classifier. Meanwhile, data information can also be retrieved by parsing visual marks with extracted mapping relationships. Then a 1-D convolutional residual network is employed to analyze the relationship between visual elements, and recognize significant features of the visualization charts, with both data and visual information as input. In the final step, the full description of the visual charts can be generated through a template-based approach. The generated captions can effectively cover the main visual features of the visual charts and support major feature types in commons charts. We further demonstrate the effectiveness of our approach through several cases.","PeriodicalId":322092,"journal":{"name":"2020 IEEE Pacific Visualization Symposium (PacificVis)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":"{\"title\":\"AutoCaption: An Approach to Generate Natural Language Description from Visualization Automatically\",\"authors\":\"Can Liu, Liwenhan Xie, Yun Han, Datong Wei, Xiaoru Yuan\",\"doi\":\"10.1109/PacificVis48177.2020.1043\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose a novel approach to generate captions for visualization charts automatically. In the proposed method, visual marks and visual channels, together with the associated text information in the original charts, are first extracted and identified with a multilayer perceptron classifier. Meanwhile, data information can also be retrieved by parsing visual marks with extracted mapping relationships. Then a 1-D convolutional residual network is employed to analyze the relationship between visual elements, and recognize significant features of the visualization charts, with both data and visual information as input. In the final step, the full description of the visual charts can be generated through a template-based approach. The generated captions can effectively cover the main visual features of the visual charts and support major feature types in commons charts. 
We further demonstrate the effectiveness of our approach through several cases.\",\"PeriodicalId\":322092,\"journal\":{\"name\":\"2020 IEEE Pacific Visualization Symposium (PacificVis)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"31\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Pacific Visualization Symposium (PacificVis)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PacificVis48177.2020.1043\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Pacific Visualization Symposium (PacificVis)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PacificVis48177.2020.1043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
AutoCaption: An Approach to Generate Natural Language Description from Visualization Automatically
In this paper, we propose a novel approach to automatically generate captions for visualization charts. In the proposed method, the visual marks and visual channels, together with the associated text information in the original chart, are first extracted and identified with a multilayer perceptron classifier. Meanwhile, the underlying data can be recovered by parsing the visual marks with the extracted mapping relationships. A 1-D convolutional residual network, taking both the data and the visual information as input, is then employed to analyze the relationships between visual elements and recognize significant features of the chart. In the final step, a full description of the chart is generated through a template-based approach. The generated captions effectively cover the main visual features of the charts and support the major feature types found in common chart types. We further demonstrate the effectiveness of our approach through several cases.
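To make the final stage of the pipeline concrete, the following Python sketch shows how a template-based generator of the kind described above might turn recognized chart features into sentences. The Feature record, the feature kinds, the slot names, and the template wording are illustrative assumptions; the abstract does not specify the authors' actual feature taxonomy or templates.

# Illustrative sketch only: a template-based caption generator in the spirit of
# the abstract's final step. All names below (Feature, TEMPLATES, the feature
# kinds and slot names) are assumptions, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Feature:
    kind: str  # e.g. "extremum", "trend", "comparison"
    slots: dict = field(default_factory=dict)  # values recovered from parsed marks/data

# One sentence template per recognized feature kind (hypothetical wording).
TEMPLATES = {
    "extremum":   "{series} reaches its {which} value, {value}, at {x}.",
    "trend":      "{series} shows a {direction} trend from {start} to {end}.",
    "comparison": "{a} is about {ratio} times as large as {b}.",
}

def generate_caption(chart_subject: str, features: list[Feature]) -> str:
    """Fill one template per recognized feature and join them into a caption."""
    sentences = [f"This chart shows {chart_subject}."]
    for feature in features:
        template = TEMPLATES.get(feature.kind)
        if template is not None:
            sentences.append(template.format(**feature.slots))
    return " ".join(sentences)

# Example: features that the earlier extraction and recognition stages
# might have produced for a simple line chart.
features = [
    Feature("extremum", {"series": "Sales", "which": "highest",
                         "value": "120", "x": "July"}),
    Feature("trend", {"series": "Sales", "direction": "rising",
                      "start": "January", "end": "July"}),
]
print(generate_caption("monthly sales in 2019", features))

A template-based stage like this keeps the wording controllable and the mapping from recognized features to text auditable, which is consistent with the abstract's claim that the generated captions cover the main visual features of the chart.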