Latest Articles in Visual Informatics

Color and Shape efficiency for outlier detection from automated to user evaluation
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-06-01 DOI: 10.1016/j.visinf.2022.03.001
Loann Giovannangeli, Romain Bourqui, Romain Giot, David Auber
{"title":"Color and Shape efficiency for outlier detection from automated to user evaluation","authors":"Loann Giovannangeli,&nbsp;Romain Bourqui,&nbsp;Romain Giot,&nbsp;David Auber","doi":"10.1016/j.visinf.2022.03.001","DOIUrl":"10.1016/j.visinf.2022.03.001","url":null,"abstract":"<div><p>The design of efficient representations is well established as a fruitful way to explore and analyze complex or large data. In these representations, data are encoded with various visual attributes depending on the needs of the representation itself. To make coherent design choices about visual attributes, the visual search field proposes guidelines based on the human brain’s perception of features. However, information visualization representations frequently need to depict more data than the amount these guidelines have been validated on. Since, the information visualization community has extended these guidelines to a wider parameter space.</p><p>This paper contributes to this theme by extending visual search theories to an information visualization context. We consider a visual search task where subjects are asked to find an unknown outlier in a grid of randomly laid out distractors. Stimuli are defined by color and shape features for the purpose of visually encoding categorical data. The experimental protocol is made of a parameters space reduction step (<em>i.e.</em>, sub-sampling) based on a machine learning model, and a user evaluation to validate hypotheses and measure capacity limits. The results show that the major difficulty factor is the number of visual attributes that are used to encode the outlier. When redundantly encoded, the display heterogeneity has no effect on the task. When encoded with one attribute, the difficulty depends on that attribute heterogeneity until its capacity limit (7 for color, 5 for shape) is reached. Finally, when encoded with two attributes simultaneously, performances drop drastically even with minor heterogeneity.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 25-40"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000146/pdfft?md5=3a4ee1c7cac8f90eeb5e72a02337dd27&pid=1-s2.0-S2468502X22000146-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124374144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
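To make the experimental setup above concrete, the sketch below generates one stimulus of the kind described in the abstract: a grid of randomly jittered distractors with a single outlier encoded by color, shape, or both. The grid size, palettes, and jitter are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch of an outlier-search stimulus: a grid of distractors with one
# outlier encoded by color, shape, or both. Grid size and palettes are assumptions.
import random
import matplotlib.pyplot as plt

COLORS = ["tab:blue", "tab:orange", "tab:green", "tab:red", "tab:purple"]
SHAPES = ["o", "s", "^", "D", "v"]  # matplotlib marker styles

def make_stimulus(n=8, encode=("color", "shape"), seed=0):
    rng = random.Random(seed)
    base_color, base_shape = rng.choice(COLORS), rng.choice(SHAPES)
    out_color = rng.choice([c for c in COLORS if c != base_color])
    out_shape = rng.choice([s for s in SHAPES if s != base_shape])
    outlier = (rng.randrange(n), rng.randrange(n))

    fig, ax = plt.subplots(figsize=(5, 5))
    for i in range(n):
        for j in range(n):
            is_outlier = (i, j) == outlier
            color = out_color if is_outlier and "color" in encode else base_color
            shape = out_shape if is_outlier and "shape" in encode else base_shape
            # Jitter positions so the layout looks random rather than a strict grid.
            ax.scatter(i + rng.uniform(-0.3, 0.3), j + rng.uniform(-0.3, 0.3),
                       color=color, marker=shape, s=120)
    ax.set_axis_off()
    return fig, outlier

if __name__ == "__main__":
    fig, pos = make_stimulus(encode=("color",))  # outlier differs by color only
    fig.savefig("stimulus.png")
    print("outlier at grid cell", pos)
```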
MDISN: Learning multiscale deformed implicit fields from single images
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-06-01 DOI: 10.1016/j.visinf.2022.03.003
Yujie Wang, Yixin Zhuang, Yunzhe Liu, Baoquan Chen
{"title":"MDISN: Learning multiscale deformed implicit fields from single images","authors":"Yujie Wang ,&nbsp;Yixin Zhuang ,&nbsp;Yunzhe Liu ,&nbsp;Baoquan Chen","doi":"10.1016/j.visinf.2022.03.003","DOIUrl":"10.1016/j.visinf.2022.03.003","url":null,"abstract":"<div><p>We present a multiscale deformed implicit surface network (MDISN) to reconstruct 3D objects from single images by adapting the implicit surface of the target object from coarse to fine to the input image. The basic idea is to optimize the implicit surface according to the change of consecutive feature maps from the input image. And with multi-resolution feature maps, the implicit field is refined progressively, such that lower resolutions outline the main object components, and higher resolutions reveal fine-grained geometric details. To better explore the changes in feature maps, we devise a simple field deformation module that receives two consecutive feature maps to refine the implicit field with finer geometric details. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method compared to state-of-the-art methods.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 41-49"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X2200016X/pdfft?md5=7a2c3ab7456139b67e5be7c06fdac2f5&pid=1-s2.0-S2468502X2200016X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122912047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
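The abstract above describes a field deformation module that consumes features from two consecutive resolutions and refines the implicit field. Below is a minimal PyTorch sketch of such a module; the layer sizes, input layout, and residual formulation are assumptions made for illustration, not the published MDISN architecture.

```python
# Hedged sketch of a field deformation module: an MLP that takes a query point,
# the current SDF estimate, and features sampled from a coarse and a fine feature
# map, and predicts a residual refinement. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FieldDeformation(nn.Module):
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1 + 2 * feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),  # predicted residual (delta of the field value)
        )

    def forward(self, points, sdf, feat_coarse, feat_fine):
        # points: (B, N, 3), sdf: (B, N, 1), features: (B, N, feat_dim) each
        x = torch.cat([points, sdf, feat_coarse, feat_fine], dim=-1)
        return sdf + self.mlp(x)  # refined implicit field value

if __name__ == "__main__":
    m = FieldDeformation(feat_dim=64)
    pts = torch.rand(2, 1024, 3)
    sdf = torch.zeros(2, 1024, 1)
    f_coarse, f_fine = torch.rand(2, 1024, 64), torch.rand(2, 1024, 64)
    print(m(pts, sdf, f_coarse, f_fine).shape)  # torch.Size([2, 1024, 1])
```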
A machine learning approach for predicting human shortest path task performance
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-06-01 DOI: 10.1016/j.visinf.2022.04.001
Shijun Cai, Seok-Hee Hong, Xiaobo Xia, Tongliang Liu, Weidong Huang
{"title":"A machine learning approach for predicting human shortest path task performance","authors":"Shijun Cai ,&nbsp;Seok-Hee Hong ,&nbsp;Xiaobo Xia ,&nbsp;Tongliang Liu ,&nbsp;Weidong Huang","doi":"10.1016/j.visinf.2022.04.001","DOIUrl":"10.1016/j.visinf.2022.04.001","url":null,"abstract":"<div><p>Finding a shortest path for a given pair of vertices in a graph drawing is one of the fundamental tasks for qualitative evaluation of graph drawings. In this paper, we present the first machine learning approach to predict human shortest path task performance, including accuracy, response time, and mental effort.</p><p>To predict the shortest path task performance, we utilize correlated quality metrics and the ground truth data from the shortest path experiments. Specifically, we introduce <em>path faithfulness metrics</em> and show strong correlations with the shortest path task performance. Moreover, to mitigate the problem of insufficient ground truth training data, we use the transfer learning method to pre-train our deep model, exploiting the correlated quality metrics.</p><p>Experimental results using the ground truth human shortest path experiment data show that our models can successfully predict the shortest path task performance. In particular, model MSP achieves an MSE (i.e., test mean square error) of 0.7243 (i.e., data range from −17.27 to 1.81) for prediction.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 50-61"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000183/pdfft?md5=8b220940e42fe9792587af3d422a3e28&pid=1-s2.0-S2468502X22000183-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115328161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
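The abstract above mentions path faithfulness metrics but does not define them. As an illustrative stand-in only, the sketch below computes one simple quality measure for a drawn shortest path with networkx: how close the drawn polyline is to the straight line between its endpoints. The metric, layout, and graph are all assumptions for demonstration.

```python
# Illustrative stand-in only: the paper's path faithfulness metrics are not given
# in the abstract. This computes the "straightness" of a drawn shortest path:
# straight-line endpoint distance divided by the drawn polyline length.
import math
import networkx as nx

def euclid(p, q):
    return math.dist(p, q)

def path_straightness(G, pos, source, target):
    """pos maps each vertex to its (x, y) position in the drawing."""
    path = nx.shortest_path(G, source, target)
    drawn_len = sum(euclid(pos[u], pos[v]) for u, v in zip(path, path[1:]))
    if drawn_len == 0:
        return 1.0
    return euclid(pos[source], pos[target]) / drawn_len  # 1.0 = perfectly straight

if __name__ == "__main__":
    G = nx.path_graph(4)                   # vertices 0-1-2-3 in a chain
    layout = nx.spring_layout(G, seed=42)  # one concrete drawing of the graph
    print(path_straightness(G, layout, 0, 3))
```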
Perspectives of visualization onboarding and guidance in VA
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2022.02.005
Christina Stoiber, Davide Ceneda, Markus Wagner, Victor Schetinger, Theresia Gschwandtner, Marc Streit, Silvia Miksch, Wolfgang Aigner
{"title":"Perspectives of visualization onboarding and guidance in VA","authors":"Christina Stoiber ,&nbsp;Davide Ceneda ,&nbsp;Markus Wagner ,&nbsp;Victor Schetinger ,&nbsp;Theresia Gschwandtner ,&nbsp;Marc Streit ,&nbsp;Silvia Miksch ,&nbsp;Wolfgang Aigner","doi":"10.1016/j.visinf.2022.02.005","DOIUrl":"10.1016/j.visinf.2022.02.005","url":null,"abstract":"<div><p>A typical problem in Visual Analytics (VA) is that users are highly trained experts in their application domains, but have mostly no experience in using VA systems. Thus, users often have difficulties interpreting and working with visual representations. To overcome these problems, user assistance can be incorporated into VA systems to guide experts through the analysis while closing their knowledge gaps. Different types of user assistance can be applied to extend the power of VA, enhance the user’s experience, and broaden the audience for VA. Although different approaches to visualization onboarding and guidance in VA already exist, there is a lack of research on how to design and integrate them in effective and efficient ways. Therefore, we aim at putting together the pieces of the mosaic to form a coherent whole. Based on the Knowledge-Assisted Visual Analytics model, we contribute a conceptual model of user assistance for VA by integrating the process of visualization onboarding and guidance as the two main approaches in this direction. As a result, we clarify and discuss the commonalities and differences between visualization onboarding and guidance, and discuss how they benefit from the integration of knowledge extraction and exploration. Finally, we discuss our descriptive model by applying it to VA tools integrating visualization onboarding and guidance, and showing how they should be utilized in different phases of the analysis in order to be effective and accepted by the user.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 68-83"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000134/pdfft?md5=97576331780f4f0a3f95026d4dff62bd&pid=1-s2.0-S2468502X22000134-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127378808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Computing for Chinese Cultural Heritage
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2021.12.006
Meng Li, Yun Wang, Ying-Qing Xu
{"title":"Computing for Chinese Cultural Heritage","authors":"Meng Li ,&nbsp;Yun Wang ,&nbsp;Ying-Qing Xu","doi":"10.1016/j.visinf.2021.12.006","DOIUrl":"10.1016/j.visinf.2021.12.006","url":null,"abstract":"<div><p>Implementing computational methods for preservation, inheritance, and promotion of Cultural Heritage (CH) has become a research trend across the world since the 1990s. In China, generations of scholars have dedicated themselves to studying the country’s rich CH resources; there are great potential and opportunities in the field of computational research on specific cultural artefacts or artforms. Based on previous works, this paper proposes a systematic framework for Chinese Cultural Heritage Computing that consists of three conceptual levels which are Chinese CH protection and development strategy, computing process, and computable cultural ecosystem. The computing process includes three modules: (1) data acquisition and processing, (2) digital modeling and database construction, and (3) data application and promotion. The modules demonstrate the computing approaches corresponding to different phases of Chinese CH protection and development, from digital preservation and inheritance to presentation and promotion. The computing results can become the basis for the generation of cultural genes and eventually the formation of computable cultural ecosystem Case studies on the Mogao caves in Dunhuang and the art of Guqin, recognized as world’s important tangible and intangible cultural heritage, are carried out to elaborate the computing process and methods within the framework. With continuous advances in data collection, processing, and display technologies, the framework can provide constructive reference for building up future research roadmaps in Chinese CH computing and related fields, for sustainable protection and development of Chinese CH in the digital age.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 1-13"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X21000644/pdfft?md5=2fe78f965cb3cdd3953c49170f0417be&pid=1-s2.0-S2468502X21000644-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132932782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Reconfiguration of the brain during aesthetic experience on Chinese calligraphy—Using brain complex networks
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2022.02.002
Rui Li, Xiaofei Jia, Changle Zhou, Junsong Zhang
{"title":"Reconfiguration of the brain during aesthetic experience on Chinese calligraphy—Using brain complex networks","authors":"Rui Li ,&nbsp;Xiaofei Jia ,&nbsp;Changle Zhou ,&nbsp;Junsong Zhang","doi":"10.1016/j.visinf.2022.02.002","DOIUrl":"10.1016/j.visinf.2022.02.002","url":null,"abstract":"<div><p>Chinese calligraphy, as a well-known performing art form, occupies an important role in the intangible cultural heritage of China. Previous studies focused on the psychophysiological benefits of Chinese calligraphy. Little attention has been paid to its aesthetic attributes and effectiveness on the cognitive process. To complement our understanding of Chinese calligraphy, this study investigated the aesthetic experience of Chinese cursive-style calligraphy using brain functional network analysis. Subjects stayed on the coach and rested for several minutes. Then, they were requested to appreciate artwork of cursive-style calligraphy. Results showed that (1) changes in functional connectivity between fronto-occipital, fronto-parietal, bilateral parietal, and central–occipital areas are prominent for calligraphy condition, (2) brain functional network showed an increased normalized cluster coefficient for calligraphy condition in alpha2 and gamma bands. These results demonstrate that the brain functional network undergoes a dynamic reconfiguration during the aesthetic experience of Chinese calligraphy. Providing evidence that the aesthetic experience of Chinese calligraphy has several similarities with western art while retaining its unique characters as an eastern traditional art form.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 35-46"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000109/pdfft?md5=2fa49e9936c3269ce56e4e50f14e1166&pid=1-s2.0-S2468502X22000109-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129049644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
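The abstract above reports an increased normalized clustering coefficient of the brain functional network. The sketch below shows one common way such a quantity is computed: the average clustering coefficient of a thresholded connectivity graph divided by its mean over degree-preserving randomized surrogates. The threshold, surrogate count, and input matrix are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: normalized clustering coefficient of a functional connectivity network,
# i.e., C / <C_random> over degree-preserving rewired surrogates. The threshold
# and number of surrogates are illustrative assumptions.
import numpy as np
import networkx as nx

def normalized_clustering(conn, threshold=0.5, n_surrogates=20, seed=0):
    """conn: symmetric (n x n) connectivity matrix, e.g., coherence or PLV."""
    adj = (np.abs(conn) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    G = nx.from_numpy_array(adj)
    c_real = nx.average_clustering(G)

    rng = np.random.default_rng(seed)
    c_rand = []
    for _ in range(n_surrogates):
        R = G.copy()
        # Degree-preserving randomization via double-edge swaps.
        nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                            max_tries=40 * R.number_of_edges(),
                            seed=int(rng.integers(1 << 30)))
        c_rand.append(nx.average_clustering(R))
    return c_real / np.mean(c_rand)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m = rng.random((16, 16))
    m = (m + m.T) / 2  # make the synthetic connectivity matrix symmetric
    print(normalized_clustering(m))
```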
AFExplorer: Visual analysis and interactive selection of audio features
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2022.02.003
Lei Wang, Guodao Sun, Yunchao Wang, Ji Ma, Xiaomin Zhao, Ronghua Liang
{"title":"AFExplorer: Visual analysis and interactive selection of audio features","authors":"Lei Wang,&nbsp;Guodao Sun,&nbsp;Yunchao Wang,&nbsp;Ji Ma,&nbsp;Xiaomin Zhao,&nbsp;Ronghua Liang","doi":"10.1016/j.visinf.2022.02.003","DOIUrl":"10.1016/j.visinf.2022.02.003","url":null,"abstract":"<div><p>Acoustic quality detection is vital in the manufactured products quality control field since it represents the conditions of machines or products. Recent work employed machine learning models in manufactured audio data to detect anomalous patterns. A major challenge is how to select applicable audio features to meliorate model’s accuracy and precision. To relax this challenge, we extract and analyze three audio feature types including Time Domain Feature, Frequency Domain Feature, and Cepstrum Feature to help identify the potential linear and non-linear relationships. In addition, we design a visual analysis system, namely AFExplorer, to assist data scientists in extracting audio features and selecting potential feature combinations. AFExplorer integrates four main views to present detailed distribution and relevance of the audio features, which helps users observe the impact of features visually in the feature selection. We perform the case study with AFExplore according to the ToyADMOS and MIMII Dataset to demonstrate the usability and effectiveness of the proposed system.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 47-55"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000110/pdfft?md5=2e19336a69c58e5911898665e895ab79&pid=1-s2.0-S2468502X22000110-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129071883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
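As a sketch of the three feature families named in the abstract above, the snippet below extracts a few representative time-domain, frequency-domain, and cepstral features for one audio clip with librosa. The specific features, parameters, and the clip path are illustrative choices, not necessarily the set used by AFExplorer.

```python
# Sketch: representative time-domain, frequency-domain, and cepstral features
# for a single audio clip, using librosa. Feature choices are illustrative only.
import numpy as np
import librosa

def extract_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr, mono=True)
    feats = {
        # Time-domain features
        "rms_mean": float(np.mean(librosa.feature.rms(y=y))),
        "zcr_mean": float(np.mean(librosa.feature.zero_crossing_rate(y))),
        # Frequency-domain features
        "centroid_mean": float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))),
        "bandwidth_mean": float(np.mean(librosa.feature.spectral_bandwidth(y=y, sr=sr))),
    }
    # Cepstrum features: per-coefficient means of the first 13 MFCCs
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    for i, v in enumerate(mfcc.mean(axis=1)):
        feats[f"mfcc{i}_mean"] = float(v)
    return feats

if __name__ == "__main__":
    # "machine_clip.wav" is a hypothetical file path used for illustration.
    print(extract_features("machine_clip.wav"))
```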
A restoration method using dual generate adversarial networks for Chinese ancient characters
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2022.02.001
Benpeng Su, Xuxing Liu, Weize Gao, Ye Yang, Shanxiong Chen
{"title":"A restoration method using dual generate adversarial networks for Chinese ancient characters","authors":"Benpeng Su ,&nbsp;Xuxing Liu ,&nbsp;Weize Gao ,&nbsp;Ye Yang ,&nbsp;Shanxiong Chen","doi":"10.1016/j.visinf.2022.02.001","DOIUrl":"10.1016/j.visinf.2022.02.001","url":null,"abstract":"<div><p>Ancient books that record the history of different periods are precious for human civilization. But the protection of them is facing serious problems such as aging. It is significant to repair the damaged characters in ancient books and restore their original textures. The requirement of the restoration of the damaged character is keeping the stroke shape correct and the font style consistent. In order to solve these problems, this paper proposes a new restoration method based on generative adversarial networks. We use the shape restoration network to complete the stroke shape recovery and the font style recovery. The texture repair network is responsible for reconstructing texture details. In order to improve the accuracy of the generator in the shape restoration network, we use the adversarial feature loss (AFL), which can update the generator and discriminator synchronously to replace the traditional perceptual loss. Meanwhile, the font style loss is proposed to maintain the stylistic consistency for the whole character. Our model is evaluated on the datasets Yi and Qing, and shows that it outperforms current state-of-the-art techniques quantitatively and qualitatively. In particular, the Structural Similarity has increased by 8.0% and 6.7% respectively on the two datasets.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 26-34"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000092/pdfft?md5=d3ed2a6a34178c2af83ce73f8cd4a7d0&pid=1-s2.0-S2468502X22000092-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124418754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
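The abstract above introduces a font style loss to keep the restored character stylistically consistent, but its exact formulation is not given there. The sketch below shows one common way such a loss is expressed, comparing Gram matrices of feature maps in PyTorch; it is purely an illustration, and the paper's actual loss may differ.

```python
# Illustration only: one common formulation of a style loss (mean squared
# difference between Gram matrices of feature maps). The paper's actual
# font style loss may be defined differently.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """feat: (B, C, H, W) feature maps -> (B, C, C) normalized Gram matrices."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(feat_restored, feat_reference):
    """Compare channel correlations of restored vs. reference character features."""
    return F.mse_loss(gram_matrix(feat_restored), gram_matrix(feat_reference))

if __name__ == "__main__":
    a, b = torch.rand(2, 64, 32, 32), torch.rand(2, 64, 32, 32)
    print(style_loss(a, b).item())
```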
Metaverse: Perspectives from graphics, interactions and visualization
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2022.03.002
Yuheng Zhao, Jinjing Jiang, Yi Chen, Richen Liu, Yalong Yang, Xiangyang Xue, Siming Chen
{"title":"Metaverse: Perspectives from graphics, interactions and visualization","authors":"Yuheng Zhao ,&nbsp;Jinjing Jiang ,&nbsp;Yi Chen ,&nbsp;Richen Liu ,&nbsp;Yalong Yang ,&nbsp;Xiangyang Xue ,&nbsp;Siming Chen","doi":"10.1016/j.visinf.2022.03.002","DOIUrl":"10.1016/j.visinf.2022.03.002","url":null,"abstract":"<div><p>The metaverse is a visual world that blends the physical world and digital world. At present, the development of the metaverse is still in the early stage, and there lacks a framework for the visual construction and exploration of the metaverse. In this paper, we propose a framework that summarizes how graphics, interaction, and visualization techniques support the visual construction of the metaverse and user-centric exploration. We introduce three kinds of visual elements that compose the metaverse and the two graphical construction methods in a pipeline. We propose a taxonomy of interaction technologies based on interaction tasks, user actions, feedback and various sensory channels, and a taxonomy of visualization techniques that assist user awareness. Current potential applications and future opportunities are discussed in the context of visual construction and exploration of the metaverse. We hope this paper can provide a stepping stone for further research in the area of graphics, interaction and visualization in the metaverse.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 56-67"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000158/pdfft?md5=1995fb00e264296cfe9e1788841486de&pid=1-s2.0-S2468502X22000158-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122684892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 91
A learning-based approach for efficient visualization construction
IF 3.0 | CAS Tier 3 (Computer Science)
Visual Informatics Pub Date: 2022-03-01 DOI: 10.1016/j.visinf.2022.01.001
Yongjian Sun, Jie Li, Siming Chen, Gennady Andrienko, Natalia Andrienko, Kang Zhang
{"title":"A learning-based approach for efficient visualization construction","authors":"Yongjian Sun ,&nbsp;Jie Li ,&nbsp;Siming Chen ,&nbsp;Gennady Andrienko ,&nbsp;Natalia Andrienko ,&nbsp;Kang Zhang","doi":"10.1016/j.visinf.2022.01.001","DOIUrl":"10.1016/j.visinf.2022.01.001","url":null,"abstract":"<div><p>We propose an approach to underpin interactive visual exploration of large data volumes by training Learned Visualization Index (LVI). Knowing in advance the data, the aggregation functions that are used for visualization, the visual encoding, and available interactive operations for data selection, LVI allows to avoid time-consuming data retrieval and processing of raw data in response to user’s interactions. Instead, LVI directly predicts aggregates of interest for the user’s data selection. We demonstrate the efficiency of the proposed approach in application to two use cases of spatio-temporal data at different scales.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 1","pages":"Pages 14-25"},"PeriodicalIF":3.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000080/pdfft?md5=16523953bb5f7df328c6c78d0aaff5fa&pid=1-s2.0-S2468502X22000080-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127694326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
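To make the learned-index idea above concrete, the sketch below trains a small regressor that maps the parameters of a rectangular spatio-temporal selection directly to an aggregate (here, the count of contained points), so the aggregate can be predicted at interaction time without scanning the raw data. The synthetic data, selection encoding, and model are illustrative assumptions, not the LVI architecture.

```python
# Sketch of the learned-index idea: train a model mapping selection parameters
# (a rectangular range over x, y, t) to an aggregate, so interaction-time queries
# skip the raw data. Data and model choices here are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
points = rng.random((50_000, 3))  # synthetic raw data: (x, y, t) in [0, 1)

def true_aggregate(sel):
    """Count of raw points inside the selection (x0, x1, y0, y1, t0, t1)."""
    x0, x1, y0, y1, t0, t1 = sel
    m = ((points[:, 0] >= x0) & (points[:, 0] < x1) &
         (points[:, 1] >= y0) & (points[:, 1] < y1) &
         (points[:, 2] >= t0) & (points[:, 2] < t1))
    return m.sum()

def random_selection():
    lo = rng.random(3) * 0.7
    hi = lo + rng.random(3) * 0.3
    return np.array([lo[0], hi[0], lo[1], hi[1], lo[2], hi[2]])

# Offline: sample selections and their exact aggregates, then fit the index.
X = np.stack([random_selection() for _ in range(2000)])
y = np.array([true_aggregate(s) for s in X])
index = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(X, y)

# Online: predict the aggregate for a new selection without touching raw data.
q = random_selection()
print("predicted:", index.predict(q.reshape(1, -1))[0], "exact:", true_aggregate(q))
```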