Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.10.002
Nikolaus Piccolotto , Markus Bögl , Theresia Gschwandtner , Christoph Muehlmann , Klaus Nordhausen , Peter Filzmoser , Silvia Miksch
{"title":"TBSSvis: Visual analytics for Temporal Blind Source Separation","authors":"Nikolaus Piccolotto , Markus Bögl , Theresia Gschwandtner , Christoph Muehlmann , Klaus Nordhausen , Peter Filzmoser , Silvia Miksch","doi":"10.1016/j.visinf.2022.10.002","DOIUrl":"10.1016/j.visinf.2022.10.002","url":null,"abstract":"<div><p>Temporal Blind Source Separation (TBSS) is used to obtain the true underlying processes from noisy temporal multivariate data, such as electrocardiograms. TBSS has similarities to Principal Component Analysis (PCA) as it separates the input data into univariate components and is applicable to suitable datasets from various domains, such as medicine, finance, or civil engineering. Despite TBSS’s broad applicability, the involved tasks are not well supported in current tools, which offer only text-based interactions and single static images. Analysts are limited in analyzing and comparing obtained results, which consist of diverse data such as matrices and sets of time series. Additionally, parameter settings have a big impact on separation performance, but as a consequence of improper tooling, analysts currently do not consider the whole parameter space. We propose to solve these problems by applying visual analytics (VA) principles. Our primary contribution is a design study for TBSS, which so far has not been explored by the visualization community. We developed a task abstraction and visualization design in a user-centered design process. Our secondary contribution is the task-specific assembly of well-established visualization techniques and algorithms to gain insights into the TBSS processes. We present TBSSvis, an interactive web-based VA prototype, which we evaluated extensively in two interviews with five TBSS experts. Feedback and observations from these interviews show that TBSSvis supports the actual workflow with a combination of interactive visualizations that facilitate the tasks involved in analyzing TBSS results.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 51-66"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22001103/pdfft?md5=e16a9a59f900c2b2e1e6e50729e1b03e&pid=1-s2.0-S2468502X22001103-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128049537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
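The abstract does not spell out a TBSS algorithm, so as a hedged illustration here is a minimal sketch of one classic temporal BSS method, AMUSE (whitening followed by eigendecomposition of a symmetrized lagged autocovariance), which shows the PCA-like structure the abstract alludes to. The two sources and the mixing matrix below are invented toy data, not from the paper:

```python
import numpy as np

def amuse(X, lag=1):
    """Recover source signals from a linear mixture by whitening and
    eigendecomposing a symmetrized lagged autocovariance (AMUSE)."""
    X = X - X.mean(axis=1, keepdims=True)
    T = X.shape[1]
    # Whitening step, identical in spirit to PCA: decorrelate and rescale.
    d, E = np.linalg.eigh((X @ X.T) / (T - 1))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    # Temporal structure enters through the lagged autocovariance.
    M = (Z[:, :-lag] @ Z[:, lag:].T) / (T - lag)
    M = (M + M.T) / 2
    # Its eigenvectors give the rotation that separates the sources.
    _, V = np.linalg.eigh(M)
    return V.T @ Z

# Toy data: a sinusoid and a square wave, linearly mixed.
t = np.arange(2000)
S = np.vstack([np.sin(0.05 * t), np.sign(np.sin(0.11 * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
S_hat = amuse(A @ S)  # rows match the true sources up to sign and order
```

Unlike PCA, which uses only the zero-lag covariance, the lagged autocovariance lets AMUSE distinguish sources that differ in temporal structure; real TBSS estimators expose further parameters (lag sets, estimator variants) whose impact on separation quality is exactly what a tool like TBSSvis helps analysts explore.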
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.06.002
Weixin Zhao , Guijuan Wang , Zhong Wang , Liang Liu , Xu Wei , Yadong Wu
{"title":"A uncertainty visual analytics approach for bus travel time","authors":"Weixin Zhao , Guijuan Wang , Zhong Wang , Liang Liu , Xu Wei , Yadong Wu","doi":"10.1016/j.visinf.2022.06.002","DOIUrl":"10.1016/j.visinf.2022.06.002","url":null,"abstract":"<div><p>Bus travel time is uncertain due to dynamic changes in the environment. For passengers, analyzing bus travel time uncertainty has significant implications for understanding bus running errors and reducing travel risks. To quantify the uncertainty of the bus travel time prediction model, this paper proposes a visual analysis method for bus travel time uncertainty, which allows users to intuitively obtain uncertainty information about bus travel time through visual graphs. First, a Bayesian encoder–decoder deep neural network (BEDDNN) model is proposed to predict bus travel time. The BEDDNN model outputs results with distributional properties to calculate the prediction model’s degree of uncertainty and provide an estimate of the bus travel time uncertainty. Second, an interactive uncertainty visualization system is developed to analyze the time uncertainty associated with bus stations and lines. The prediction model and the visualization model are organically combined to better demonstrate the prediction results and uncertainties. Finally, model evaluation results based on actual bus data illustrate the effectiveness of the model. The results of the case study and user evaluation show that the visualization system in this paper has a positive impact on the effectiveness of conveying uncertainty information and on user perception and decision making.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 1-11"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000638/pdfft?md5=ccdb87f99aecb534c2895ffeed825848&pid=1-s2.0-S2468502X22000638-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130811383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
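The record gives no architectural details of BEDDNN, so the sketch below substitutes a common stand-in for Bayesian uncertainty estimation, Monte-Carlo dropout: repeated stochastic forward passes through a toy network yield a predictive mean and spread, the two quantities an uncertainty visualization would encode. The network weights, the 30-minute baseline, and the input feature are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "trained" regression network; all weights are invented stand-ins,
# not parameters of the authors' BEDDNN.
W1 = rng.normal(size=(16, 1))
b1 = rng.normal(size=16)
W2 = rng.normal(size=16) / 16.0
b2 = 30.0  # hypothetical baseline travel time in minutes

def predict_once(x, drop=0.2):
    """One stochastic forward pass with inverted dropout."""
    h = np.maximum(W1 @ x + b1, 0.0)     # ReLU hidden layer
    keep = rng.random(h.shape) > drop    # random dropout mask
    return W2 @ (h * keep / (1.0 - drop)) + b2

def predict_with_uncertainty(x, n=500):
    """Predictive mean and spread from n Monte-Carlo dropout samples."""
    samples = np.array([predict_once(x) for _ in range(n)])
    return samples.mean(), samples.std()

mean, std = predict_with_uncertainty(np.array([0.7]))  # x: a toy feature
```

The spread `std` is the per-prediction uncertainty a system like the one described could map to, say, band width or opacity along a bus line.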
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.08.001
Zhongyun Bao, Gang Fu, Lian Duan, Chunxia Xiao
{"title":"Interactive lighting editing system for single indoor low-light scene images with corresponding depth maps","authors":"Zhongyun Bao, Gang Fu, Lian Duan, Chunxia Xiao","doi":"10.1016/j.visinf.2022.08.001","DOIUrl":"10.1016/j.visinf.2022.08.001","url":null,"abstract":"<div><p>We propose a novel interactive lighting editing system for lighting a single indoor RGB image based on spherical harmonic lighting. It allows users to intuitively edit illumination and relight the complicated low-light indoor scene. Our method not only achieves plausible global relighting but also enhances the local details of the complicated scene according to the spatially-varying spherical harmonic lighting, which only requires a single RGB image along with a corresponding depth map. To this end, we first present a joint optimization algorithm, which combines geometric optimization of the depth map with intrinsic image decomposition that avoids texture-copy artifacts, to refine the depth map and obtain the shading map. Then we propose a lighting estimation method based on spherical harmonic lighting, which not only achieves global illumination estimation of the scene but also further enhances local details of the complicated scene. Finally, we use a simple and intuitive interactive method to edit the environment lighting map to adjust lighting and relight the scene. Through extensive experimental results, we demonstrate that our proposed approach is simple and intuitive for relighting the low-light indoor scene, and achieves state-of-the-art results.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 90-99"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000882/pdfft?md5=9c9150254f62643a645f9ca15efd2ffd&pid=1-s2.0-S2468502X22000882-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130964210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
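Spherical harmonic (SH) lighting, which the abstract builds on, represents environment light with a few basis coefficients; shading a point then reduces to a dot product between those coefficients and the SH basis evaluated at the surface normal. A minimal band-0/band-1 sketch with a hypothetical lighting vector (not the paper's estimated lighting):

```python
def sh_basis(normal):
    """First four real spherical harmonics (bands 0 and 1) at a unit normal."""
    x, y, z = normal
    return (0.282095,          # Y_0^0 (constant/ambient term)
            0.488603 * y,      # Y_1^-1
            0.488603 * z,      # Y_1^0
            0.488603 * x)      # Y_1^1

def shade(normal, coeffs):
    """Shading as the dot product of SH lighting coefficients and basis."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(normal)))

# Hypothetical lighting: an ambient term plus light from above (+z).
L = (1.0, 0.0, 0.8, 0.0)
up = shade((0.0, 0.0, 1.0), L)     # surface facing the light
down = shade((0.0, 0.0, -1.0), L)  # surface facing away
```

Editing the lighting, as the described system lets users do interactively, amounts to changing `L` and re-evaluating this dot product per pixel; spatially-varying SH lighting stores a separate coefficient vector per region instead of one global `L`.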
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.09.002
Gefei Zhang, Zihao Zhu, Sujia Zhu, Ronghua Liang, Guodao Sun
{"title":"Towards a better understanding of the role of visualization in online learning: A review","authors":"Gefei Zhang, Zihao Zhu, Sujia Zhu, Ronghua Liang, Guodao Sun","doi":"10.1016/j.visinf.2022.09.002","DOIUrl":"10.1016/j.visinf.2022.09.002","url":null,"abstract":"<div><p>With the popularity of online learning in recent decades, MOOCs (Massive Open Online Courses) are increasingly pervasive and widely used in many areas. Visualizing online learning is particularly important because it helps to analyze learner performance, evaluate the effectiveness of online learning platforms, and predict dropout risks. Due to the large-scale, high-dimensional, and heterogeneous characteristics of the data obtained from online learning, it is difficult to find hidden information. In this paper, we review and classify the existing literature for online learning to better understand the role of visualization in online learning. Our taxonomy is based on four categories of online learning tasks: behavior analysis, behavior prediction, learning pattern exploration, and assisted learning. Based on our review of relevant literature over the past decade, we also identify several remaining research challenges and directions for future research.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 22-33"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000924/pdfft?md5=6b07edcfd3ec7f98bc46d186255d7604&pid=1-s2.0-S2468502X22000924-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122494430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.07.004
Xiaoyan Kui, Naiming Liu, Qiang Liu, Jingwei Liu, Xiaoqian Zeng, Chao Zhang
{"title":"A survey of visual analytics techniques for online education","authors":"Xiaoyan Kui, Naiming Liu, Qiang Liu, Jingwei Liu, Xiaoqian Zeng, Chao Zhang","doi":"10.1016/j.visinf.2022.07.004","DOIUrl":"10.1016/j.visinf.2022.07.004","url":null,"abstract":"<div><p>Visual analytics techniques are widely utilized to facilitate the exploration of online educational data. To help researchers better understand the necessity and the efficiency of these techniques in online education, we systematically review related works of the past decade to provide a comprehensive view of the use of visualization in online education problems. We establish a taxonomy based on the analysis goal and classify the existing visual analytics techniques into four categories: learning behavior analysis, learning content analysis, analysis of interactions among students, and prediction and recommendation. The use of visual analytics techniques is summarized in each category to show their benefits in different analysis tasks. Finally, we discuss future research opportunities and challenges in the utilization of visual analytics techniques for online education.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 67-77"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000870/pdfft?md5=9da41107a6cadbfebb837a6957330648&pid=1-s2.0-S2468502X22000870-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121273106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.09.001
Yunchao Wang, Zihao Zhu, Lei Wang, Guodao Sun, Ronghua Liang
{"title":"Visualization and visual analysis of multimedia data in manufacturing: A survey","authors":"Yunchao Wang, Zihao Zhu, Lei Wang, Guodao Sun, Ronghua Liang","doi":"10.1016/j.visinf.2022.09.001","DOIUrl":"10.1016/j.visinf.2022.09.001","url":null,"abstract":"<div><p>With the development of production technology and social needs, sectors of manufacturing are constantly improving. The use of sensors and computers has made it increasingly convenient to collect multimedia data in manufacturing. Targeted, rapid, and detailed analysis based on the type of multimedia data can support timely decisions at different stages of the entire manufacturing process. Visualization and visual analytics are frequently adopted in multimedia data analysis of manufacturing because of their powerful ability to understand, present, and analyze data intuitively and interactively. In this paper, we present a literature review of visualization and visual analytics specifically for manufacturing multimedia data. We classify existing research according to visualization techniques, interaction analysis methods, and application areas. We discuss the differences when visualization and visual analytics are applied to different types of multimedia data in the context of particular examples of manufacturing research projects. Finally, we summarize the existing challenges and prospective research directions.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 12-21"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000912/pdfft?md5=7a4420f6c48211e2a2b1aa7571c6e640&pid=1-s2.0-S2468502X22000912-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131576961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.07.003
An-An Liu , Xiaowen Wang , Ning Xu , Junbo Guo , Guoqing Jin , Quan Zhang , Yejun Tang , Shenyuan Zhang
{"title":"A review of feature fusion-based media popularity prediction methods","authors":"An-An Liu , Xiaowen Wang , Ning Xu , Junbo Guo , Guoqing Jin , Quan Zhang , Yejun Tang , Shenyuan Zhang","doi":"10.1016/j.visinf.2022.07.003","DOIUrl":"10.1016/j.visinf.2022.07.003","url":null,"abstract":"<div><p>With the popularization of social media, the way information is transmitted has changed, and the prediction of information popularity based on social media platforms has attracted extensive attention. Feature fusion-based media popularity prediction methods focus on the multi-modal features of social media, aiming to explore the key factors affecting media popularity. These methods also compensate for the limited feature utilization of traditional methods based on information propagation processes. In this paper, we review feature fusion-based media popularity prediction methods from the perspectives of feature extraction and predictive model construction. Before that, we analyze the influencing factors of media popularity to provide an intuitive understanding. We further discuss the advantages and disadvantages of existing methods and datasets to highlight future directions. Finally, we discuss the applications of popularity prediction. To the best of our knowledge, this is the first survey reporting feature fusion-based media popularity prediction methods.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 78-89"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000869/pdfft?md5=3f5928b7e56ee9c39a226fe68dbcb36d&pid=1-s2.0-S2468502X22000869-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123810950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics · Pub Date: 2022-09-01 · DOI: 10.1016/j.visinf.2022.05.003
Shu-Yu Chen , Jia-Qi Zhang , You-You Zhao , Paul L. Rosin , Yu-Kun Lai , Lin Gao
{"title":"A review of image and video colorization: From analogies to deep learning","authors":"Shu-Yu Chen , Jia-Qi Zhang , You-You Zhao , Paul L. Rosin , Yu-Kun Lai , Lin Gao","doi":"10.1016/j.visinf.2022.05.003","DOIUrl":"10.1016/j.visinf.2022.05.003","url":null,"abstract":"<div><p>Image colorization is a classic and important topic in computer graphics, where the aim is to add color to a monochromatic input image to produce a colorful result. In this survey, we present the history of colorization research in chronological order and summarize popular algorithms in this field. Early work on colorization mostly focused on developing techniques to improve the colorization quality. In the last few years, researchers have considered more possibilities such as combining colorization with NLP (natural language processing) and focused more on industrial applications. To better control the colors, various types of color control have been designed, such as providing reference images or color scribbles. We have created a taxonomy of the colorization methods according to the input type, divided into grayscale, sketch-based, and hybrid. The pros and cons are discussed for each algorithm, and they are compared according to their main characteristics. Finally, we discuss how deep learning, and in particular Generative Adversarial Networks (GANs), has changed this field.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 3","pages":"Pages 51-68"},"PeriodicalIF":3.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000389/pdfft?md5=16a081f691f2d75368094f26919578af&pid=1-s2.0-S2468502X22000389-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114109871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics · Pub Date: 2022-09-01 · DOI: 10.1016/j.visinf.2022.07.002
Shenghui Cheng , Joachim Giesen , Tianyi Huang , Philipp Lucas , Klaus Mueller
{"title":"Identifying the skeptics and the undecided through visual cluster analysis of local network geometry","authors":"Shenghui Cheng , Joachim Giesen , Tianyi Huang , Philipp Lucas , Klaus Mueller","doi":"10.1016/j.visinf.2022.07.002","DOIUrl":"10.1016/j.visinf.2022.07.002","url":null,"abstract":"<div><p>By skeptics and undecided we refer to nodes in clustered social networks that cannot be assigned easily to any of the clusters. Such nodes are typically found either at the interface between clusters (the undecided) or at their boundaries (the skeptics). Identifying these nodes is relevant in marketing applications like voter targeting, because the persons represented by such nodes are often more likely to be affected in marketing campaigns than nodes deeply within clusters. So far, this identification task is not as well studied as other network analysis tasks like clustering, identifying central nodes, and detecting motifs. We approach this task by deriving novel geometric features from the network structure that naturally lend themselves to an interactive visual approach for identifying interface and boundary nodes.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 3","pages":"Pages 11-22"},"PeriodicalIF":3.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000651/pdfft?md5=7d16b3905d9547534a383f084916110d&pid=1-s2.0-S2468502X22000651-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129040730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
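The paper's novel geometric features are not described in this abstract; as a deliberately naive illustration of the idea, the sketch below scores each node by the fraction of its neighbours assigned to a different cluster, so that interface ("undecided") candidates stand out from nodes deep inside a cluster. The graph and cluster labels are toy data:

```python
from collections import defaultdict

# Hypothetical toy network: two clusters A and B joined through nodes 2, 3, 6.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3), (2, 6), (3, 6)]
cluster = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B", 6: "A"}

# Build an undirected adjacency structure.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def boundary_score(node):
    """Fraction of a node's neighbours lying in a different cluster;
    higher values flag interface candidates."""
    nbrs = adj[node]
    return sum(cluster[n] != cluster[node] for n in nbrs) / len(nbrs)

scores = {n: boundary_score(n) for n in cluster}
```

A visual approach like the one the paper proposes would map such per-node features to plot coordinates or color, letting analysts pick out the interface and boundary nodes interactively rather than by thresholding a single score.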
Visual Informatics · Pub Date: 2022-09-01 · DOI: 10.1016/j.visinf.2022.06.001
Changlin Li , Mengqi Cao , Xiaolin Wen , Haotian Zhu , Shangsong Liu , Xinyi Zhang , Min Zhu
{"title":"MDIVis: Visual analytics of multiple destination images on tourism user generated content","authors":"Changlin Li , Mengqi Cao , Xiaolin Wen , Haotian Zhu , Shangsong Liu , Xinyi Zhang , Min Zhu","doi":"10.1016/j.visinf.2022.06.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2022.06.001","url":null,"abstract":"<div><p>Abundant tourism user-generated content (UGC) contains a wealth of cognitive and emotional information, providing valuable data for building destination images that depict tourists’ experiences and appraisal of the destinations during the tours. In particular, multiple destination images can assist tourism managers in exploring commonalities and differences to investigate the elements of interest to tourists and improve the competitiveness of the destinations. However, existing methods usually focus on the image of a single destination, and they are not adequate for analyzing and visualizing UGC to extract valuable information and knowledge. Therefore, we discuss requirements with tourism experts and present MDIVis, a multi-level interactive visual analytics system that allows analysts to comprehend and compare the cognitive themes and emotional experiences of multiple destination images. Specifically, we design a novel sentiment matrix view to summarize multiple destination images and improve two classic views to analyze time-series patterns and compare the detailed information of images. Finally, we demonstrate the utility of MDIVis through three case studies with domain experts on real-world data, and its usability and effectiveness are confirmed through expert interviews.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 3","pages":"Pages 1-10"},"PeriodicalIF":3.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000419/pdfft?md5=b795f3316fcfff3cd7b997f4dbfa5e4e&pid=1-s2.0-S2468502X22000419-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91620075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
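As a toy illustration of the kind of aggregation a sentiment matrix view could encode (the actual MDIVis design is not specified in this record), the sketch below computes mean sentiment per destination–theme cell from hypothetical UGC records:

```python
from collections import defaultdict

# Hypothetical UGC records: (destination, theme, sentiment score in [-1, 1]).
reviews = [
    ("Kyoto", "food", 0.8), ("Kyoto", "food", 0.6), ("Kyoto", "transport", -0.2),
    ("Lyon", "food", 0.9), ("Lyon", "food", 0.7), ("Lyon", "transport", 0.4),
]

def sentiment_matrix(records):
    """Mean sentiment per (destination, theme) cell — the kind of summary
    a matrix view can encode as colour for cross-destination comparison."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for dest, theme, score in records:
        sums[(dest, theme)] += score
        counts[(dest, theme)] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

M = sentiment_matrix(reviews)
```

Rendering this dictionary as a destinations-by-themes grid makes the commonalities and differences the abstract mentions directly comparable, e.g. both toy destinations score well on food while diverging on transport.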