Latest Articles in Visual Informatics

Visual analytics of multivariate networks with representation learning and composite variable construction
CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-06-01 · DOI: 10.1016/j.visinf.2023.06.004
Hsiao-Ying Lu, Takanori Fujiwara, Ming-Yi Chang, Yang-chih Fu, Anders Ynnerman, Kwan-Liu Ma
Abstract: Multivariate networks are common in real-world data-driven applications, but uncovering and understanding the relations of interest in them is not a trivial task. This paper presents a visual analytics workflow for studying multivariate networks to extract associations between their structural and semantic characteristics (e.g., which combinations of attributes relate most strongly to the density of a social network?). The workflow consists of a neural-network-based learning phase that classifies the data according to the chosen input and output attributes, a dimensionality-reduction and optimization phase that produces a simplified set of results for examination, and an interpretation phase conducted by the user through an interactive visualization interface. A key part of our design is a composite variable construction step that remodels nonlinear features obtained by neural networks into linear features that are intuitive to interpret. We demonstrate the capabilities of this workflow with multiple case studies on networks derived from social media usage, and we also evaluate the workflow through an expert interview.
Citations: 0
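The composite variable construction step described above can be illustrated with a minimal sketch: fit a linear least-squares model to a nonlinear feature so that the resulting weights become interpretable. The attribute matrix and the tanh "learned feature" below are synthetic stand-ins, not the paper's actual networks or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical node attributes (200 observations x 3 attributes).
X = rng.normal(size=(200, 3))

# Stand-in for a nonlinear feature learned by the neural-network phase.
nonlinear_feature = np.tanh(1.5 * X[:, 0] - 0.5 * X[:, 1])

# Composite variable construction: approximate the nonlinear feature with a
# linear combination of the original attributes via least squares; the
# resulting weights are directly interpretable.
A = np.column_stack([X, np.ones(len(X))])  # attributes plus an intercept
weights, *_ = np.linalg.lstsq(A, nonlinear_feature, rcond=None)

composite = A @ weights  # the linear, interpretable composite variable
ss_res = np.sum((nonlinear_feature - composite) ** 2)
ss_tot = np.sum((nonlinear_feature - nonlinear_feature.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(np.round(weights[:3], 2), round(r2, 2))
```

The signs of the recovered weights mirror the feature's dependence on the first two attributes, while the third attribute, which plays no role, receives a near-zero weight.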
DenseCL: A simple framework for self-supervised dense visual pre-training
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.09.003
Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong
Abstract: Self-supervised learning aims to learn a universal feature representation without labels. To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks because of the discrepancy between image-level and pixel-level prediction. To fill this gap, we design an effective, dense self-supervised learning framework that works directly at the level of pixels (or local features) by taking the correspondence between local features into account. Specifically, we present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of the input images. Compared with supervised ImageNet pre-training and other self-supervised learning methods, our self-supervised DenseCL pre-training demonstrates consistently superior performance when transferred to downstream dense prediction tasks, including object detection, semantic segmentation, and instance segmentation. Specifically, our approach significantly outperforms the strong MoCo-v2 baseline by 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation, and 1.8% mIoU on Cityscapes semantic segmentation. The improvements reach 3.5% AP and 8.8% mIoU over MoCo-v2, and 6.1% AP and 6.1% mIoU over the supervised counterpart, under the frozen-backbone evaluation protocol. Code and models are available at: https://git.io/DenseCL
Visual Informatics 7(1): 30-40
Citations: 0
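The pixel-level contrastive loss at the heart of DenseCL can be sketched in a few lines. This is an illustrative simplification (numpy, toy features, most-similar-pixel correspondence across views), not the paper's implementation; see https://git.io/DenseCL for the real code.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_contrastive_loss(f1, f2, temperature=0.2):
    """Pixel-level InfoNCE in the spirit of DenseCL (simplified sketch).

    f1, f2: (n_pixels, dim) L2-normalized local features from two augmented
    views of the same image. The correspondence is taken as the most similar
    pixel across views; all other pixels in f2 serve as negatives.
    """
    sim = f1 @ f2.T / temperature          # (n, n) scaled cosine similarities
    pos = sim.argmax(axis=1)               # extracted correspondence per pixel
    # cross-entropy of each row's softmax against its positive match
    logZ = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logZ - sim[np.arange(len(f1)), pos]))

# Toy dense features: 16 "pixels", 8-dimensional, unit norm
f1 = rng.normal(size=(16, 8))
f1 /= np.linalg.norm(f1, axis=1, keepdims=True)
f2 = f1 + 0.1 * rng.normal(size=f1.shape)  # second view: slight perturbation
f2 /= np.linalg.norm(f2, axis=1, keepdims=True)

loss = dense_contrastive_loss(f1, f2)
print(round(loss, 3))
```

Because the two views are near-duplicates, each pixel's best match dominates the softmax and the loss stays well below the uniform baseline of log(16).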
Sparse RGB-D images create a real thing: A flexible voxel based 3D reconstruction pipeline for single object
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.12.002
Fei Luo, Yongqiong Zhu, Yanping Fu, Huajian Zhou, Zezheng Chen, Chunxia Xiao
Abstract: Reconstructing 3D models of single objects against complex backgrounds has wide applications, such as 3D printing and AR/VR, and requires balancing low-cost data capture against high-quality reconstruction results. In this work, we propose a voxel-based modeling pipeline that uses sparse RGB-D images to reconstruct a single real object effectively and efficiently, without a geometric post-processing step for background removal. First, following the idea of the VisualHull, useless and inconsistent voxels of the targeted object are clipped; this focuses the reconstruction on the target object and rectifies the voxel projection information. Second, a modified TSDF calculation and voxel-filling operations are proposed to alleviate missing depth in the depth images, improving the completeness of TSDF values for voxels on the object surface. After the mesh is generated by Marching Cubes, texture mapping is optimized with view selection, color optimization, and fine-tuning of the camera parameters. Experiments on a Kinect capture dataset, the TUM public dataset, and a virtual-environment dataset validate the effectiveness and flexibility of the proposed pipeline.
Visual Informatics 7(1): 66-76
Citations: 1
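The TSDF fusion that the pipeline builds on can be illustrated with a minimal projective update along a single camera ray. This sketches only the standard truncated-signed-distance averaging; the paper's modified TSDF calculation and voxel-filling operations are not reproduced here.

```python
import numpy as np

trunc = 0.05  # truncation distance in meters

def tsdf_update(tsdf, weight, voxel_depth, observed_depth):
    """Fuse one depth observation into per-voxel TSDF values.

    voxel_depth:    depth of each voxel along the camera ray (n,)
    observed_depth: depth measured at the pixel each voxel projects to (n,)
    """
    sdf = observed_depth - voxel_depth   # signed distance along the ray
    valid = sdf > -trunc                 # skip voxels far behind the surface
    d = np.clip(sdf / trunc, -1.0, 1.0)  # truncate and normalize to [-1, 1]
    # weighted running average, as in standard volumetric fusion
    tsdf[valid] = (tsdf[valid] * weight[valid] + d[valid]) / (weight[valid] + 1)
    weight[valid] += 1
    return tsdf, weight

# Toy 1-D example: voxels along one ray, surface observed at depth 1.0 m
voxel_depth = np.linspace(0.9, 1.1, 11)
tsdf = np.zeros(11)
weight = np.zeros(11)
tsdf, weight = tsdf_update(tsdf, weight, voxel_depth, np.full(11, 1.0))
print(np.round(tsdf, 2))
```

The fused values change sign exactly where the ray crosses the observed surface, which is the zero-level set that Marching Cubes later extracts as the mesh.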
The use of facial expressions in measuring students' interaction with distance learning environments during the COVID-19 crisis
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.10.001
Waleed Maqableh, Faisal Y. Alzyoud, Jamal Zraqou
Abstract: Digital learning has become increasingly important during the COVID-19 crisis and is widespread in most countries. The proliferation of smart devices and 5G telecommunication systems is contributing to the development of digital learning systems as an alternative to traditional ones. Digital learning includes blended, online, and personalized learning, which depend mainly on new technologies and strategies, so it is being widely developed to improve education and to cope with emerging disasters such as COVID-19. Despite its tremendous benefits, there are many obstacles, including the lack of digitized curricula and of collaboration between teachers and students. Many attempts have therefore been made to improve learning outcomes through collaboration, teacher convenience, personalized learning, cost and time savings through professional development, and modeling. In this study, facial expressions and heart rates are used to measure the effectiveness of digital learning systems and the level of learners' engagement in learning environments. The results show that the proposed approach outperforms related work in terms of learning effectiveness, and they can be used to develop digital learning environments.
Visual Informatics 7(1): 1-17 · Open Access
Citations: 6
PCP-Ed: Parallel coordinate plots for ensemble data
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.10.003
Elif E. Firat, Ben Swallow, Robert S. Laramee
Abstract: The parallel coordinate plot (PCP) is a complex visual design commonly used for the analysis of high-dimensional data. Increasing data size and complexity can make it challenging to decipher trends and outliers in a confined space, and a dense PCP image with overlapping edges may obscure patterns. We develop techniques for exploring the relationships between data dimensions to uncover trends in dense PCPs. We introduce correlation glyphs in the PCP view that reveal the strength of the correlation between adjacent axis pairs, as well as an interactive glyph lens that uncovers links between data dimensions by investigating dense areas of edge intersections. We also present a subtraction operator for identifying differences between two similar multivariate data sets, and relationship-guided dimensionality reduction by collapsing axis pairs. Finally, we present a case study of our techniques applied to ensemble data and provide feedback from a domain expert in epidemiology.
Visual Informatics 7(1): 56-65
Citations: 0
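The correlation glyphs can be driven by a simple computation: the Pearson correlation of each adjacent axis pair. A minimal sketch on synthetic data (the glyph rendering itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multivariate data set: 4 dimensions, ordered as PCP axes.
n = 500
a = rng.normal(size=n)
data = np.column_stack([
    a,
    a + 0.1 * rng.normal(size=n),  # strongly correlated with the first axis
    rng.normal(size=n),            # independent axis
    rng.normal(size=n),            # independent axis
])

# One correlation value per adjacent axis pair drives one glyph.
adjacent_corr = [np.corrcoef(data[:, i], data[:, i + 1])[0, 1]
                 for i in range(data.shape[1] - 1)]
print([round(c, 2) for c in adjacent_corr])
```

High values flag axis pairs that are candidates for the paper's relationship-guided collapsing, while near-zero values mark pairs where dense edge crossings carry little shared structure.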
Identifying, exploring, and interpreting time series shapes in multivariate time intervals
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2023.01.001
Gota Shirato, Natalia Andrienko, Gennady Andrienko
Abstract: We introduce the concept of an "episode", referring to a time interval in the development of a dynamic phenomenon that is characterized by multiple time-variant attributes; the data structure representing a single episode is a multivariate time series. To analyse collections of episodes, we propose an approach based on recognizing particular "patterns" in the temporal variation of the variables within episodes, so that each episode is represented by a combination of patterns. Using this representation, we apply visual analytics techniques to fulfil a set of analysis tasks, such as investigating the temporal distribution of the patterns, the frequencies of transitions between patterns in episode sequences, and the co-occurrence of patterns of different variables within the same episodes. We demonstrate our approach on two real-world examples: the dynamics of human mobility indicators during the COVID-19 pandemic, and the characteristics of football team movements during episodes of ball turnover.
Visual Informatics 7(1): 77-91
Citations: 2
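The pattern-based episode representation can be approximated with a crude sketch: label each variable's temporal variation within an episode and use the combination of labels as the episode's signature. The slope-threshold classifier and the variable names below are hypothetical stand-ins, not the paper's actual pattern recognition.

```python
import numpy as np

def trend_pattern(series, threshold=0.05):
    """Label the temporal variation of one variable within an episode.

    A crude stand-in for pattern recognition: classify the series by the
    slope of a least-squares line fit through it.
    """
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]
    if slope > threshold:
        return "increase"
    if slope < -threshold:
        return "decrease"
    return "stable"

# One episode = a multivariate time series; two hypothetical variables here.
episode = {
    "mobility": np.array([1.0, 0.8, 0.55, 0.4, 0.3]),    # falling
    "home_time": np.array([0.2, 0.35, 0.5, 0.62, 0.8]),  # rising
}
# The episode is represented by the combination of its variables' patterns.
signature = {var: trend_pattern(s) for var, s in episode.items()}
print(signature)
```

Signatures like this one are what the analysis tasks operate on: their temporal distribution, the transitions between them in sequences, and the co-occurrence of labels across variables.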
TCMFVis: A visual analytics system toward bridging together traditional Chinese medicine and modern medicine
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2022.11.001
Yichao Jin, Fuli Zhu, Jianhua Li, Lei Ma
Abstract: Although traditional Chinese medicine (TCM) and modern medicine (MM) have considerably different treatment philosophies, both make important contributions to human health care. TCM physicians usually treat diseases with TCM formulas (TCMFs), combinations of specific herbs based on the holistic philosophy of TCM, whereas MM physicians treat diseases with chemical drugs that interact with specific biological molecules. The difference between the holistic view of TCM and the atomistic view of MM hinders their combination, and tools that bridge the two disciplines are essential for promoting it. In this paper, we present TCMFVis, a visual analytics system that helps domain experts explore the potential use of TCMFs in MM at the molecular level. TCMFVis addresses two significant challenges: (i) intuitively obtaining valuable insights from the heterogeneous data involved in TCMFs, and (ii) efficiently identifying the common features among a cluster of TCMFs. We designed a four-level (herb, ingredient, target, disease) visual analytics framework to facilitate the analysis of heterogeneous data in a proper workflow, and introduced several set visualization techniques into the system to support the identification of common features among TCMFs. Domain experts evaluated TCMFVis through case studies on two groups of TCMFs clustered by function; the results demonstrate the usability and scalability of the system.
Visual Informatics 7(1): 41-55
Citations: 0
VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2023.01.004
Noptanit Chotisarn, Sarun Gulyanon, Tianye Zhang, Wei Chen
Abstract: The past decade has witnessed rapid progress in AI research since the breakthrough in deep learning. AI technology has been applied in almost every field, so technical and non-technical end-users alike must understand these technologies in order to exploit them. Existing materials, however, are designed for experts, while non-technical users need appealing materials that deliver complex ideas in easy-to-follow steps. One notable tool that fits this profile is scrollytelling, an approach to storytelling that provides readers with a natural and rich experience at their own pace, along with in-depth interactive explanations of complex concepts. This work therefore proposes a novel visualization design for creating scrollytelling that can effectively explain an AI concept to non-technical users. As a demonstration of our design, we created a scrollytelling that explains the Siamese Neural Network for the visual similarity matching problem. Our approach helps create visualizations valuable in short-timeline situations such as a sales pitch. The results show that visualization based on our design improves non-technical users' perception and acquisition of machine learning concepts compared with traditional materials such as online articles.
Visual Informatics 7(1): 18-29
Citations: 1
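The Siamese Neural Network concept that the scrollytelling explains boils down to two inputs passing through the same encoder with shared weights, with the similarity of their embeddings measuring how alike they are. A toy numpy sketch, in which a random projection stands in for a trained encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# One SHARED weight matrix: both branches of the Siamese network use it.
W = rng.normal(size=(8, 16))

def encode(x):
    """Tiny stand-in encoder: shared linear layer + ReLU, L2-normalized."""
    h = np.maximum(x @ W, 0.0)
    return h / (np.linalg.norm(h) + 1e-9)

def similarity(x1, x2):
    """Cosine similarity of the two embeddings."""
    return float(encode(x1) @ encode(x2))

anchor = rng.normal(size=8)
near = anchor + 0.05 * rng.normal(size=8)  # slightly perturbed copy
far = rng.normal(size=8)                   # unrelated input

sim_near = similarity(anchor, near)
sim_far = similarity(anchor, far)
print(round(sim_near, 3), round(sim_far, 3))
```

Because the weights are shared, similar inputs land close together in embedding space, which is the core of the visual similarity matching problem the paper's scrollytelling walks through.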
Comparative evaluations of visualization onboarding methods
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.07.001
Christina Stoiber, Conny Walchshofer, Margit Pohl, Benjamin Potzmann, Florian Grassinger, Holger Stitz, Marc Streit, Wolfgang Aigner
Abstract: Comprehending and exploring large, complex data is becoming increasingly important for a diverse population of users in a wide range of application domains. Visualization has proven well suited to supporting this endeavor by tapping into the power of human visual perception. However, non-experts in visual data analysis often have problems correctly reading and interpreting information from visualization idioms that are new to them. To support novices in learning how to use new digital technologies, the concept of onboarding has been applied successfully in other fields, and first approaches also exist in the visualization domain, but empirical evidence on their effectiveness is scarce. We therefore conducted three studies with Amazon Mechanical Turk (MTurk) workers and students, investigating visualization onboarding at different levels. (1) First, we explored the effect of visualization onboarding, using an interactive step-by-step guide, on user performance for four increasingly complex visualization techniques with time-oriented data: a bar chart, a horizon graph, a change matrix, and a parallel coordinates plot. In a between-subjects experiment with 596 participants in total, we found no significant differences in answer correctness with and without onboarding; in particular, participants commented that no onboarding is needed for highly familiar visualization types. For the most unfamiliar type, the parallel coordinates plot, however, a performance improvement was observed with onboarding. (2) We therefore performed a second study with MTurk workers and the parallel coordinates plot to assess whether user performance differs across onboarding types: an interactive step-by-step guide, a scrollytelling tutorial, and a video tutorial. Based on a sentiment analysis, the video tutorial was ranked most positively on average, followed by the scrollytelling tutorial and the interactive step-by-step guide. (3) As videos are a traditional method of supporting users, we decided to explore the less prevalent scrollytelling approach in more detail; for our third study we therefore gathered data on users' experience with the in-situ scrollytelling of the VA tool Netflower. The evaluation with students showed that they preferred scrollytelling over the tutorial integrated into the Netflower landing page. Across all three studies we also explored the effect of task difficulty. In summary, the in-situ scrollytelling approach works well for integrating onboarding into a visualization tool, and a video tutorial can help introduce the interaction techniques of a visualization.
Visual Informatics 6(4): 34-50 · Open Access
Citations: 0
TBSSvis: Visual analytics for Temporal Blind Source Separation
IF 3.0 · CAS Tier 3, Computer Science
Visual Informatics · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.10.002
Nikolaus Piccolotto, Markus Bögl, Theresia Gschwandtner, Christoph Muehlmann, Klaus Nordhausen, Peter Filzmoser, Silvia Miksch
Abstract: Temporal Blind Source Separation (TBSS) is used to recover the true underlying processes from noisy temporal multivariate data, such as electrocardiograms. TBSS is similar to Principal Component Analysis (PCA) in that it separates the input data into univariate components, and it is applicable to suitable datasets from various domains, such as medicine, finance, or civil engineering. Despite TBSS's broad applicability, the tasks involved are not well supported in current tools, which offer only text-based interaction and single static images. Analysts are limited in analyzing and comparing the obtained results, which consist of diverse data such as matrices and sets of time series. In addition, parameter settings have a large impact on separation performance, but as a consequence of improper tooling, analysts currently do not consider the whole parameter space. We propose to solve these problems by applying visual analytics (VA) principles. Our primary contribution is a design study for TBSS, which has so far not been explored by the visualization community; we developed a task abstraction and visualization design in a user-centered design process. Our secondary contribution is the task-specific assembly of well-established visualization techniques and algorithms that provide insight into TBSS processes. We present TBSSvis, an interactive web-based VA prototype, which we evaluated extensively in two interviews with five TBSS experts. Feedback and observations from these interviews show that TBSSvis supports the actual workflow and the combination of interactive visualizations that facilitate the tasks involved in analyzing TBSS results.
Visual Informatics 6(4): 51-66 · Open Access
Citations: 6
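TBSS itself can be illustrated with the classical AMUSE estimator: whiten the data, then diagonalize a lagged autocovariance matrix. This is a generic textbook sketch of one TBSS method, not the specific algorithms wrapped by TBSSvis:

```python
import numpy as np

def amuse(X, lag=1):
    """Classical TBSS estimator (AMUSE).

    X: (n_samples, p) observed mixed time series.
    Returns estimated sources (n_samples, p), up to order and sign.
    """
    Xc = X - X.mean(axis=0)
    # 1. Whiten via eigendecomposition of the sample covariance.
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    white = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = Xc @ white
    # 2. Symmetrized lag-tau autocovariance of the whitened data.
    R = Z[:-lag].T @ Z[lag:] / (len(Z) - lag)
    R = (R + R.T) / 2
    # 3. Its eigenvectors give the remaining rotation to the sources.
    _, U = np.linalg.eigh(R)
    return Z @ U

# Two independent oscillating sources, linearly mixed.
t = np.arange(2000)
S = np.column_stack([np.sin(0.05 * t), np.sin(0.21 * t)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing matrix
X = S @ A.T

S_hat = amuse(X)
# The recovered sources match the true ones up to order and sign.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(np.round(corr, 2))
```

The lag parameter is exactly the kind of setting whose effect on separation quality TBSSvis is designed to let analysts explore interactively.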