Visual Informatics | Pub Date: 2025-01-16 | DOI: 10.1016/j.visinf.2025.01.001
Zikun Deng, Haoming Chen, Qing-Long Lu, Zicheng Su, Tobias Schreck, Jie Bao, Yi Cai
{"title":"Visual comparative analytics of multimodal transportation","authors":"Zikun Deng , Haoming Chen , Qing-Long Lu , Zicheng Su , Tobias Schreck , Jie Bao , Yi Cai","doi":"10.1016/j.visinf.2025.01.001","DOIUrl":"10.1016/j.visinf.2025.01.001","url":null,"abstract":"<div><div>Contemporary urban transportation systems frequently depend on a variety of modes to provide residents with travel services. Understanding a multimodal transportation system is pivotal for devising well-informed planning; however, it is also inherently challenging for traffic analysts and planners. This challenge stems from the necessity of evaluating and contrasting the quality of transportation services across multiple modes. Existing methods are constrained in offering comprehensive insights into the system, primarily due to the inadequacy of multimodal traffic data necessary for fair comparisons and their inability to equip analysts and planners with the means for exploration and reasoned analysis within the urban spatial context. To this end, we first acquire sufficient multimodal trips leveraging well-established navigation platforms that can estimate the routes with the least travel time given an origin and a destination (an OD pair). We also propose TraDyssey, a visual analytics system that enables analysts and planners to evaluate and compare multiple modes by exploring acquired massive multimodal trips. TraDyssey follows a streamlined query-and-explore workflow supported by user-friendly and effective interactive visualizations. Specifically, a revisited difference-aware parallel coordinate plot (PCP) is designed for overall mode comparisons based on multimodal trips. Trip groups can be flexibly queried on the PCP based on differential features across modes. The queried trips are then organized and presented on a geographic map by OD pairs, forming a group-OD-trip hierarchy of visual exploration. Domain experts gained valuable insights into transportation planning through real-world case studies using TraDyssey.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 1","pages":"Pages 18-30"},"PeriodicalIF":3.8,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143445454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-20 | DOI: 10.1016/j.visinf.2024.12.001
T. Chlubna, T. Milet, P. Zemčík
{"title":"Out-of-focus artifacts mitigation and autofocus methods for 3D displays","authors":"T. Chlubna , T. Milet , P. Zemčík","doi":"10.1016/j.visinf.2024.12.001","DOIUrl":"10.1016/j.visinf.2024.12.001","url":null,"abstract":"<div><div>This paper proposes a novel content-aware method for automatic focusing of the scene on a 3D display. The method addresses a common problem that visualized content is often out of focus, which adversely affects perceived 3D content. The method outperforms existing focusing method, having the error lower by almost 30%. The existing and novel focusing is extended with depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated according to human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene appearance. A user study was conducted to evaluate the proposed methods and identify the crucial parameters in the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and discovered that the depth-of-field artifact mitigation might not be suitable for all scenes despite theoretical hypotheses. The overall proposal of this paper is a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 1","pages":"Pages 31-42"},"PeriodicalIF":3.8,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143445455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transforming cinematography lighting education in the metaverse","authors":"Xian Xu , Wai Tong , Zheng Wei , Meng Xia , Lik-Hang Lee , Huamin Qu","doi":"10.1016/j.visinf.2024.11.003","DOIUrl":"10.1016/j.visinf.2024.11.003","url":null,"abstract":"<div><div>Lighting education is a foundational component of cinematography education. However, many art schools do not have expensive soundstages for traditional cinematography lessons. Migrating physical setups to virtual experiences is a potential solution driven by metaverse initiatives. Yet there is still a lack of knowledge on the design of a VR system for teaching cinematography. We first analyzed the educational needs for cinematography lighting education by conducting interviews with six cinematography professionals from academia and industry. Accordingly, we presented <em>Art Mirror</em>, a VR soundstage for teachers and students to emulate cinematography lighting in virtual scenarios. We evaluated <em>Art Mirror</em> from the aspects of usability, realism, presence, sense of agency, and collaboration. Sixteen participants were invited to take a cinematography lighting course and assess the design elements of <em>Art Mirror</em>. Our results demonstrate that <em>Art Mirror</em> is usable and useful for cinematography lighting education, which sheds light on the design of VR cinematography education.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 1","pages":"Pages 1-17"},"PeriodicalIF":3.8,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143437611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ArtEyer: Enriching GPT-based agents with contextual data visualizations for fine art authentication","authors":"Tan Tang , Yanhong Wu , Junming Gao , Kejia Ruan , Yanjie Zhang , Shuainan Ye , Yingcai Wu , Xiaojiao Chen","doi":"10.1016/j.visinf.2024.11.001","DOIUrl":"10.1016/j.visinf.2024.11.001","url":null,"abstract":"<div><div>Fine art authentication plays a significant role in protecting cultural heritage and ensuring the integrity of artworks. Traditional authentication methods require professionals to collect many reference materials and conduct detailed analyses. To ease the difficulty, we collaborate with domain experts to develop a GPT-based agent, namely ArtEyer, that offers accurate attributions, determines the origin and authorship, and executes visual analytics. Despite the convenience of the conversational user interface, novice users may still face challenges due to the hallucination issue and the steep learning curve associated with prompting. To face these obstacles, we propose a novel solution that places interactive data visualizations into the conversations. We create contextual visualizations from an external domain-dependent database to ensure data trustworthiness and allow users to provide precise instructions to the agent by interacting directly with these visualizations, thus overcoming the vagueness inherent in natural language-based prompting. We evaluate ArtEyer through an in-lab user study and demonstrate its usage with a real-world case.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 48-59"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143098848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.11.002
Júlio Castro Lopes, Rui Pedro Lopes
{"title":"Computer Vision in Augmented, Virtual, Mixed and Extended Reality environments—A bibliometric review","authors":"Júlio Castro Lopes, Rui Pedro Lopes","doi":"10.1016/j.visinf.2024.11.002","DOIUrl":"10.1016/j.visinf.2024.11.002","url":null,"abstract":"<div><div>This work describes a bibliometric analysis of the literature on the use of computer vision algorithms in Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and Extended Reality (XR) environments. The analysis aims to highlight the evolution, trends, and effects of research in this field. This review provides an overview of immersive technologies and their applications, as well as the role of computer vision algorithms in enabling these technologies and the potential benefits of using such algorithms. This study identifies important authors, institutions, and research themes by using bibliometric indicators such as citation counts, co-citation analysis, and network analysis. The analysis also identifies gaps and opportunities for additional research in this area, as well as a critical assessment of the quality and relevance of the publications.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 13-22"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143098854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.10.004
Jie Liu, Jie Li, Jielong Kuang
{"title":"Generative model-assisted sample selection for interest-driven progressive visual analytics","authors":"Jie Liu, Jie Li, Jielong Kuang","doi":"10.1016/j.visinf.2024.10.004","DOIUrl":"10.1016/j.visinf.2024.10.004","url":null,"abstract":"<div><div>We propose interest-driven progressive visual analytics. The core idea is to filter samples with features of interest to analysts from the given dataset for analysis. The approach relies on a generative model (GM) trained using the given dataset as the training set. The GM characteristics make it convenient to find ideal generated samples from its latent space. Then, we filter the original samples similar to the ideal generated ones to explore patterns. Our research involves two methods for achieving and applying the idea. First, we give a method to explore ideal samples from a GM’s latent space. Second, we integrate the method into a system to form an embedding-based analytical workflow. Patterns found on open datasets in case studies, results of quantitative experiments, and positive feedback from experts illustrate the general usability and effectiveness of the approach.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 97-108"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.10.002
Yang Zhang, Jie Li, Xu Chao
{"title":"ChemNav: An interactive visual tool to navigate in the latent space for chemical molecules discovery","authors":"Yang Zhang, Jie Li, Xu Chao","doi":"10.1016/j.visinf.2024.10.002","DOIUrl":"10.1016/j.visinf.2024.10.002","url":null,"abstract":"<div><div>In recent years, AI-driven drug development has emerged as a prominent research topic in computer chemistry. A key focus is the application of generative models for molecule synthesis, which create extensive virtual libraries of chemical molecules based on latent spaces. However, locating molecules with desirable properties within the vast latent spaces remains a significant challenge. Large regions of invalid samples in the latent space, called “dead zones”, can impede the exploration efficiency. The process is always time-consuming and repetitive. Therefore, we aim to propose a visualization system to help experts identify potential molecules with desirable properties as they wander in the latent space. Specifically, we conducted a literature survey about the application of generative networks in drug synthesis to summarize the tasks and followed this with expert interviews to determine their requirements. Based on the above requirements, we introduce ChemNav, an interactive visual tool for navigating latent space for desirable molecules search. ChemNav incorporates a heuristic latent space interpolation path search algorithm to enhance the efficiency of valid molecule generation, and a similar sample search algorithm to accelerate the discovery of similar molecules. Evaluations of ChemNav through two case studies, a user study, and experiments demonstrated its effectiveness in inspiring researchers to explore the latent space for chemical molecule discovery.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 60-70"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.09.006
Magnus Nylin, Jonas Lundberg, Magnus Bång, Kostiantyn Kucher
{"title":"Glyph design for communication initiation in real-time human-automation collaboration","authors":"Magnus Nylin , Jonas Lundberg , Magnus Bång , Kostiantyn Kucher","doi":"10.1016/j.visinf.2024.09.006","DOIUrl":"10.1016/j.visinf.2024.09.006","url":null,"abstract":"<div><div>Initiating communication and conveying critical information to the human operator is a key problem in human-automation collaboration. This problem is particularly pronounced in time-constrained safety critical domains such as in Air Traffic Management. A visual representation should aid operators understanding <em>why</em> the system initiates the communication, <em>when</em> the operator must act, and the <em>consequences of not responding</em> to the cue. Data <em>glyphs</em> can be used to present multidimensional data, including temporal data in a compact format to facilitate this type of communication. In this paper, we propose a glyph design for communication initialization for highly automated systems in Air Traffic Management, Vessel Traffic Service, and Train Traffic Management. The design was assessed by experts in these domains in three workshop sessions. The results showed that the number of glyphs to be presented simultaneously and the type of situation were domain-specific glyph design aspects that needed to be adjusted for each work domain. The results also showed that the core of the glyph design could be reused between domains, and that the operators could successfully interpret the temporal data representations. We discuss similarities and differences in the applicability of the glyph design between the different domains, and finally, we provide some suggestions for future work based on the results from this study.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 23-35"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143098851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.10.003
Fang Zhu, Xufei Zhu, Xumeng Wang, Yuxin Ma, Jieqiong Zhao
{"title":"ATVis: Understanding and diagnosing adversarial training processes through visual analytics","authors":"Fang Zhu , Xufei Zhu , Xumeng Wang , Yuxin Ma , Jieqiong Zhao","doi":"10.1016/j.visinf.2024.10.003","DOIUrl":"10.1016/j.visinf.2024.10.003","url":null,"abstract":"<div><div>Adversarial training has emerged as a major strategy against adversarial perturbations in deep neural networks, which mitigates the issue of exploiting model vulnerabilities to generate incorrect predictions. Despite enhancing robustness, adversarial training often results in a trade-off with standard accuracy on normal data, a phenomenon that remains a contentious issue. In addition, the opaque nature of deep neural network models renders it more difficult to inspect and diagnose how adversarial training processes evolve. This paper introduces ATVis, a visual analytics framework for examining and diagnosing adversarial training processes. Through multi-level visualization design, ATVis enables the examination of model robustness from various granularity, facilitating a detailed understanding of the dynamics in the training epochs. The framework reveals the complex relationship between adversarial robustness and standard accuracy, which further offers insights into the mechanisms that drive the trade-offs observed in adversarial training. The effectiveness of the framework is demonstrated through case studies.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 71-84"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143098849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Informatics | Pub Date: 2024-12-01 | DOI: 10.1016/j.visinf.2024.10.005
João Moreira, Daniel Mendes, Daniel Gonçalves
{"title":"Incidental visualizations: How complexity factors influence task performance","authors":"João Moreira , Daniel Mendes , Daniel Gonçalves","doi":"10.1016/j.visinf.2024.10.005","DOIUrl":"10.1016/j.visinf.2024.10.005","url":null,"abstract":"<div><div>Incidental visualizations convey information to a person during an ongoing primary task, without the person consciously searching for or requesting that information. They differ from glanceable visualizations by not being people’s main focus, and from ambient visualizations by not being embedded in the environment. Instead, they are presented as secondary information that can be observed without a person losing focus on their current task. However, despite extensive research on glanceable and ambient visualizations, the topic of incidental visualizations is yet a novel topic in current research. To bridge this gap, we conducted an empirical user study presenting participants with an incidental visualization while performing a primary task. We aimed to understand how complexity contributory factors — task complexity, output complexity, and pressure — affected primary task performance and incidental visualization accuracy. Our findings showed that incidental visualizations effectively conveyed information without disrupting the primary task, but working memory limitations should be considered. Additionally, output and pressure significantly influenced the primary task’s results. In conclusion, our study provides insights into the perception accuracy and performance impact of incidental visualizations in relation to complexity factors.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"8 4","pages":"Pages 85-96"},"PeriodicalIF":3.8,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143098850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}