Visual Informatics, Pub Date: 2025-04-07, DOI: 10.1016/j.visinf.2025.100237

"Visual analysis of multi-subject association patterns in high-dimensional time-varying student performance data"
Lianen Ji, Ziyi Wang, Shirong Qiu, Guang Yang, Sufang Zhang
Visual Informatics, vol. 9, no. 2, Article 100237

Abstract: Exploring the association patterns in student performance in depth can help administrators and teachers optimize curriculum structure and teaching plans, improving teaching effectiveness in a college undergraduate major. However, these high-dimensional, time-varying student performance data involve multiple associated subjects (student, course, and teacher) that exhibit complex interrelationships across academic semesters, knowledge categories, and student groups, which makes a comprehensive analysis of association patterns challenging. To this end, we construct a visual analysis framework, MAPVis, that supports multi-method, multi-level interactive exploration of association patterns in student performance. MAPVis consists of two stages: in the first, we extract students' learning patterns and introduce mutual information to explore the distribution of learning patterns; in the second, various learning patterns and subject attributes are integrated through a hierarchical Apriori algorithm to enable multi-subject interactive exploration of the association patterns among students, courses, and teachers. Finally, we conduct a case study on real student performance data to verify the applicability and effectiveness of MAPVis.
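The abstract names two concrete techniques: mutual information (stage one) and a hierarchical Apriori algorithm (stage two). As a hedged sketch of the first, the snippet below computes the mutual information between two discrete label sequences; the learning-pattern and course-category labels are invented for illustration and are not from the paper's data.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits between two discrete label sequences."""
    n = len(xs)
    px = Counter(xs)                 # marginal counts of X
    py = Counter(ys)                 # marginal counts of Y
    pxy = Counter(zip(xs, ys))       # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * n^2 / (px * py) == p(x,y) / (p(x) * p(y))
        mi += p_joint * log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Hypothetical data: each student's learning-pattern label vs. course category.
patterns = ["steady", "steady", "late-surge", "late-surge"]
categories = ["math", "math", "cs", "cs"]
print(mutual_information(patterns, categories))  # → 1.0 bit: perfectly associated
```

With perfectly aligned labels as above, the pattern determines the category, so the result is the full 1 bit of entropy; independent labelings would score near zero.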
Visual Informatics, Pub Date: 2025-03-25, DOI: 10.1016/j.visinf.2025.100235

"VisMocap: Interactive visualization and analysis for multi-source motion capture data"
Lishuang Zhan, Rongting Li, Rui Cao, Juncong Lin, Shihui Guo
Visual Informatics, vol. 9, no. 2, Article 100235

Abstract: With the rapid advancement of artificial intelligence, research on enabling computers to assist humans in intelligent augmentation, enhancing the accuracy and efficiency of information perception and processing, has been steadily evolving. Among these developments, innovations in human motion capture technology have emerged rapidly, leading to an increasing diversity of motion capture data types. This diversity calls for a unified standard for multi-source data so that their ability to represent human motion can be analyzed and compared effectively. In addition, motion capture data often suffer from significant noise, acquisition delays, and asynchrony, making their effective processing and visualization a critical challenge. In this paper, we use data collected from a prototype of flexible fabric-based motion capture clothing and from optical motion capture devices as inputs. We perform time synchronization and error analysis between the two data types, segment individual actions from continuous motion sequences, and present the processed results in a concise, intuitive visualization interface. Finally, we evaluate various system metrics, including the accuracy of time synchronization, the fitting error when mapping fabric resistance to joint angles, the precision of motion segmentation, and user feedback.
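Time synchronization between the fabric-sensor and optical streams is one processing step mentioned above. The abstract does not state which method the authors use; a common approach, sketched here purely as an assumption, estimates the sample lag that maximizes the cross-correlation of the two signals.

```python
import numpy as np

def estimate_lag(ref, sig):
    """Return the shift (in samples) of `sig` relative to `ref`
    that maximizes their cross-correlation."""
    ref = ref - ref.mean()                      # remove DC offset
    sig = sig - sig.mean()
    corr = np.correlate(sig, ref, mode="full")  # all lags from -(N-1) to N-1
    # index len(ref)-1 of the full output corresponds to zero lag
    return int(np.argmax(corr) - (len(ref) - 1))

# Hypothetical joint-angle traces: `sig` is `ref` delayed by 3 samples.
t = np.linspace(0, 4 * np.pi, 200)
ref = np.sin(t)
sig = np.concatenate([np.zeros(3), ref[:-3]])
print(estimate_lag(ref, sig))  # → 3
```

In practice the two streams would first be resampled to a common rate, and the estimated lag applied before computing joint-angle errors.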
Visual Informatics, Pub Date: 2025-03-21, DOI: 10.1016/j.visinf.2025.100234

"Contextualized visual analytics for multivariate events"
Lei Peng, Ziyue Lin, Natalia Andrienko, Gennady Andrienko, Siming Chen
Visual Informatics, vol. 9, no. 2, Article 100234

Abstract: For event analysis, information from both before and after an event can be crucial in certain scenarios. By incorporating a contextualized perspective into event analysis, analysts can gain deeper insights from the events. We propose a contextualized visual analysis framework that enables the identification and interpretation of temporal patterns within and across multivariate events. The framework consists of a visual representation design for multivariate event contexts, a data processing workflow to support the visualization, and a context-centered visual analysis system that facilitates interactive exploration of temporal patterns. To demonstrate the applicability and effectiveness of our framework, we present case studies using real-world datasets from two different domains and an expert study conducted with experienced data analysts.
Visual Informatics, Pub Date: 2025-03-19, DOI: 10.1016/j.visinf.2025.03.002

"CodeLin: An in situ visualization method for understanding data transformation scripts"
Xiwen Cai, Kai Xiong, Zhongsu Luo, Di Weng, Shuainan Ye, Yingcai Wu
Visual Informatics, vol. 9, no. 2, Article 100233

Abstract: Understanding data transformation scripts is an essential task for data analysts who write code to process data. However, it can be challenging, especially with unfamiliar scripts. Comments can help users understand data transformation code, but well-written comments are not always present. Visualization methods have been proposed to help analysts understand data transformations, but they generally require a separate view, which may distract users and entail effort to connect visualizations and code. In this work, we explore the use of in situ program visualization to help data analysts understand data transformation scripts. We present CodeLin, a new visualization method that combines word-sized glyphs presenting transformation semantics with a lineage graph presenting data lineage, both rendered in situ. Through a use case, code pattern demonstrations, and a preliminary user study, we demonstrate the effectiveness and usability of CodeLin. We further discuss how visualization can help users understand data transformation code.
Visual Informatics, Pub Date: 2025-03-01, DOI: 10.1016/j.visinf.2025.02.001

"Key-isovalue selection and hierarchical exploration visualization of weather forecast ensembles"
Feng Zhou, Hao Hu, Fengjie Wang, Jiamin Zhu, Wenwen Gao, Min Zhu
Visual Informatics, vol. 9, no. 1, Pages 58-70

Abstract: Weather forecast ensembles are commonly used to assess the uncertainty and confidence of weather predictions. Conventional methods in meteorology often employ ensemble mean and standard deviation plots, as well as spaghetti plots, to visualize ensemble data, but these methods suffer from significant information loss and visual clutter. In this paper, we propose a new approach for uncertainty visualization of weather forecast ensembles, comprising isovalue selection based on information loss and a hierarchical visualization that integrates visual abstraction with detail preservation. Our approach uses non-uniform downsampling to select key isovalues and provides an interactive visualization method based on hierarchical clustering. First, we sample key isovalues by contour probability similarity and determine the optimal sampling number using an information-loss curve; the corresponding isocontours are then presented to guide users in selecting key isovalues. Once an isovalue is chosen, we perform agglomerative hierarchical clustering on its isocontours based on signed distance fields and generate a visual abstraction for each isocontour cluster to avoid visual clutter. We link a bubble tree to the visual abstractions to explore the details of isocontour clusters at different levels. We demonstrate the utility of our approach through two case studies with meteorological experts on real-world data, validate its effectiveness by quantitatively assessing information loss and visual clutter, and confirm its usability through expert evaluation.
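The abstract describes agglomerative hierarchical clustering of isocontours based on signed distance fields. Below is a minimal sketch of that idea with an invented toy ensemble of circular contours on a shared grid; the grid size, masks, and two-cluster cut are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.cluster.hierarchy import linkage, fcluster

def signed_distance_field(mask):
    """Signed distance to the region boundary: negative inside, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

# Hypothetical ensemble: binary "inside the isocontour" masks on a shared grid.
# Two circles centred on the left, two on the right.
yy, xx = np.mgrid[0:32, 0:32]
masks = [(xx - cx) ** 2 + (yy - 16) ** 2 < 36 for cx in (8, 9, 23, 24)]

# Each contour becomes one observation: its flattened signed distance field.
fields = np.stack([signed_distance_field(m).ravel() for m in masks])
Z = linkage(fields, method="average")             # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters
print(labels)  # the two left-centred and two right-centred contours pair up
```

Comparing flattened distance fields (rather than raw binary masks) makes the pairwise distances sensitive to how far apart contours are, not just to their overlap.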
Visual Informatics, Pub Date: 2025-03-01, DOI: 10.1016/j.visinf.2025.03.001

"A human-centric perspective on interpretability in large language models"
Zihan Zhou, Minfeng Zhu, Wei Chen
Visual Informatics, vol. 9, no. 1, Pages A1-A3
Visual Informatics, Pub Date: 2025-03-01, DOI: 10.1016/j.visinf.2025.01.003

"ClayVolume: A progressive refinement interaction system for immersive visualization"
Zhenyuan Wang, Qing Zhao, Yue Zhang, Jinhui Zhang, Guihua Shan, Xiao Zhou, Dong Tian
Visual Informatics, vol. 9, no. 1, Pages 71-83

Abstract: Immersive visualization has become an important tool for discovering hidden patterns and gaining insights from data, and target acquisition is a fundamental step in such visual analysis. However, limited visual encoding attributes and the stacking and occlusion present in immersive environments make it challenging to discover valuable targets and make unambiguous selections. In this paper, we present ClayVolume, an interactive system designed for immersive visualization. It comprises metaphorical tools for customizing regions of interest (ROIs) and multiple views that serve as interactive and analytical mediums. ClayVolume empowers analysts to acquire valuable targets efficiently through progressively refined interaction methods, enabling further extraction of insights. We evaluate ClayVolume in the scenario of immersive visualization of network data and compare its performance against other techniques on target selection tasks. The results indicate that ClayVolume enables flexible target selection in immersive visualization and provides fast target discovery and localization.
Visual Informatics, Pub Date: 2025-03-01, DOI: 10.1016/j.visinf.2025.02.002

"What about thematic information? An analysis of the multidimensional visualization of individual mobility"
Aline Menin, Clément Quere, Jorge Wagner, Sonia Chardonnel, Paule-Annick Davoine, Wolfgang Stuerzlinger, Carla Maria Dal Sasso Freitas, Luciana Nedel, Marco Winckler
Visual Informatics, vol. 9, no. 1, Pages 99-115

Abstract: This paper reviews the literature on the visualization of individual mobility data, with a focus on thematic integration. It emphasizes the importance of visualization in understanding mobility patterns within a population and in helping mobility experts address domain-specific questions. We analyze 38 papers published between 2010 and 2024 in GIS and VIS venues that describe visualizations of multidimensional data related to individual movements in urban environments, concentrating on individual mobility rather than traffic data. Our primary aim is to report advances in interactive visualization for individual mobility analysis, particularly in the representation of thematic information about people's motivations for mobility. Our findings indicate that the thematic dimension is only partially represented in the literature, despite its critical significance in transportation. This gap often stems from the difficulty of identifying data sources that inherently provide this information, forcing visualization designers and developers to navigate multiple heterogeneous data sources. We identify the strengths and limitations of existing visualizations and suggest potential research directions for the field.
Visual Informatics, Pub Date: 2025-03-01, DOI: 10.1016/j.visinf.2025.02.003

"EmotionLens: Interactive visual exploration of the circumplex emotion space in literary works via affective word clouds"
Bingyuan Wang, Qing Shi, Xiaohan Wang, You Zhou, Wei Zeng, Zeyu Wang
Visual Informatics, vol. 9, no. 1, Pages 84-98

Abstract: Emotion (e.g., valence and arousal) is an important factor in literature (e.g., poetry and prose) and is valuable for tracing the lives and knowledge of historical figures and for appreciating the aesthetics of literary works. Digital humanities and computational literature currently apply data statistics extensively in emotion analysis but lack visual analytics for efficient exploration. To fill this gap, we propose a user-centric approach that integrates advanced machine learning models with intuitive visualization for emotion analysis in literature. We make three main contributions. First, we consolidate a new emotion dataset of literary works across periods, literary genres, and language contexts, augmented with fine-grained valence and arousal labels. Second, we design an interactive visual analytic system, EmotionLens, which allows users to perform multi-granularity (e.g., individual, group, society) and multi-faceted (e.g., distribution, chronology, correlation) analyses of literary emotions, supporting both exploratory and confirmatory approaches in digital humanities. In particular, we introduce a novel affective word cloud that augments word weight, position, and color to facilitate literary text analysis from an emotional perspective. To validate the usability and effectiveness of EmotionLens, we provide two consecutive case studies, two user studies, and interviews with experts from different domains. Our results show that EmotionLens bridges literary text, emotion, and various other attributes, enables efficient knowledge discovery in massive data, and facilitates raising and validating domain-specific hypotheses in literature.
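The affective word cloud above positions words in the valence-arousal circumplex. As an illustrative sketch (the polar mapping and the word ratings are assumptions, not taken from EmotionLens), each word can be placed at an angle encoding emotion quality and a radius encoding intensity:

```python
import math

def circumplex_position(valence, arousal):
    """Map a (valence, arousal) pair in [-1, 1]^2 to polar coordinates on the
    circumplex: angle in degrees (0 = pleasant, 90 = activated) and
    radius = emotion intensity."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    radius = math.hypot(valence, arousal)
    return angle, radius

# Hypothetical word ratings (not from the paper's dataset).
for word, v, a in [("serene", 0.8, -0.4), ("furious", -0.7, 0.8)]:
    angle, radius = circumplex_position(v, a)
    print(f"{word}: angle {angle:.0f} deg, intensity {radius:.2f}")
```

In a word-cloud layout, the angle could drive hue and placement while the radius scales word size alongside frequency-based weight.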
Visual Informatics, Pub Date: 2025-02-22, DOI: 10.1016/j.visinf.2025.01.002

"Leveraging personality as a proxy of perceived transparency in hierarchical visualizations"
Tomás Alves, Carlota Dias, Daniel Gonçalves, Sandra Gama
Visual Informatics, vol. 9, no. 1, Pages 43-57

Abstract: Understanding which factors affect the transparency of information visualization remains one of the most relevant challenges in current research, especially since trust shapes how users build on and use the presented knowledge. This work extends the current body of research by studying users' subjective evaluation of the transparency of hierarchical charts along the clarity, coverage, and look-and-feel dimensions. Additionally, we extend the user profile to better understand whether personality facets bias the trust-building process. Our results show that the data encodings do not affect how users perceive visualization transparency when controlling for personality factors, while the propensity to trust affects how users judge the clarity of a hierarchical chart. Our findings provide new insights into the research challenges of measuring trust and understanding the transparency of information visualization; in particular, we explore how personality factors manifest in the trust-building relationship and in user interaction within visualization systems.