VirtuNarrator: Crafting museum narratives via spatial layout in creating customized virtual museums
Yonghao Chen, Tan Tang, Xiaojiao Chen, Yueying Li, Qinghua Liu, Xiaosong Wang
Visual Informatics 9(3), Article 100257, 2025-09-01. DOI: 10.1016/j.visinf.2025.100257
Abstract: Curation in museums involves not only presenting exhibits to visitors but also shaping a systematic narrative experience through deliberate spatial layout design of the museum space. The dynamic nature of virtual reality (VR) environments makes virtual museums an even more potent space for layout optimization and narrative construction, particularly when visitors’ diverse preferences are integrated to tailor the virtual museum and convey its narratives. We first collaborated with experienced curators on a formative study to understand the curation workflow and to summarize the museum narratives that weave exhibits, galleries, and museum architecture into a compelling story. We then proposed a museum spatial layout framework with three narrative levels (exhibit level, gallery level, and architecture level) to support controllable spatial layout of the museum’s elements. On this basis, we developed VirtuNarrator, a proof-of-concept prototype that assists visitors in choosing narrative themes, filtering exhibits, creating and adjusting galleries, and freely connecting them. The evaluation showed that visitors gained a more systematic museum narrative experience and perceived the multi-perspective narrative design in VirtuNarrator. We also provide insights into VR-based museum narrative enhancement beyond spatial layout design.
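The three narrative levels above map naturally onto a small hierarchical data model. The sketch below is purely illustrative; every class and field name is an assumption, not the authors’ implementation.

```python
# Hypothetical three-level layout model (exhibit -> gallery -> architecture),
# loosely following the narrative levels named in the abstract above.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Exhibit:
    name: str
    theme: str                     # narrative theme the exhibit supports
    position: Tuple[float, float]  # placement inside its gallery


@dataclass
class Gallery:
    name: str
    theme: str
    exhibits: List[Exhibit] = field(default_factory=list)


@dataclass
class MuseumArchitecture:
    galleries: List[Gallery] = field(default_factory=list)
    connections: List[Tuple[str, str]] = field(default_factory=list)  # gallery-to-gallery links

    def filter_by_theme(self, theme: str) -> List[Exhibit]:
        """Collect exhibits that match a visitor-chosen narrative theme."""
        return [e for g in self.galleries for e in g.exhibits if e.theme == theme]
```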
Visualizing game dynamics at a specific time: Influence of the players’ poses for tactical analyses in padel
Mohammadreza Javadiha, Carlos Andujar, Enrique Lacasa, Gota Shirato, Natalia Andrienko, Gennady Andrienko
Visual Informatics 9(3), Article 100256, 2025-09-01. DOI: 10.1016/j.visinf.2025.100256
Abstract: Tactical elements are crucial in team sports. The analysis of hypothetical game situations benefits greatly from positional diagrams showing where the players are. These diagrams often depict the players with simple symbols, which convey no information about their poses. This paper investigates whether visualizing player poses aids the tactical understanding of positional diagrams in padel. We propose a realistic, cartoon-like representation of the players and discuss its integration into a typical positional diagram. To overcome the cost of producing player representations that depict pose, we propose a method to generate such representations from minimal user input. We conducted a user study to evaluate the effectiveness of our pose-aware diagrams. The study tasks were designed to cover the main in-game scenarios in padel: the ball holder at the net with opponents defending, the reverse situation, and transitions between these two states. We found that our representation is preferred over a symbolic one that only indicates player orientation. The proposed method enables coaches to produce such representations within seconds, greatly facilitating the creation of detailed and easily analyzable depictions of game situations.
PVeSight: Dimensionality reduction-based anomaly detection and visual analysis of photovoltaic strings
Yurun Yang, Xinjing Yi, Yingqiang Jin, Sen Li, Kang Ma, Shuhan Liu, Dazhen Deng, Di Weng, Yingcai Wu
Visual Informatics 9(3), Article 100243, 2025-09-01. DOI: 10.1016/j.visinf.2025.100243
Abstract: Efficient and accurate detection of anomalies in photovoltaic (PV) strings is essential for the normal operation of PV power stations. Most existing studies focus on automated anomaly detection models based on temporal abnormalities in PV strings. However, because analyzing anomalies often requires domain knowledge, these automated methods are of limited help to experts who need to understand the causes and impact of the anomalies. In close collaboration with domain experts, we summarized the specific user requirements for PV string anomaly detection and designed PVeSight, an interactive visual analysis system that helps experts discover and analyze anomalies in PV strings. We use dimensionality reduction techniques to generate string pattern maps, which support anomaly detection, anomaly classification, comparative analysis between strings, and hierarchical analysis under inverters and combiner boxes. This helps experts trace the causes of anomalies and gain valuable insights into anomalous PV strings. Through case studies and expert evaluation, we verified the usability and effectiveness of PVeSight for PV string anomaly detection.
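The core screening step (project each string’s time series into a low-dimensional pattern map, then look for outliers) can be sketched in a few lines. PCA and LocalOutlierFactor below are generic stand-ins; the paper does not commit to these particular algorithms, and the synthetic data is only for illustration.

```python
# Minimal sketch of dimensionality-reduction-based anomaly screening for PV
# strings. Each string is represented here by a synthetic daily power curve;
# PCA and LocalOutlierFactor are generic stand-ins for whatever projection
# and scoring the system actually uses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
curves = rng.normal(1.0, 0.05, size=(200, 96))  # 200 strings x 96 samples per day (synthetic)
curves[:5] *= 0.6                               # inject a few under-performing strings

# 2D "string pattern map": similar strings cluster, odd strings drift away.
embedding = PCA(n_components=2).fit_transform(curves)

labels = LocalOutlierFactor(n_neighbors=20).fit_predict(embedding)  # -1 marks outliers
print("candidate anomalous strings:", np.where(labels == -1)[0])
```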
STEP-LINK: STEP-by-Step Tutorial Editing with Programmable LINKages
Te Li, Junming Ke, Zhen Wen, Yuchen Wu, Junhua Lu, Biao Zhu, Minfeng Zhu, Wei Chen
Visual Informatics 9(3), Article 100244, 2025-09-01. DOI: 10.1016/j.visinf.2025.100244
Abstract: Programming tutorials serve a crucial role in teaching coding and programming techniques, yet creating high-quality tutorials remains laborious: authors devote effort to writing step-by-step solutions, creating examples, and editing existing tutorials. We explore the potential of the text-code connection to improve the authoring experience of programming tutorials. We propose a mixed-initiative approach to infer, establish, and maintain the latent text-code connections. Through a series of interactions, the STEP-LINK (STEP-by-Step Tutorial Editing with Programmable LINKages) prototype leverages these connections to assist users in authoring tutorials. Our experiment demonstrates the effectiveness of the system in supporting users in authoring step-by-step code explanations, creating examples, and iterating on tutorials.
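To make the notion of a latent text-code connection concrete, here is a deliberately naive sketch that links tutorial steps to code lines by shared identifiers. It is a lexical stand-in for the mixed-initiative inference the paper describes; the function name and example data are assumptions.

```python
# Naive lexical linking of tutorial steps to code lines via shared identifiers,
# a stand-in for mixed-initiative text-code connection inference.
import re
from typing import Dict, List

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def link_steps_to_code(steps: List[str], code_lines: List[str]) -> Dict[int, List[int]]:
    """Return, for each step index, the code-line indices sharing an identifier."""
    links: Dict[int, List[int]] = {}
    for si, step in enumerate(steps):
        tokens = set(IDENT.findall(step))
        links[si] = [li for li, line in enumerate(code_lines)
                     if tokens & set(IDENT.findall(line))]
    return links

steps = ["Create the figure and axes", "Plot the totals as a bar chart"]
code = ["fig, axes = plt.subplots()", "axes.bar(x, totals)"]
print(link_steps_to_code(steps, code))  # {0: [0, 1], 1: [1]}
```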
Dye advection without the blur: ML-based flow visualization
Sebastian Künzel, Daniel Weiskopf
Visual Informatics 9(3), Article 100242, 2025-09-01. DOI: 10.1016/j.visinf.2025.100242
Abstract: Semi-Lagrangian texture advection (SLTA) enables efficient visualization of 2D and 3D unsteady flow, but its major drawback is numerical diffusion caused by iterative texture interpolation. We focus on reducing numerical diffusion in techniques that use textures sparsely populated by solid blobs, as is typical in dye advection. A ReLU-based model architecture is the foundation of our ML-based approach. Multiple model configurations are trained to learn a performant interpolation model that reduces numerical diffusion. Our evaluation investigates the models’ ability to generalize with respect to the flow and the length of the advection process. The model with the best trade-off between computational effort, result quality, and generality of application turns out to be a single-layer ReLU model. We analyze and explain this model in depth and improve it using symmetry constraints. Additionally, a metamodel is fitted to predict the single-layer ReLU model parameters for advection processes of any length, removing the need for prior training when applying our technique to a new scenario. We also show that our model is compatible with Back and Forth Error Compensation and Correction, which further improves the quality of the advection result. Our model shows excellent diffusion-reduction properties in typical examples of 3D steady and unsteady flow visualization. Finally, we utilize its strong diffusion reduction to compute dye advection with exponential decay, a novel method we introduce to visualize the extent and evolution of unsteadiness in both 2D and 3D unsteady flow.
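The basic idea (replace the texture-interpolation step of semi-Lagrangian advection with a small learned model) can be sketched as follows. Layer sizes, input layout, and the random data are assumptions for illustration; this is not the trained model from the paper.

```python
# Sketch of an ML-based interpolation step for semi-Lagrangian dye advection:
# a single hidden ReLU layer maps the four neighboring texel values plus the
# fractional backtrace offsets to an interpolated value, in place of plain
# bilinear interpolation.
import torch
import torch.nn as nn

class ReLUInterpolator(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden),   # 4 neighbor values + (fx, fy) fractional offsets
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, neighbors: torch.Tensor, frac: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([neighbors, frac], dim=-1)).squeeze(-1)

# One advection step for a batch of backtraced sample points:
model = ReLUInterpolator()
neighbors = torch.rand(1024, 4)   # dye values at the 4 texels around each backtraced point
frac = torch.rand(1024, 2)        # fractional position within the texel cell
new_dye = model(neighbors, frac)  # replaces bilinear interpolation in SLTA
```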
FlowLLM: Large language model driven flow visualization
Zilin Li, Weihan Zhang, Jun Tao
Visual Informatics 9(3), Article 100241, 2025-09-01. DOI: 10.1016/j.visinf.2025.100241
Abstract: Flow visualization is an essential tool for domain experts to understand and analyze flow fields intuitively. Over the past decades, various interactive techniques have been developed to customize flow visualization for exploration. However, these techniques usually rely on purpose-built graphical interfaces that require considerable learning and usage effort. Recently, FlowNL (Huang et al., 2023) introduced a natural language interface to reduce this effort, but it still struggles with natural-language ambiguity due to a lack of domain knowledge and offers limited ability to understand context in dialogues. To address these issues, we propose an explorative flow visualization approach powered by a large language model that interacts with users. Our approach leverages an extensive dataset of flow-related queries to train the model, enhancing its ability to interpret a wide range of natural language expressions and to maintain context over multi-turn interactions. Additionally, we introduce an advanced dialogue management system that supports continuous, interactive communication between users and the system. Our empirical evaluations demonstrate significant improvements in user engagement and in the accuracy of flow structure extraction. These enhancements are crucial for expanding the applicability of flow visualization systems in real-world scenarios, where effective and intuitive user interfaces are paramount.
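A natural-language interface of this kind typically reduces to translating a request into a structured query that the visualization backend can execute. The sketch below shows that pattern only; `llm_complete` is a hypothetical placeholder rather than a real API, and the JSON schema is an assumption, not FlowLLM’s actual specification.

```python
# Sketch: turn a natural-language request into a structured flow-query spec.
import json

def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here, passing conversation
    # history so follow-up requests resolve against earlier context.
    raise NotImplementedError

def parse_flow_request(user_text: str) -> dict:
    prompt = (
        "Convert the request into JSON with keys "
        '{"primitive": "streamline|pathline|isosurface", '
        '"filter": "<boolean expression over fields>", "color": "<field>"}.\n'
        f"Request: {user_text}"
    )
    return json.loads(llm_complete(prompt))

# Expected shape of the result for a request such as
# "show streamlines where vorticity is high, colored by velocity magnitude":
# {"primitive": "streamline", "filter": "vorticity > p90", "color": "velocity_magnitude"}
```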
InferEdit: An instruction-based system with a multimodal LLM for complex multi-target image editing
Zhiyong Huang, Yali She, MengLi Xiang, TuoJun Ding
Visual Informatics 9(3), Article 100265, 2025-09-01. DOI: 10.1016/j.visinf.2025.100265
Abstract: To address the limitations of existing instruction-based image editing methods in handling complex multi-target instructions and maintaining semantic consistency, we present InferEdit, a training-free image editing system driven by a multimodal large language model (MLLM). The system parses complex multi-target instructions into sequential subtasks and performs editing iteratively through target localization and semantic reasoning. Furthermore, to adaptively select the most suitable editing models, we construct the evaluation dataset InferDataset, which assesses various editing models on three types of tasks: object removal, object replacement, and local editing. Based on a comprehensive scoring mechanism, we build binary search trees (BSTs) for the different editing types to facilitate model scheduling. Experiments demonstrate that InferEdit outperforms existing methods in handling complex instructions while maintaining semantic consistency and visual quality.
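The decompose-then-dispatch loop described above can be sketched as follows. The MLLM call, the model names, and the scores are all placeholders, and for brevity a plain highest-score lookup stands in for the BST-based scheduling the paper uses.

```python
# Sketch of a decompose-then-dispatch editing loop with placeholder models.
from typing import List, Tuple

# Per-task-type model scores (all names and numbers are invented for illustration).
MODEL_SCORES = {
    "object_removal":     [("inpaint_model_a", 0.81), ("inpaint_model_b", 0.74)],
    "object_replacement": [("edit_model_x", 0.78)],
    "local_editing":      [("edit_model_y", 0.83)],
}

def decompose_instruction(instruction: str, image) -> List[Tuple[str, str]]:
    # Placeholder for the MLLM call that splits a complex instruction into
    # ordered (task_type, target) pairs, e.g. [("object_removal", "the red car")].
    raise NotImplementedError

def apply_model(model_name: str, image, target: str):
    # Placeholder for invoking the selected editing model on one subtask.
    raise NotImplementedError

def run_edit(image, instruction: str):
    for task_type, target in decompose_instruction(instruction, image):
        model_name, _ = max(MODEL_SCORES[task_type], key=lambda m: m[1])
        image = apply_model(model_name, image, target)  # edit iteratively, subtask by subtask
    return image
```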
Interactive simulation and visual analysis of social media event dynamics with LLM-based multi-agent modeling
Zichen Cheng, Ziyue Lin, Yihang Yang, Zhongyu Wei, Siming Chen
Visual Informatics 9(3), Article 100260, 2025-09-01. DOI: 10.1016/j.visinf.2025.100260
Abstract: With the increasing role of social media in information dissemination, effectively simulating and analyzing public event dynamics has become a key research focus. We present an interactive visual analysis system for simulating social media events using multi-agent models powered by large language models (LLMs). By modeling agents with diverse characteristics, the system explores how agents perceive information, adjust their emotions and stances, provide feedback, and influence the trajectory of events. The system integrates real-time interactive simulation with multi-perspective visualization, enabling users to investigate event trajectories and key influencing factors under varied configurations. Theoretical work standardizes agent attributes and interaction mechanisms, supporting realistic simulation of social media behaviors. Evaluation through indicators and case studies demonstrates the system’s effectiveness and adaptability, offering a novel tool for public event analysis across open social platforms.
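One round of such a simulation boils down to each agent perceiving new posts, updating its emotion and stance, and possibly posting in turn. The sketch below shows that loop only; the attribute names and the `llm_respond` helper are illustrative assumptions, not the paper’s interface.

```python
# Minimal sketch of an LLM-backed social-media agent update loop.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    persona: str                 # e.g. "skeptical journalist"
    stance: float = 0.0          # -1 (against) .. +1 (supportive)
    emotion: str = "neutral"
    feed: List[str] = field(default_factory=list)

def llm_respond(persona: str, stance: float, emotion: str, posts: List[str]) -> dict:
    # Placeholder for the LLM call returning an updated stance, emotion,
    # and an optional post, conditioned on the agent's persona and feed.
    raise NotImplementedError

def simulation_step(agents: List[Agent], new_posts: List[str]) -> List[str]:
    produced = []
    for agent in agents:
        agent.feed.extend(new_posts)                       # perceive information
        update = llm_respond(agent.persona, agent.stance, agent.emotion, agent.feed)
        agent.stance, agent.emotion = update["stance"], update["emotion"]
        if update.get("post"):                             # provide feedback
            produced.append(update["post"])
    return produced                                        # posts that shape the next round
```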
CineFolio: Cinematography-guided camera planning for immersive narrative visualization
Zhan Wang, Qian Zhu, David Yip, Fugee Tsung, Wei Zeng
Visual Informatics 9(3), Article 100259, 2025-09-01. DOI: 10.1016/j.visinf.2025.100259
Abstract: Narrative visualization facilitates data presentation and communicates insights, while virtual reality can further enhance immersion and engagement; combining the two has the potential to change how data is presented and understood. Within narrative visualization, empirical evidence has particularly highlighted the importance of camera planning, yet existing work relies primarily on user-intensive camera manipulation, with little effort put into automating the process. To fill this gap, this paper proposes CineFolio, a semi-automated camera planning method that reduces manual effort and enhances user experience in immersive narrative visualization. CineFolio combines cinematic theories with graphics criteria, considering both information delivery and aesthetic enjoyment to ensure a comfortable and engaging experience. Specifically, we parametrize these considerations as optimizable camera properties and solve them as a constraint satisfaction problem (CSP) to realize common camera types for narrative visualization: an overview camera for absorbing the scale, a focus camera for detailed views, a moving camera for animated transitions, and a user-controlled camera that lets users provide input to camera planning. We demonstrate the feasibility of our approach with cases covering various data and chart types. To further evaluate the approach, we conducted a within-subject user study comparing our automated method with manual camera control; the results confirm both the effectiveness of the guided navigation and the expressiveness of the cinematic design for narrative visualization.
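Framing camera placement as constrained optimization can be illustrated with a toy problem: keep a focus point at a preferred viewing distance while the camera stays above a minimum height. The cost term, the constraint, and the use of SciPy are illustrative assumptions, far simpler than the cinematic criteria the paper encodes.

```python
# Toy constrained camera-placement problem as a stand-in for CSP-style planning.
import numpy as np
from scipy.optimize import minimize

focus = np.array([0.0, 0.0, 1.5])    # point of interest in the scene
preferred_dist = 3.0
min_height = 1.2

def cost(cam):
    dist = np.linalg.norm(cam - focus)
    return (dist - preferred_dist) ** 2          # prefer a comfortable viewing distance

constraints = [{"type": "ineq", "fun": lambda cam: cam[2] - min_height}]  # stay above floor level

result = minimize(cost, x0=np.array([2.0, 2.0, 2.0]), method="SLSQP",
                  constraints=constraints)
print("camera position:", result.x)
```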
T-Foresight: Interpret moving strategies based on context-aware trajectory prediction
Yueqiao Chen, Jiang Wu, Yingcai Wu, Dongyu Liu
Visual Informatics 9(3), Article 100261, 2025-09-01. DOI: 10.1016/j.visinf.2025.100261
Abstract: Trajectory prediction and interpretation are crucial in various domains for optimizing movements in complex environments. However, understanding how diverse contextual factors (environmental, physical, and social) influence moving strategies is challenging due to their multifaceted nature, which complicates quantification and the derivation of actionable insights. We introduce an interpretable analytics workflow that addresses these challenges by innovatively leveraging ensemble learning for context-aware trajectory prediction. Multiple base predictors simulate diverse moving strategies, while a decision-making model assesses the suitability of each predictor in specific contexts. This approach quantifies the impact of contextual factors by interpreting the decision-making model’s predictions and reveals possible moving strategies through the aggregation of base predictors’ outputs. The workflow comes with T-Foresight, an interactive visualization interface that empowers stakeholders to explore predictions, interpret contextual influences, and devise and compare moving strategies effectively. We evaluate our approach in the domain of eSports, specifically MOBA games. Through case studies with professional analysts, we demonstrate T-Foresight’s effectiveness in illustrating player moving strategies and providing insights into top-tier tactics. A user study further confirms its usefulness in helping average players uncover and understand advanced strategies.
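The ensemble-plus-gate structure described above is easy to illustrate: several base predictors propose next positions and a context-aware gate weights them, so inspecting the gate’s weights indicates which contextual factors favor which strategy. The predictors and the gating rule below are toy stand-ins, not the paper’s learned models.

```python
# Toy context-gated ensemble for next-position prediction.
import numpy as np

def predictor_retreat(pos, context):   # fall back toward one's own base
    return pos + np.array([-1.0, -1.0])

def predictor_push(pos, context):      # advance toward the objective
    return pos + np.array([1.0, 0.5])

def gate_weights(context):
    # Placeholder for the decision-making model; here, low health favors retreating.
    w_push = context["health"]                 # in [0, 1]
    return np.array([1.0 - w_push, w_push])

def predict_next(pos, context):
    proposals = np.stack([predictor_retreat(pos, context),
                          predictor_push(pos, context)])
    w = gate_weights(context)
    return (w[:, None] * proposals).sum(axis=0)    # weighted aggregation of base predictors

print(predict_next(np.array([10.0, 10.0]), {"health": 0.3}))
```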