Design Exploration of AI-Assisted Personal Affective Physicalization
Ruishan Wu, Zhuoyang Li, Charles Perin, Sheelagh Carpendale, Can Liu
IEEE Computer Graphics and Applications, early access. Published online 26 September 2025. DOI: 10.1109/MCG.2025.3614686

Abstract: Personal affective physicalization is the process by which individuals express emotions through tangible forms in order to record, reflect on, and communicate them. Yet such physical data representations can be challenging to design due to the abstract nature of emotions. Given the demonstrated potential of AI in detecting emotion and assisting design, we explore opportunities in AI-assisted design of personal affective physicalization using a research-through-design method. We developed PhEmotion, a tool for embedding LLM-extracted emotion values from human-AI conversations into the parametric design of physical artifacts. In a lab study, 14 participants created such artifacts based on their personal emotions, with and without AI support. We observed nuances and variations in participants' creative strategies, meaning-making processes, and perceptions of AI support in this context, and we found key tensions in human-AI co-creation that provide a nuanced agenda for future research in AI-assisted personal affective physicalization.

Towards softerware: Enabling personalization of interactive data representations for users with disabilities
Frank Elavsky, Marita Vindedal, Ted Gies, Patrick Carrington, Dominik Moritz, Øystein Moseng
IEEE Computer Graphics and Applications, early access. Published online 12 September 2025. DOI: 10.1109/MCG.2025.3609294

Abstract: Accessible design for some may still produce barriers for others. This tension, called access friction, creates challenges for both designers and end users with disabilities. To address it, we present the concept of softerware, a system design approach that gives end users the agency to meaningfully customize and adapt interfaces to their needs. To apply softerware to visualization, we assembled 195 data visualization customization options centered on the barriers we expect users with disabilities to experience. We built a prototype that applies a subset of these options and interviewed practitioners for feedback. Lastly, we conducted a design probe study with blind and low-vision accessibility professionals to learn more about their challenges and visions for softerware. We observed access frictions between our participants' designs, and participants expressed that for softerware to succeed, current and future systems must be designed with accessible defaults, interoperability, persistence, and respect for a user's perceived effort-to-outcome ratio.

Agentic Visualization: Extracting Agent-based Design Patterns from Visualization Systems
Vaishali Dhanoa, Anton Wolter, Gabriela Molina León, Hans-Jörg Schulz, Niklas Elmqvist
IEEE Computer Graphics and Applications, early access. Published online 9 September 2025. DOI: 10.1109/MCG.2025.3607741

Abstract: Autonomous agents powered by large language models are transforming AI, creating an imperative for the visualization field. However, our field's focus on keeping a human in the sensemaking loop raises critical questions for such agentic visualization: how to handle autonomy, delegation, and coordination in ways that preserve human agency while amplifying analytical capabilities. This paper addresses these questions by reinterpreting existing visualization systems with semi-automated or fully automatic AI components through an agentic lens. Based on this analysis, we extract a collection of design patterns for agentic visualization, including agentic roles, communication, and coordination. These patterns provide a foundation for future agentic visualization systems that effectively harness AI agents while maintaining human insight and control.

Virtual Staging of Indoor Panoramic Images via Multi-task Learning and Inverse Rendering
Uzair Shah, Sara Jashari, Muhammad Tukur, Mowafa Househ, Jens Schneider, Giovanni Pintore, Enrico Gobbetti, Marco Agus
IEEE Computer Graphics and Applications, early access. Published online 3 September 2025. DOI: 10.1109/MCG.2025.3605806

Abstract: Capturing indoor environments with 360° images provides a cost-effective method for creating immersive content. However, virtual staging (removing existing furniture and inserting new objects with realistic lighting) remains challenging. We present VISPI (Virtual Staging Pipeline for Single Indoor Panoramic Images), a framework that enables interactive restaging of indoor scenes from a single panoramic image. Our approach combines multi-task deep learning with real-time rendering to extract geometric, semantic, and material information from cluttered scenes. The system includes: i) a vision transformer that simultaneously predicts depth, normals, semantics, albedo, and material properties; ii) spherical Gaussian lighting estimation; iii) real-time editing for interactive object placement; and iv) stereoscopic multi-center-of-projection generation for head-mounted display exploration. The framework processes input through two pathways: extracting clutter-free representations for virtual staging, and estimating material properties including metallic and roughness signals. We evaluate VISPI on the Structured3D and FutureHouse datasets, demonstrating applications in real estate visualization, interior design, and virtual environment creation.

Tooth Completion and Reconstruction in Digital Orthodontics
Hao Yu, Longdu Liu, Shuangmin Chen, Shiqing Xin, Changhe Tu
IEEE Computer Graphics and Applications, early access. Published online 2 September 2025. DOI: 10.1109/MCG.2025.3605266

Abstract: In the field of digital orthodontics, dental models with complete roots are essential digital assets, particularly for visualization and treatment planning. However, intraoral scans typically capture only dental crowns, leaving roots missing. In this paper, we introduce a meticulously designed algorithmic pipeline to complete dental models while preserving crown geometry and mesh topology. Our pipeline begins with learning-based point cloud completion applied to existing dental crowns. We then reconstruct a complete tooth model, encompassing both the crown and root, to guide subsequent processing steps. Next, we restore the crown's original geometry and mesh topology using a strong Delaunay meshing structure; the correctness of this approach has been thoroughly established in existing literature. Finally, we optimize the transition region between crown and root using bi-harmonic smoothing. A key advantage of our approach is that the completed tooth model accurately maintains the geometry and mesh topology of the original crown, while also ensuring high-quality triangulation of dental roots.

A Cosmic View of Life on Earth: Hierarchical Visualization of Biological Data Using Astronomical Software
Wandrille Duchemin, Takanori Fujiwara, Hollister W Herhold, Elias Elmquist, David S Thaler, William Harcourt-Smith, Emma Broman, Alexander Bock, Brian P Abbott, Jacqueline K Faherty
IEEE Computer Graphics and Applications 45(5), 2025, pp. 93-106. DOI: 10.1109/MCG.2025.3591713

Abstract: A goal of data visualization is to advance the understanding of multiparameter, large-scale datasets. In astrophysics, scientists map celestial objects to understand the hierarchical structure of the universe. In biology, genetic sequences and biological characteristics uncover evolutionary relationships and patterns (e.g., variation within species and ecological associations). Our highly interdisciplinary project entitled "A Cosmic View of Life on Earth" adapts an immersive astrophysics visualization platform called OpenSpace to contextualize diverse biological data. Dimensionality reduction techniques harmonize biological information to create spatial representations in which data are interactively explored on flat screens and planetarium domes. Visualizations are enriched with geographic metadata, 3-D scans of specimens, and species-specific sonifications (e.g., bird songs). The "Cosmic View" project eases the dissemination of stories related to biological domains (e.g., insects, birds, mammals, and human migrations) and facilitates scientific discovery.

ProHap Explorer: Visualizing Haplotypes in Proteogenomic Datasets
Jakub Vašíček, Dafni Skiadopoulou, Ksenia G Kuznetsova, Lukas Käll, Marc Vaudel, Stefan Bruckner
IEEE Computer Graphics and Applications 45(5), 2025, pp. 64-77. DOI: 10.1109/MCG.2025.3581736

Abstract: In mass spectrometry-based proteomics, experts usually project data onto a single set of reference sequences, overlooking the influence of common haplotypes (combinations of genetic variants inherited together from a parent). We recently introduced ProHap, a tool for generating customized protein haplotype databases. Here, we present ProHap Explorer, a visualization interface designed to investigate the influence of common haplotypes on the human proteome. It enables users to explore haplotypes, their effects on protein sequences, and the identification of noncanonical peptides in public mass spectrometry datasets. The design builds on well-established representations in biological sequence analysis, ensuring familiarity for domain experts while integrating novel interactive elements tailored to proteogenomic data exploration. User interviews with proteomics experts confirmed the tool's utility, highlighting its ability to reveal whether haplotypes affect proteins of interest. By facilitating the intuitive exploration of proteogenomic variation, ProHap Explorer supports research in personalized medicine and the development of targeted therapies.

Designing for Collaboration: Visualization to Enable Human-LLM Analytical Partnership
Mai Elshehaly, Radu Jianu, Aidan Slingsby, Gennady Andrienko, Natalia Andrienko, Theresa-Marie Rhyne
IEEE Computer Graphics and Applications 45(5), 2025, pp. 107-116. DOI: 10.1109/MCG.2025.3583451

Abstract: Visualization artifacts have long served as anchors for collaboration and knowledge transfer in data analysis. While effective for human-human collaboration, little is known about their role in capturing and externalizing knowledge when working with large language models (LLMs). Despite the growing role of LLMs in analytics, their linear text-based workflows limit the ability to structure artifacts into useful and traceable representations of the analytical process. We argue that dynamic visual representations of evolving analysis, ones that organize artifacts and provenance into semantic structures such as idea development and shifts in inquiry, are critical for effective human-LLM workflows. We demonstrate the current opportunities and limitations of using LLMs to track, structure, and visualize analytic processes, and propose a research agenda to leverage rapid advances in LLM capabilities. Our goal is to present a compelling argument for maximizing the role of visualization as a catalyst for more structured, transparent, and insightful human-LLM analytical interactions.

AuraGenome: An LLM-Powered Framework for On-the-Fly Reusable and Scalable Circular Genome Visualizations
Chi Zhang, Yu Dong, Yang Wang, Yuetong Han, Guihua Shan, Bixia Tang
IEEE Computer Graphics and Applications 45(5), 2025, pp. 78-92. DOI: 10.1109/MCG.2025.3581560

Abstract: Circular genome visualizations are essential for exploring structural variants and gene regulation. However, existing tools often require complex scripting and manual configuration, making the process time-consuming, error-prone, and difficult to learn. To address these challenges, we introduce AuraGenome, a large language model (LLM)-powered framework for rapid, reusable, and scalable generation of multilayered circular genome visualizations. AuraGenome combines a semantic-driven multiagent workflow with an interactive visual analytics system. The workflow employs seven specialized LLM-driven agents, each assigned distinct roles, such as intent recognition, layout planning, and code generation, to transform raw genomic data into tailored visualizations. The system supports multiple coordinated views tailored for genomic data, offering ring, radial, and chord-based layouts to represent multilayered circular genome visualizations. In addition to enabling interactions and configuration reuse, the system supports real-time refinement and high-quality report export. We validate its effectiveness through two case studies and a comprehensive user study. AuraGenome is available at https://github.com/Darius18/AuraGenome.

Circuit Mining in Transcriptomics Data
Tobias Peherstorfer, Sophia Ulonska, Bianca Burger, Simone Lucato, Bader Al-Hamdan, Marvin Kleinlehner, Till F M Andlauer, Katja Bühler
IEEE Computer Graphics and Applications 45(5), 2025, pp. 35-48. DOI: 10.1109/MCG.2025.3594562

Abstract: A central goal in neuropharmacological research is to alter brain function by targeting genes whose expression is specific to the corresponding brain circuit. Identifying such genes in large spatially resolved transcriptomics data requires the expertise of bioinformaticians for handling data complexity and to perform statistical tests. This time-consuming process is often decoupled from the routine workflow of neuroscientists, inhibiting fast target discovery. Here, we present a visual analytics approach to mining expression data in the context of meso-scale brain circuits for potential target genes, tailored to domain experts with limited technical background. We support several workflows for interactive definition and refinement of circuits in the human or mouse brain, and combine spatial indexing with an alternative formulation of sample variance to enable differential gene expression analysis in arbitrary brain circuits at runtime.