Circuit Mining in Transcriptomics Data
Tobias Peherstorfer, Sophia Ulonska, Bianca Burger, Simone Lucato, Bader Al-Hamdan, Marvin Kleinlehner, Till F M Andlauer, Katja Buhler
IEEE Computer Graphics and Applications, vol. PP, pp. 35-48, Sep. 2025. DOI: 10.1109/MCG.2025.3594562

A central goal in neuropharmacological research is to alter brain function by targeting genes whose expression is specific to the corresponding brain circuit. Identifying such genes in large spatially resolved transcriptomics data requires the expertise of bioinformaticians to handle the data's complexity and to perform statistical tests. This time-consuming process is often decoupled from the routine workflow of neuroscientists, inhibiting fast target discovery. Here, we present a visual analytics approach to mining expression data in the context of meso-scale brain circuits for potential target genes, tailored to domain experts with limited technical background. We support several workflows for the interactive definition and refinement of circuits in the human or mouse brain, and we combine spatial indexing with an alternative formulation of sample variance to enable differential gene expression analysis in arbitrary brain circuits at runtime. A user study highlights the usefulness, benefits, and future potential of our work.
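The abstract does not spell out the "alternative formulation of sample variance" that makes runtime analysis possible. One common formulation with the same effect is to precompute per-region sufficient statistics (count, sum, sum of squares), so the variance over any union of regions can be assembled without revisiting the raw samples. A minimal sketch of that idea follows; the function names and the exact formulation are illustrative assumptions, not the paper's implementation:

```python
# Sketch: variance over arbitrary unions of precomputed regions.
# Per region we store (n, sum, sum_sq); the pooled sample variance of any
# union follows from the identity Var = (S2 - S1^2 / n) / (n - 1).

def region_stats(values):
    """Precompute sufficient statistics for one brain region."""
    n = len(values)
    s1 = sum(values)
    s2 = sum(v * v for v in values)
    return (n, s1, s2)

def pooled_variance(stats_list):
    """Sample variance over the union of regions, from the stats alone."""
    n = sum(s[0] for s in stats_list)
    s1 = sum(s[1] for s in stats_list)
    s2 = sum(s[2] for s in stats_list)
    if n < 2:
        return 0.0
    return (s2 - s1 * s1 / n) / (n - 1)
```

Because a circuit query only touches a handful of per-region tuples, the variance (and hence a differential-expression test statistic) for an arbitrary circuit costs time proportional to the number of regions, not the number of samples.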
Bridging Theory and Practice: A Multiphase Study of GenAI-Assisted Visualization Learning
Mak Ahmad, Kwan-Liu Ma, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra
IEEE Computer Graphics and Applications, vol. 45, no. 5, pp. 147-156, Sep. 2025. DOI: 10.1109/MCG.2025.3553396

Understanding how students learn visualization skills is becoming increasingly crucial as generative AI transforms technical education. We present a systematic study examining how structured exposure to large language models via Observable's AI Assist platform impacts data visualization education, through a multiphase investigation across two universities. Our mixed-methods approach with 65 graduate students (32 data science and 33 computer science) revealed that structured generative AI exposure following constructivist learning principles enabled sustained engagement and tool adoption while maintaining pedagogical rigor. Through a structured multiphase study incorporating preassessments, intervention observations, detailed assignment reflections, and postintervention evaluation within the academic term constraints, we identified specific patterns in how students integrate generative AI into their visualization workflows. The results of our mixed-methods analysis suggest potential strategies for adapting visualization education to an AI-augmented future while preserving essential learning outcomes. We contribute practical frameworks for integrating generative AI tools into visualization curricula and evidence-based insights on scaffolding student learning with AI assistance, with initial evidence of sustained impact over a three-week period following instruction.
The State of Single-Cell Atlas Data Visualization in the Biological Literature
Mark S Keller, Eric Morth, Thomas C Smits, Simon Warchol, Grace Guo, Qianwen Wang, Robert Krueger, Hanspeter Pfister, Nils Gehlenborg
IEEE Computer Graphics and Applications, vol. PP, pp. 18-34, Sep. 2025. DOI: 10.1109/MCG.2025.3583979

Recent advancements have enabled tissue samples to be profiled at the unprecedented level of detail of a single cell. Analysis of these data has enabled discoveries that are relevant to understanding disease and developing therapeutics. Large-scale profiling efforts are underway, which aim to generate "atlas" resources that catalog cellular archetypes, including biomarkers and spatial locations. While the problem of cellular data visualization is not new, the size, resolution, and heterogeneity of single-cell atlas datasets present challenges and opportunities. We survey the usage of visualization to interpret single-cell atlas datasets by assessing over 1800 figure panels from 45 biological publications. We intend for this report to serve as a foundational resource for the visualization community as atlas-scale single-cell datasets are emerging rapidly with aims of advancing our understanding of biological function in health and disease.
Embarrassingly Agile - Data Visualization Methodology in Emergency Responses
Barbora Kozlikova, Daniel Archambault, Johannes Dreesman, Andreas Kerren, Biagio Lucini, Cagatay Turkay, Melanie Tory, Daniel Keefe, Cindy Xiong Bearfield
IEEE Computer Graphics and Applications, vol. 45, no. 5, pp. 138-146, Sep. 2025. DOI: 10.1109/MCG.2025.3595342

The pandemic had far-reaching impacts on how we do many things, including the way that we design and implement visualizations. In this article, we reflect on how visualization design changed in an emergency response. Based on these reflections, we present modifications to design methodologies for visualizations to accommodate an emergency response and its working conditions.
Level Generation With Quantum Reservoir Computing
Joao S Ferreira, Pierre Fromholz, Hari Shaji, James R Wootton, Mike Potel
IEEE Computer Graphics and Applications, vol. 45, no. 5, pp. 117-126, Sep. 2025. DOI: 10.1109/MCG.2025.3591956

Reservoir computing is a form of machine learning particularly suited for time-series analysis, including forecasting. We take an implementation of quantum reservoir computing that was initially designed to generate variants of musical scores and adapt it to create levels of Super Mario Bros. Motivated by our analysis of these levels, we develop a new Roblox obstacle course game (known as an "obby") where the courses can be generated in real time on superconducting qubit hardware, and we investigate some of the constraints imposed by such real-time generation.
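The abstract does not describe the reservoir mechanics. For readers unfamiliar with the idea, a classical echo state network illustrates the core principle the quantum version builds on: a fixed random dynamical system transforms an input sequence into rich internal states, and only a linear readout is trained. The sketch below is a classical stand-in with illustrative parameters, not the paper's quantum implementation on superconducting qubits:

```python
import numpy as np

def run_reservoir(inputs, n_res=50, seed=0, leak=0.3):
    """Drive a fixed random reservoir with a scalar input sequence and
    return the sequence of reservoir states (classical echo state net)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))     # input weights (fixed)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # recurrent weights (fixed)
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius < 1
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        # Leaky-integrator update: only this state evolves; no weights change.
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets):
    """The only trained part: a linear readout fit by least squares."""
    return np.linalg.lstsq(states, targets, rcond=None)[0]
```

Generation then works by feeding the readout's prediction back in as the next input, which is what lets a fixed reservoir produce level-like sequences in real time.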
Challenges and Opportunities for the Visualization of Protein Energy Landscapes
Jaume Ros, Alessio Arleo, Rafael Giordano Viegas, Vitor B P Leite, Fernando V Paulovich
IEEE Computer Graphics and Applications, vol. PP, pp. 49-63, Sep. 2025. DOI: 10.1109/MCG.2025.3592983

Protein folding is the process by which proteins go from a linear chain of amino acids to a 3-D structure that determines their biological function. Although recent advances in protein 3-D structure prediction can directly determine the folded protein's final shape, the process by which this happens is complex and not very well understood. Part of the study of protein folding focuses on the analysis of their "energy landscape," defined by the molecule's energy as a function of its structure. The data are mostly obtained through atomic-level computer simulations and are very high-dimensional, making them difficult to interpret. Visualization can be a powerful tool to support researchers studying the energy landscape of proteins; however, we have noticed that such tools are not widely adopted by the scientific community. We present the main methods currently used and the challenges they face, as well as future opportunities for visualization in this field.
FashionCook: A Visual Analytics System for Human-AI Collaboration in Fashion E-Commerce Design
Yuheng Shao, Shiyi Liu, Gongyan Chen, Ruofei Ma, Xingbo Wang, Quan Li
IEEE Computer Graphics and Applications, early access, Aug. 27, 2025. DOI: 10.1109/MCG.2025.3597849

Fashion e-commerce design requires the integration of creativity, functionality, and responsiveness to user preferences. While AI offers valuable support, generative models often miss the nuances of user experience, and task-specific models, although more accurate, lack transparency and real-world adaptability, especially with complex multimodal data. These issues reduce designers' trust and hinder effective AI integration. To address this, we present FashionCook, a visual analytics system designed to support human-AI collaboration in the context of fashion e-commerce. The system bridges communication among model builders, designers, and marketers by providing transparent model interpretations, "what-if" scenario exploration, and iterative feedback mechanisms. We validate the system through two real-world case studies and a user study, demonstrating how FashionCook enhances collaborative workflows and improves design outcomes in data-driven fashion e-commerce environments.
AnchorTextVis: A Visual Analytics Approach for Fast Comparison of Text Embeddings
Jingzhen Zhang, Hongjiang Lv, Zhibin Niu
IEEE Computer Graphics and Applications, early access, Aug. 13, 2025. DOI: 10.1109/MCG.2025.3598262

Visual comparison of text embeddings is crucial for analyzing semantic differences and comparing embedding models. Existing methods fail to maintain visual consistency in comparative regions and lack AI-assisted analysis, leading to high cognitive loads and time-consuming exploration. In this paper, we propose AnchorTextVis, a visual analytics approach that builds on AnchorMap, our dynamic projection algorithm balancing spatial quality and temporal coherence, and on LLMs to preserve users' mental map and accelerate exploration. We introduce the use of comparable dimensionality reduction algorithms that maintain visual consistency, such as AnchorMap from our previous work and Joint t-SNE. Building on this foundation, we leverage LLMs to compare and summarize, offering users insights. For quantitative comparisons, we define two complementary metrics, shared KNN and coordinate distance. In addition, we designed intuitive representations and rich interactive tools to compare clusters of texts and individual texts. We demonstrate the effectiveness and usefulness of our approach through three case studies and expert feedback.
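The abstract names a shared-KNN metric without defining it. A common formulation of neighborhood agreement between two embeddings of the same texts scores, per point, the overlap of its k-nearest-neighbor sets in each embedding. The sketch below follows that common formulation; the function names, the Jaccard overlap, and the averaging are illustrative assumptions, not necessarily the paper's exact definition:

```python
import numpy as np

def knn_sets(X, k):
    """Indices of the k nearest neighbors (excluding self) per row of X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    return [set(np.argsort(row)[:k]) for row in d]

def shared_knn(X_a, X_b, k=10):
    """Mean Jaccard overlap of k-NN sets between two embeddings of the
    same items; 1.0 means identical local neighborhood structure."""
    na, nb = knn_sets(X_a, k), knn_sets(X_b, k)
    overlaps = [len(a & b) / len(a | b) for a, b in zip(na, nb)]
    return float(np.mean(overlaps))
```

A metric of this shape is coordinate-free, so it complements a coordinate-distance measure that compares where the projections actually place each point on screen.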
MuCHEx: A Multimodal Conversational Debugging Tool for Interactive Visual Exploration of Hierarchical Object Classification
Reza Shahriari, Yichi Yang, Danish Nisar Ahmed Tamboli, Michael Perez, Yuheng Zha, Jinyu Hou, Mingkai Deng, Eric D Ragan, Jaime Ruiz, Daisy Zhe Wang, Zhitting Hu, Eric Xing
IEEE Computer Graphics and Applications, early access, Aug. 12, 2025. DOI: 10.1109/MCG.2025.3598204

Object recognition is a fundamental challenge in computer vision, particularly for fine-grained object classification, where classes differ in minor features. Improved fine-grained object classification requires a teaching system with numerous classes and instances of data. As the number of hierarchical levels and instances grows, debugging these models becomes increasingly complex. Moreover, different types of debugging tasks require varying approaches, explanations, and levels of detail. We present MuCHEx, a multimodal conversational system that blends natural language and visual interaction for interactive debugging of hierarchical object classification. Natural language allows users to flexibly express high-level questions or debugging goals without needing to navigate complex interfaces, while adaptive explanations surface only the most relevant visual or textual details based on the user's current task. This multimodal approach combines the expressiveness of language with the precision of direct manipulation, enabling context-aware exploration during model debugging.
Interactive Texture Segmentation of 3D Scanned Models Leveraging Multiview Automatic Segmentation
Koki Madono, Takeo Igarashi, Hiroharu Kato, Taisuke Hashimoto, Fabrice Matulic, Tsukasa Takagi, Keita Higuchi
IEEE Computer Graphics and Applications, early access, Aug. 4, 2025. DOI: 10.1109/MCG.2025.3595378

In 3D model scanning, the raw texture of a 3D model often requires segmentation into distinct regions to apply different material properties to each region. Current methods, such as manual segmentation, are labor-intensive, while automatic segmentation techniques lack user control. We propose an interactive tool that combines automatic segmentation with minimal manual intervention, striking an optimal balance between efficiency and control. Following a multiview automatic segmentation process that divides the texture into small subsegments, users cluster the subsegments into segments by drawing simple scribbles in the 3D model view. We show that our approach results in more detailed subsegments compared to automatic segmentation approaches. Furthermore, a user study confirms that our approach improves segmentation accuracy and quality compared to manual segmentation with standard 3D computer graphics software. This research paves the way to more efficient texture segmentation in 3D model scanning.