{"title":"Example-based authoring of expressive space curves","authors":"Jiří Minarčík , Jakub Fišer , Daniel Sýkora","doi":"10.1016/j.cag.2025.104249","DOIUrl":"10.1016/j.cag.2025.104249","url":null,"abstract":"<div><div>In this paper we present a novel example-based stylization method for 3D space curves. Inspired by image-based arbitrary style transfer (Gatys et al., 2016), we introduce a workflow that allows artists to transfer the stylistic characteristics of a short exemplar curve to a longer target curve in 3D—a problem, to the best of our knowledge, previously unexplored. Our approach involves extracting the underlying, unstyled form of the exemplar curve using a novel smoothing flow. This unstyled representation is then aligned with the target curve using a modified Fréchet distance. To achieve precise matching with reduced computational cost, we employ a semi-discrete optimization scheme, which outperforms existing methods for similar curve alignment problems. Furthermore, our formulation provides intuitive controls for adjusting stylization strength and transfer temperature, enabling greater creative flexibility. Its versatility also allows for the simultaneous stylization of additional attributes along the curve, which is particularly valuable in 3D applications where curves may represent medial axes of complex structures. 
We demonstrate the effectiveness of our method through a variety of expressive stylizations across different application contexts.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104249"},"PeriodicalIF":2.5,"publicationDate":"2025-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DimenFix: A novel meta-strategy to preserve user-defined data values on dimensionality reduction layouts","authors":"Zixuan Han , Diede van der Hoorn , Thomas Höllt , Qiaodan Luo , Leonardo Christino , Evangelos Milios , Fernando V. Paulovich","doi":"10.1016/j.cag.2025.104231","DOIUrl":"10.1016/j.cag.2025.104231","url":null,"abstract":"<div><div>Dimensionality Reduction (DR) methods have become essential tools in the data analysis toolbox. Typically, DR methods combine features of a multivariate dataset to produce dimensions in a reduced space, preserving some data properties, usually pairwise distances or local neighborhoods. Preserving such properties makes DR methods attractive, but it is also one of their weaknesses. When calculating the embedded dimensions, usually through non-linear strategies, the original feature values are lost and not explicitly represented in the spatialization of the produced layouts, making it challenging to interpret the results and understand the features’ contributions to the attained representations. Some strategies have been proposed to tackle this issue, such as coloring the DR layouts or generating explanations. Still, they are post-processes, so specific features (values) are not guaranteed to be preserved or represented. This paper proposes <em>DimenFix</em>, a novel meta-DR strategy that explicitly preserves the values of a particular user-defined feature or external data (not used to generate a layout) in one of the embedded axes. <em>DimenFix</em> can be used to preserve ordinal (e.g., numerical measures) and nominal (e.g., labels) values and works with virtually any gradient-descent DR method. It requires minimal changes to the underlying DR technique and runs in linear time with respect to the number of data instances. 
In our results, involving Force Scheme and t-SNE adaptations, <em>DimenFix</em> was capable of representing features without heavily impacting distance or neighborhood preservation, enabling hybrid layouts that combine characteristics of scatter plots and DR methods.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104231"},"PeriodicalIF":2.5,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foundation model assisted visual analytics: Opportunities and Challenges","authors":"Maeve Hutchinson, Radu Jianu, Aidan Slingsby, Pranava Madhyastha","doi":"10.1016/j.cag.2025.104246","DOIUrl":"10.1016/j.cag.2025.104246","url":null,"abstract":"<div><div>We explore the integration of foundation models, such as large language models (LLMs) and multimodal LLMs (MLLMs), into visual analytics (VA) systems through intuitive natural language interactions. We survey current research directions in this emerging field, examining how foundation models have already been integrated into key visualisation-related processes in VA: visual mapping, the creation of data visualisations; visualisation observation, the process of generating a finding through visualisation; and visualisation manipulation, changing the viewport or highlighting areas of interest within a visualisation. We also highlight new possibilities that foundation models bring to VA, in particular, the opportunities to use MLLMs to interpret visualisations directly, to integrate multimodal interactions, and to provide guidance to users. We finally conclude with a vision of future VA systems as collaborative partners in analysis and address the prominent challenges in realising this vision through foundation models. 
Our discussions in this paper aim to guide future researchers working on foundation model assisted VA systems and help them navigate common obstacles when developing these systems.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104246"},"PeriodicalIF":2.5,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144147777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visually-supported topic modeling for understanding behavioral patterns from spatio–temporal events","authors":"Laleh Moussavi , Gennady Andrienko , Natalia Andrienko , Aidan Slingsby","doi":"10.1016/j.cag.2025.104245","DOIUrl":"10.1016/j.cag.2025.104245","url":null,"abstract":"<div><div>Spatio-temporal event sequences consist of activities or occurrences involving various interconnected elements in space and time. We show how topic modeling—typically used in text analysis—can be adapted to abstract and conceptualize such data. We propose an overall analytical workflow that combines computational and visual analytics methods to support a range of analysis tasks, enabling the transformation of raw event data into meaningful insights. We apply our workflow to football matches as an example of important yet under-explored spatio-temporal event data. A key step in topic modeling is determining the appropriate number of topics; to address this, we introduce a visual method that organizes multiple modeling runs into a similarity-based layout, helping analysts identify patterns that balance interpretability and granularity.</div><div>We demonstrate how our workflow, which integrates visual analytics, supports five core analysis tasks: identifying common behavioral patterns, tracking their distribution across individuals or groups, observing progression at different temporal scales, comparing behavior under varied conditions, and detecting deviations from typical behavior.</div><div>Using real-world football data, we illustrate how our end-to-end process enables deeper insights into both tactical details and broader trends — from single-match analyses to season-wide perspectives. 
While our case study focuses on football, the proposed workflow is domain-agnostic and can be readily applied to other spatio-temporal event datasets, offering a flexible foundation for extracting and interpreting complex behavioral patterns.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"129 ","pages":"Article 104245"},"PeriodicalIF":2.5,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144134223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data meets creativity: Authentic learning through data art design and exhibition","authors":"Jonathan C. Roberts","doi":"10.1016/j.cag.2025.104248","DOIUrl":"10.1016/j.cag.2025.104248","url":null,"abstract":"<div><div>We introduce an authentic learning task, where students create data art visualisations from selected datasets to be showcased in a public exhibition. Our vision is to explore how creativity and visualisation intersect and how combining these elements results in an authentic learning task for computing students. Run over two completed academic years, with a third cohort nearing completion, this initiative offered an active learning environment that fostered student engagement, creativity, and the application of practical skills. We detail the structured approach, outlining eight steps that students perform: topic selection and research, data analysis, researching artistic inspiration, conceptualising designs, proposing solutions, creating visualisations, reflection and curating an exhibition. Our framework equips educators with detailed lectures and activities, enabling them to implement similar tasks in their own teaching. Finally, we present illustrative examples of student outcomes and share reflective insights, showcasing the impact of integrating authentic learning with public-facing creative projects. 
This approach enhances technical skills while connecting academic learning to real-world professional practice.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"129 ","pages":"Article 104248"},"PeriodicalIF":2.5,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144130830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NeFT-Net: N-window extended frequency transformer for rhythmic motion prediction","authors":"Adeyemi Ademola , David Sinclair , Babis Koniaris , Samantha Hannah , Kenny Mitchell","doi":"10.1016/j.cag.2025.104244","DOIUrl":"10.1016/j.cag.2025.104244","url":null,"abstract":"<div><div>Advancements in prediction of human motion sequences are critical for enabling online virtual reality (VR) users to dance and move in ways that accurately mirror real-world actions, delivering a more immersive and connected experience. However, latency in networked motion tracking remains a significant challenge, disrupting engagement and necessitating predictive solutions to achieve real-time synchronization of remote motions. To address this issue, we propose a novel approach leveraging a synthetically generated dataset based on supervised foot anchor placement timings for rhythmic motions, ensuring periodicity and reducing prediction errors. Our model integrates a discrete cosine transform (DCT) to encode motion, refine high-frequency components, and smooth motion sequences, mitigating jittery artifacts. Additionally, we introduce a feed-forward attention mechanism designed to learn from N-window pairs of 3D key-point pose histories for precise future motion prediction. Quantitative and qualitative evaluations on the Human3.6M dataset highlight significant improvements in mean per joint position error (MPJPE) metrics, demonstrating the superiority of our technique over state-of-the-art approaches. 
We further introduce novel result pose visualizations through the use of generative AI methods.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"129 ","pages":"Article 104244"},"PeriodicalIF":2.5,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144154879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable Class-Centric Visual Interactive Labeling","authors":"Matthias Matt , Jana Sedlakova , Jürgen Bernard , Matthias Zeppelzauer , Manuela Waldner","doi":"10.1016/j.cag.2025.104240","DOIUrl":"10.1016/j.cag.2025.104240","url":null,"abstract":"<div><div>Large unlabeled datasets demand efficient and scalable data labeling solutions, in particular when the number of instances and classes is large. This leads to significant visual scalability challenges and imposes a high cognitive load on the users. Traditional instance-centric labeling methods, where (single) instances are labeled in each iteration, struggle to scale effectively in these scenarios. To address these challenges, we introduce cVIL, a <em>Class-Centric Visual Interactive Labeling</em> methodology designed for interactive visual data labeling. By shifting the paradigm from <em>assigning-classes-to-instances</em> to <em>assigning-instances-to-classes</em>, cVIL reduces labeling effort and enhances efficiency for annotators working with large, complex and class-rich datasets. We propose a novel visual analytics labeling interface built on top of the conceptual cVIL workflow, enabling improved scalability over traditional visual labeling. In a user study, we demonstrate that cVIL can improve labeling efficiency and user satisfaction over instance-centric interfaces. 
The effectiveness of cVIL is further demonstrated through a usage scenario, showcasing its potential to alleviate cognitive load and support experts in managing extensive labeling tasks efficiently.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"129 ","pages":"Article 104240"},"PeriodicalIF":2.5,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144170213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic-aware hierarchical clustering for inverse rendering in indoor scenes","authors":"Xin Lv , Lijun Li , Zetao Chen","doi":"10.1016/j.cag.2025.104236","DOIUrl":"10.1016/j.cag.2025.104236","url":null,"abstract":"<div><div>Decomposing a scene into its material properties and illumination, given the geometry and multi-view HDR observations of an indoor environment, is a fundamental yet challenging problem in computer vision and graphics. Existing approaches, combined with neural rendering techniques, have shown promising results in object-specific scenarios but often struggle with inconsistencies in material estimation within complex indoor scenes. Moreover, ambiguities frequently arise between lighting and material properties. To address these limitations, we propose an adaptive inverse rendering pipeline based on Factorized Inverse Path Tracing (FIPT) that incorporates a semantic-aware hierarchical clustering approach. This enhancement enables the disentanglement of lighting and material properties, facilitating more accurate and consistent estimations of albedo, roughness, and metallic characteristics. Additionally, we introduce a voxel grid filter to further reduce computational time. Experimental results on both synthetic and real-world room-scale scenes demonstrate that our method produces more accurate material estimations compared to state-of-the-art methods. 
Furthermore, we demonstrate the potential of our method through several applications, including novel view synthesis, object insertion, and relighting.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"129 ","pages":"Article 104236"},"PeriodicalIF":2.5,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144088956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MultiInv: Inverting multidimensional scaling projections and computing decision maps by multilateration","authors":"Daniela Blumberg , Yu Wang , Alexandru Telea , Daniel A. Keim , Frederik L. Dennig","doi":"10.1016/j.cag.2025.104234","DOIUrl":"10.1016/j.cag.2025.104234","url":null,"abstract":"<div><div>Inverse projections enable a variety of tasks such as the exploration of classifier decision boundaries, creating counterfactual explanations, and generating synthetic data. Yet, many existing inverse projection methods are difficult to implement, challenging to predict, and sensitive to parameter settings. To address these, we propose to invert distance-preserving projections like Multidimensional Scaling (MDS) projections by using multilateration – a method used for geopositioning. Our approach finds data values for locations where no data point is projected under the key assumption that a given projection technique preserves pairwise distances among data samples in the low-dimensional space. Being based on a geometrical relationship, our technique is more interpretable than comparable machine learning-based approaches and can invert 2-dimensional projections up to <span><math><mrow><mfenced><mrow><mi>D</mi></mrow></mfenced><mo>−</mo><mn>1</mn></mrow></math></span> dimensional spaces if given at least <span><math><mfenced><mrow><mi>D</mi></mrow></mfenced></math></span> data points. We compare several strategies for multilateration point selection, show the application of our technique on three additional projection techniques apart from MDS, and use established quality metrics to evaluate its accuracy in comparison to existing inverse projections. We also show its application to computing decision maps for exploring the behavior of trained classification models. 
When the projection to invert captures data distances well, our inverse performs similarly to existing approaches while being interpretable and considerably simpler to compute.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"129 ","pages":"Article 104234"},"PeriodicalIF":2.5,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144116981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foreword to the special section on eXtended Reality for Industrial and Occupational Supports (XRIOS)","authors":"Isaac Cho, Heejin Jeong, Kangsoo Kim, Hyungil Kim, Myounghoon Jeon","doi":"10.1016/j.cag.2025.104242","DOIUrl":"10.1016/j.cag.2025.104242","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104242"},"PeriodicalIF":2.5,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144166354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}