GeoCode: Interpretable Shape Programs
Ofek Pearl, Itai Lang, Yuhua Hu, Raymond A. Yeh, Rana Hanocka
Computer Graphics Forum 44(1). Published 2025-02-12. DOI: 10.1111/cgf.15276. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15276

Abstract: The task of crafting procedural programs that generate structurally valid 3D shapes easily and intuitively remains an elusive goal in computer vision and graphics. Within the graphics community, procedural 3D modelling has shifted to node graph systems, which let artists create complex shapes and animations through visual programming. As high-level design tools, node graphs have made procedural 3D modelling more accessible, but crafting them still demands expertise and training. We present GeoCode, a novel framework designed to extend an existing node graph system and significantly lower the bar for creating new procedural 3D shape programs. Our approach carefully balances expressiveness and generalization for part-based shapes. We propose a curated set of new geometric building blocks that are expressive and reusable across domains, and we showcase three expressive programs developed with our technique and building blocks. Our programs enforce intricate rules, empowering users to perform intuitive high-level parameter edits that propagate seamlessly through the entire shape at a lower level while maintaining its validity. To evaluate the user-friendliness of our geometric building blocks among non-experts, we conduct a user study that demonstrates their ease of use and their applicability across diverse domains. Empirical evidence shows the superior accuracy of GeoCode in inferring and recovering 3D shapes compared to an existing competitor, and our method is more expressive than alternatives that rely on coarse primitives. Notably, we illustrate controllable local and global shape manipulations. Our code, programs, datasets and Blender add-on are available at https://github.com/threedle/GeoCode.
Immersive and Interactive Learning With eDIVE: A Solution for Creating Collaborative VR Education Experiences
Vojtěch Brůža, Alžběta Šašinková, Čeněk Šašinka, Zdeněk Stachoň, Barbora Kozlíková, Jiří Chmelík
Computer Graphics Forum 44(1). Published 2025-02-10. DOI: 10.1111/cgf.70001. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70001

Abstract: Virtual reality (VR) technology has become increasingly popular in education as a tool for enhancing learning experiences and engagement. This paper addresses the lack of a suitable tool for creating multi-user immersive educational content for virtual environments by introducing a novel solution called eDIVE. The solution is designed to facilitate the development of collaborative immersive educational VR experiences. Developed in close collaboration with psychologists and educators, it addresses specific functional needs identified by these professionals. eDIVE allows creators to extensively modify, expand or develop entirely new VR experiences, ultimately making collaborative VR education more accessible and inclusive for all stakeholders. Its utility is demonstrated through exemplary learning scenarios, developed in collaboration with experienced educators, and evaluated through real-world user studies.
{"title":"DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning","authors":"Yuhang Huang, Takashi Kanai","doi":"10.1111/cgf.70002","DOIUrl":"https://doi.org/10.1111/cgf.70002","url":null,"abstract":"<p>In the field of brittle fracture animation, generating realistic destruction animations using physics-based simulation methods is computationally expensive. While techniques based on Voronoi diagrams or pre-fractured patterns are effective for real-time applications, they fail to incorporate collision conditions when determining fractured shapes during runtime. This paper introduces a novel learning-based approach for predicting fractured shapes based on collision dynamics at runtime. Our approach seamlessly integrates realistic brittle fracture animations with rigid body simulations, utilising boundary element method (BEM) brittle fracture simulations to generate training data. To integrate collision scenarios and fractured shapes into a deep learning framework, we introduce generative geometric segmentation, distinct from both instance and semantic segmentation, to represent 3D fragment shapes. We propose an eight-dimensional latent code to address the challenge of optimising multiple discrete fracture pattern targets that share similar continuous collision latent codes. This code will follow a discrete normal distribution corresponding to a specific fracture pattern within our latent impulse representation design. This adaptation enables the prediction of fractured shapes using neural discrete representation learning. Our experimental results show that our approach generates considerably more detailed brittle fractures than existing techniques, while the computational time is typically reduced compared to traditional simulation methods at comparable resolutions.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MANDALA—Visual Exploration of Anomalies in Industrial Multivariate Time Series Data
J. Suschnigg, B. Mutlu, G. Koutroulis, H. Hussain, T. Schreck
Computer Graphics Forum 44(1). Published 2025-02-06. DOI: 10.1111/cgf.70000. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70000

Abstract: Detecting, describing and understanding anomalies in multivariate time series data is an important task in several industrial domains. Automated data analysis provides many tools and algorithms to detect anomalies, while visual interfaces enable domain experts to explore and analyze data interactively using their expertise. Anomalies in multivariate time series can be diverse with respect to their dimensions, temporal occurrence and length within a dataset, and their detection and description depend on the analyst's domain, task and background knowledge; anomaly analysis is therefore often an underspecified problem. We propose a visual analytics tool called MANDALA (Multivariate ANomaly Detection And expLorAtion), which uses kernel density estimation to detect anomalies and provides users with visual means to explore and explain them. To assess our algorithm's effectiveness, we evaluate its ability to identify different types of anomalies using a synthetic dataset generated with the GutenTAG anomaly and time series generator. Our approach lets users first define normal data interactively and then explore anomaly candidates, their related dimensions and their temporal scope. Our carefully designed visual analytics components include a tailored scatterplot matrix with semantic zooming, which visualizes normal data through hexagonal binning plots and overlays candidate anomalies as scatterplots. The system supports analysis at a broad scope involving all dimensions simultaneously or at a smaller scope involving dimension pairs only. We define a taxonomy of important anomaly pattern types that can guide the interactive analysis process. The effectiveness of our system is demonstrated through a use-case scenario on industrial data conducted with domain experts from the automotive domain and a user study utilizing a public dataset from the aviation domain.
{"title":"A Texture-Free Practical Model for Realistic Surface-Based Rendering of Woven Fabrics","authors":"Apoorv Khattar, Junqiu Zhu, Ling-Qi Yan, Zahra Montazeri","doi":"10.1111/cgf.15283","DOIUrl":"https://doi.org/10.1111/cgf.15283","url":null,"abstract":"<p>Rendering woven fabrics is challenging due to the complex micro geometry and anisotropy appearance. Conventional solutions either fully model every yarn/ply/fibre for high fidelity at a high computational cost, or ignore details, that produce non-realistic close-up renderings. In this paper, we introduce a model that shares the advantages of both. Our model requires only binary patterns as input yet offers all the necessary micro-level details by adding the yarn/ply/fibre implicitly. Moreover, we design a double-layer representation to handle light transmission accurately and use a constant timed (<span></span><math></math>) approach to accurately and efficiently depict parallax and shadowing-masking effects in a tandem way. We compare our model with curve-based and surface-based, on different patterns, under different lighting and evaluate with photographs to ensure capturing the aforementioned realistic effects.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15283","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MetapathVis: Inspecting the Effect of Metapath in Heterogeneous Network Embedding via Visual Analytics
Quan Li, Yun Tian, Xiyuan Wang, Laixin Xie, Dandan Lin, Lingling Yi, Xiaojuan Ma
Computer Graphics Forum 44(1). Published 2025-01-31. DOI: 10.1111/cgf.15285

Abstract: In heterogeneous graphs (HGs), which offer richer network and semantic insights than homogeneous graphs, the metapath technique serves as an essential tool for data mining. A metapath specifies a sequence of entity connections, elucidating the composite semantic relationships between node types for a range of downstream tasks. Nevertheless, selecting the most appropriate metapath from a pool of candidates and assessing its impact present significant challenges. To address this issue, we introduce MetapathVis, an interactive visual analytics system designed to help machine learning (ML) practitioners comprehensively understand and compare the effects of metapaths from multiple fine-grained perspectives. MetapathVis allows in-depth evaluation of models generated with different metapaths, aligning HG network information at the individual level with model metrics, and facilitates tracking the aggregation processes associated with different metapaths. The effectiveness of our approach is validated through three case studies and a user study; feedback from domain experts confirms that our system significantly aids ML practitioners in evaluating and understanding the viability of different metapath designs.
{"title":"Learning Climbing Controllers for Physics-Based Characters","authors":"Kyungwon Kang, Taehong Gu, Taesoo Kwon","doi":"10.1111/cgf.15284","DOIUrl":"https://doi.org/10.1111/cgf.15284","url":null,"abstract":"<p>Despite the growing demand for capturing diverse motions, collecting climbing motion data remains challenging due to difficulties in tracking obscured markers and scanning climbing structures. Additionally, preparing varied routes further adds to the complexities of the data collection process. To address these challenges, this paper introduces a physics-based climbing controller for synthesizing climbing motions. The proposed method consists of two learning stages. In the first stage, a hanging policy is trained to naturally grasp holds. This policy is then used to generate a dataset containing hold positions, postures, and grip states, forming favourable initial poses. In the second stage, a climbing policy is trained using this dataset to perform actual climbing movements. The episode begins in a state close to the reference climbing motion, enabling the exploration of more natural climbing style states. This policy enables the character to reach the target position while utilizing its limbs more evenly. The experiments demonstrate that the proposed method effectively identifies good climbing postures and enhances limb coordination across environments with varying slopes and hold patterns.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constrained Spectral Uplifting for HDR Environment Maps","authors":"L. Tódová, A. Wilkie","doi":"10.1111/cgf.15280","DOIUrl":"https://doi.org/10.1111/cgf.15280","url":null,"abstract":"<p>Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called <i>spectral uplifting</i>. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art of uplifting emission for image-based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the obtained overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real-world data. In a recent publication, we proposed a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution. We further extend this work with a detailed evaluation of the method, both in terms of improved colour error and performance.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Single-Shot Example Terrain Sketching by Graph Neural Networks","authors":"Y. Liu, B. Benes","doi":"10.1111/cgf.15281","DOIUrl":"https://doi.org/10.1111/cgf.15281","url":null,"abstract":"<p>Terrain generation is a challenging problem. Procedural modelling methods lack control, while machine learning methods often need large training datasets and struggle to preserve the topology information. We propose a method that generates a new terrain from a single image for training and a simple user sketch. Our single-shot method preserves the sketch topology while generating diversified results. Our method is based on a graph neural network (GNN) and builds a detailed relation among the sketch-extracted features, that is, ridges and valleys and their neighbouring area. By disentangling the influence from different sketches, our model generates visually realistic terrains following the user sketch while preserving the features from the real terrains. Experiments are conducted to show both qualitative and quantitative comparisons. The structural similarity index measure of our generated and real terrains is around 0.8 on average.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15281","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143513643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}