Graphical Models. Pub Date: 2024-10-25. DOI: 10.1016/j.gmod.2024.101237
Jieyi Chen, Zhen Wen, Li Zheng, Jiaying Lu, Hui Lu, Yiwen Ren, Wei Chen
{"title":"HammingVis: A visual analytics approach for understanding erroneous outcomes of quantum computing in hamming space","authors":"Jieyi Chen , Zhen Wen , Li Zheng , Jiaying Lu , Hui Lu , Yiwen Ren , Wei Chen","doi":"10.1016/j.gmod.2024.101237","DOIUrl":"10.1016/j.gmod.2024.101237","url":null,"abstract":"<div><div>Advanced quantum computers have the capability to perform practical quantum computing to address specific problems that are intractable for classical computers. Nevertheless, these computers are susceptible to noise, leading to unexpectable errors in outcomes, which makes them less trustworthy. To address this challenge, we propose HammingVis, a visual analytics approach that helps identify and understand errors in quantum outcomes. Given that these errors exhibit latent structural patterns within Hamming space, we introduce two graph visualizations to reveal these patterns from distinct perspectives. One highlights the overall structure of errors, while the other focuses on the impact of errors within important subspaces. We further develop a prototype system for interactively exploring and discerning the correct outcomes within Hamming space. A novel design is presented to distinguish the neighborhood patterns between error and correct outcomes. The effectiveness of our approach is demonstrated through case studies involving two classic quantum algorithms’ outcome data.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101237"},"PeriodicalIF":2.5,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the neural landscape: Visual analytics of neuron activation in large language models with NeuronautLLM","authors":"Ollie Woodman , Zhen Wen , Hui Lu , Yiwen Ren , Minfeng Zhu , Wei Chen","doi":"10.1016/j.gmod.2024.101238","DOIUrl":"10.1016/j.gmod.2024.101238","url":null,"abstract":"<div><div>Large language models (LLMs) like those that power OpenAI’s ChatGPT and Google’s Gemini have played a major part in the recent wave of machine learning and artificial intelligence advancements. However, interpreting LLMs and visualizing their components is extremely difficult due to the incredible scale and high dimensionality of model data. NeuronautLLM introduces a visual analysis system for identifying and visualizing influential neurons in transformer-based language models as they relate to user-defined prompts. Our approach combines simple, yet information-dense visualizations as well as neuron explanation and classification data to provide a wealth of opportunities for exploration. NeuronautLLM was reviewed by two experts to verify its efficacy as a tool for practical model interpretation. Interviews and usability tests with five LLM experts demonstrated NeuronautLLM’s exceptional usability and its readiness for real-world application. Furthermore, two in-depth case studies on model reasoning and social bias highlight NeuronautLLM’s versatility in aiding the analysis of a wide range of LLM research problems.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101238"},"PeriodicalIF":2.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-10-19. DOI: 10.1016/j.gmod.2024.101236
Bingchuan Li, Yuping Ye, Junfeng Yao, Yong Yang, Weixing Xie, Mengyuan Ge
{"title":"A detail-preserving method for medial mesh computation in triangular meshes","authors":"Bingchuan Li , Yuping Ye , Junfeng Yao , Yong Yang , Weixing Xie , Mengyuan Ge","doi":"10.1016/j.gmod.2024.101236","DOIUrl":"10.1016/j.gmod.2024.101236","url":null,"abstract":"<div><div>The medial axis transform (MAT) of an object is the set of all points inside the object that have more than one closest point on the object’s boundary. Representing sharp edges and corners of triangular meshes using MAT poses a complex challenge. While some researchers have proposed using zero-radius medial spheres to depict these features, they have not clearly articulated how to establish proper connections among them. In this paper, we propose a novel framework for computing MAT of a triangular mesh while preserving its features. The initial medial axis mesh obtained may contain erroneous edges, which are discussed and addressed in Section 3.3. Furthermore, during the simplification process, it is crucial to ensure that the medial spheres remain within the confines of the triangular mesh. Our algorithm excels in preserving critical features throughout the simplification procedure, consistently ensuring that the spheres remain enclosed within the triangular mesh. Experiments on various types of 3D models demonstrate the robustness, shape fidelity, and efficiency in representation achieved by our algorithm.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101236"},"PeriodicalIF":2.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-10-11. DOI: 10.1016/j.gmod.2024.101235
Jiazhe Miao, Tao Peng, Fei Fang, Xinrong Hu, Li Li
{"title":"GarTemFormer: Temporal transformer-based for optimizing virtual garment animation","authors":"Jiazhe Miao , Tao Peng , Fei Fang , Xinrong Hu , Li Li","doi":"10.1016/j.gmod.2024.101235","DOIUrl":"10.1016/j.gmod.2024.101235","url":null,"abstract":"<div><div>Virtual garment animation and deformation constitute a pivotal research direction in computer graphics, finding extensive applications in domains such as computer games, animation, and film. Traditional physics-based methods can simulate the physical characteristics of garments, such as elasticity and gravity, to generate realistic deformation effects. However, the computational complexity of such methods hinders real-time animation generation. Data-driven approaches, on the other hand, learn from existing garment deformation data, enabling rapid animation generation. Nevertheless, animations produced using this approach often lack realism, struggling to capture subtle variations in garment behavior. We proposes an approach that balances realism and speed, by considering both spatial and temporal dimensions, we leverage real-world videos to capture human motion and garment deformation, thereby producing more realistic animation effects. We address the complexity of spatiotemporal attention by aligning input features and calculating spatiotemporal attention at each spatial position in a batch-wise manner. For garment deformation, garment segmentation techniques are employed to extract garment templates from videos. Subsequently, leveraging our designed Transformer-based temporal framework, we capture the correlation between garment deformation and human body shape features, as well as frame-level dependencies. Furthermore, we utilize a feature fusion strategy to merge shape and motion features, addressing penetration issues between clothing and the human body through post-processing, thus generating collision-free garment deformation sequences. Qualitative and quantitative experiments demonstrate the superiority of our approach over existing methods, efficiently producing temporally coherent and realistic dynamic garment deformations.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101235"},"PeriodicalIF":2.5,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building semantic segmentation from large-scale point clouds via primitive recognition","authors":"Chiara Romanengo , Daniela Cabiddu , Simone Pittaluga, Michela Mortara","doi":"10.1016/j.gmod.2024.101234","DOIUrl":"10.1016/j.gmod.2024.101234","url":null,"abstract":"<div><div>Modelling objects at a large resolution or scale brings challenges in the storage and processing of data and requires efficient structures. In the context of modelling urban environments, we face both issues: 3D data from acquisition extends at geographic scale, and digitization of buildings of historical value can be particularly dense. Therefore, it is crucial to exploit the point cloud derived from acquisition as much as possible, before (or alongside) deriving other representations (e.g., surface or volume meshes) for further needs (e.g., visualization, simulation). In this paper, we present our work in processing 3D data of urban areas towards the generation of a semantic model for a city digital twin. Specifically, we focus on the recognition of shape primitives (e.g., planes, cylinders, spheres) in point clouds representing urban scenes, with the main application being the semantic segmentation into walls, roofs, streets, domes, vaults, arches, and so on.</div><div>Here, we extend the conference contribution in Romanengo et al. (2023a), where we presented our preliminary results on single buildings. In this extended version, we generalize the approach to manage whole cities by preliminarily splitting the point cloud building-wise and streamlining the pipeline. We added a thorough experimentation with a benchmark dataset from the city of Tallinn (47,000 buildings), a portion of Vaihingen (170 building) and our case studies in Catania and Matera, Italy (4 high-resolution buildings). Results show that our approach successfully deals with point clouds of considerable size, either surveyed at high resolution or covering wide areas. In both cases, it proves robust to input noise and outliers but sensitive to uneven sampling density.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101234"},"PeriodicalIF":2.5,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-10-03. DOI: 10.1016/j.gmod.2024.101233
Kun Zhang, Ao Zhang, Xiaohong Wang, Weisong Li
{"title":"Deep-learning-based point cloud completion methods: A review","authors":"Kun Zhang , Ao Zhang , Xiaohong Wang , Weisong Li","doi":"10.1016/j.gmod.2024.101233","DOIUrl":"10.1016/j.gmod.2024.101233","url":null,"abstract":"<div><div>Point cloud completion aims to utilize algorithms to repair missing parts in 3D data for high-quality point clouds. This technology is crucial for applications such as autonomous driving and urban planning. With deep learning’s progress, the robustness and accuracy of point cloud completion have improved significantly. However, the quality of completed point clouds requires further enhancement to satisfy practical requirements. In this study, we conducted an extensive survey of point cloud completion methods, with the following main objectives: (i) We classified point cloud completion methods into categories based on their principles, such as point-based, convolution-based, GAN-based, and geometry-based methods, and thoroughly investigated the advantages and limitations of each category. (ii) We collected publicly available datasets for point cloud completion algorithms and conducted experimental comparisons using various typical deep-learning networks to draw conclusions. (iii) With our research in this paper, we discuss future research trends in this rapidly evolving field.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101233"},"PeriodicalIF":2.5,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142418672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-09-16. DOI: 10.1016/j.gmod.2024.101231
Guo-Wei Yang, Dong-Yu Chen, Tai-Jiang Mu
{"title":"Sketch-2-4D: Sketch driven dynamic 3D scene generation","authors":"Guo-Wei Yang, Dong-Yu Chen, Tai-Jiang Mu","doi":"10.1016/j.gmod.2024.101231","DOIUrl":"10.1016/j.gmod.2024.101231","url":null,"abstract":"<div><p>Sketch-based content generation offers flexible controllability, making it a promising narrative avenue in film production. Directors often visualize their imagination by crafting storyboards using sketches and textual descriptions for each shot. However, current video generation methods suffer from three-dimensional inconsistencies, with notably artifacts during large motion or camera pans around scenes. A suitable solution is to directly generate 4D scene, enabling consistent dynamic three-dimensional scenes generation. We define the Sketch-2-4D problem, aiming to enhance controllability and consistency in this context. We propose a novel Control Score Distillation Sampling (SDS-C) for sketch-based 4D scene generation, providing precise control over scene dynamics. We further design Spatial Consistency Modules and Temporal Consistency Modules to tackle the temporal and spatial inconsistencies introduced by sketch-based control, respectively. Extensive experiments have demonstrated the effectiveness of our approach.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101231"},"PeriodicalIF":2.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000195/pdfft?md5=12c973a601d5430e660ae4453ec0a4d8&pid=1-s2.0-S1524070324000195-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142244146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-09-12. DOI: 10.1016/j.gmod.2024.101230
Shuxian Cai, Yuanyan Ye, Juan Cao, Zhonggui Chen
{"title":"FACE: Feature-preserving CAD model surface reconstruction","authors":"Shuxian Cai , Yuanyan Ye , Juan Cao , Zhonggui Chen","doi":"10.1016/j.gmod.2024.101230","DOIUrl":"10.1016/j.gmod.2024.101230","url":null,"abstract":"<div><p>Feature lines play a pivotal role in the reconstruction of CAD models. Currently, there is a lack of a robust explicit reconstruction algorithm capable of achieving sharp feature reconstruction in point clouds with noise and non-uniformity. In this paper, we propose a feature-preserving CAD model surface reconstruction algorithm, named FACE. The algorithm initiates with preprocessing the point cloud through denoising and resampling steps, resulting in a high-quality point cloud that is devoid of noise and uniformly distributed. Then, it employs discrete optimal transport to detect feature regions and subsequently generates dense points along potential feature lines to enhance features. Finally, the advancing-front surface reconstruction method, based on normal vector directions, is applied to reconstruct the enhanced point cloud. Extensive experiments demonstrate that, for contaminated point clouds, this algorithm excels not only in reconstructing straight edges and corner points but also in handling curved edges and surfaces, surpassing existing methods.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"136 ","pages":"Article 101230"},"PeriodicalIF":2.5,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000183/pdfft?md5=c92c397f0636a8c7097baed24a31ef77&pid=1-s2.0-S1524070324000183-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-09-05. DOI: 10.1016/j.gmod.2024.101229
K. He, J.B.T.M. Roerdink, J. Kosinka
{"title":"Image vectorization using a sparse patch layout","authors":"K. He, J.B.T.M. Roerdink, J. Kosinka","doi":"10.1016/j.gmod.2024.101229","DOIUrl":"10.1016/j.gmod.2024.101229","url":null,"abstract":"<div><p>Mesh-based image vectorization techniques have been studied for a long time, mostly owing to their compactness and flexibility in capturing image features. However, existing methods often lead to relatively dense meshes, especially when applied to images with high-frequency details or textures. We present a novel method that automatically vectorizes an image into a sparse collection of Coons patches whose size adapts to image features. To balance the number of patches and the accuracy of feature alignment, we generate the layout based on a harmonic cross field constrained by image features. We support T-junctions, which keeps the number of patches low and ensures local adaptation to feature density, naturally complemented by varying mesh-color resolution over the patches. Our experimental results demonstrate the utility, accuracy, and sparsity of our method.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101229"},"PeriodicalIF":2.5,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000171/pdfft?md5=68d700973ee613d865f875bbdad4d05d&pid=1-s2.0-S1524070324000171-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142149676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graphical Models. Pub Date: 2024-09-02. DOI: 10.1016/j.gmod.2024.101228
Yan Zhu, Yasushi Yamaguchi
{"title":"Corrigendum to Image restoration for digital line drawings using line masks [Graphical Models 135 (2024) 101226]","authors":"Yan Zhu, Yasushi Yamaguchi","doi":"10.1016/j.gmod.2024.101228","DOIUrl":"10.1016/j.gmod.2024.101228","url":null,"abstract":"","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101228"},"PeriodicalIF":2.5,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S152407032400016X/pdfft?md5=c31a932ed00cc957b9680b9f31021df7&pid=1-s2.0-S152407032400016X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}