{"title":"IMPAVID: Enhancing incident management process compliance assessment with visual analytics","authors":"Alessandro Palma , Marco Angelini","doi":"10.1016/j.cag.2025.104243","DOIUrl":"10.1016/j.cag.2025.104243","url":null,"abstract":"<div><div>The Incident Management Process (IMP) is crucial to prevent, protect against, and respond to security incidents that impact an organization. To ensure readiness for potential alerts, the IMP must comply with security standards, which provide guidelines for managing such incidents, and organizations are expected to adhere to these standards to establish a secure-by-design approach. Evaluating an organization’s compliance with security standards is often labor-intensive, as traditional methods rely heavily on manual analysis. Incorporating automated approaches to aid decision-making presents additional challenges, such as data interpretation and correlation. To address these challenges, we present IMPAVID, a visual analytics solution designed to support the assessment of IMP compliance through process-centric techniques. IMPAVID aims to enhance the security assessor’s awareness, enabling them to make informed decisions about improving the IMP alignment with regulatory and technical standards. To ensure the context-awareness of these techniques, IMPAVID leverages a deviations taxonomy and a cost model to propose a more fine-grained analysis linking together process and technical data while allowing to focus on general root causes for non-compliance. In the literature, cost models often rely on parametric cost functions that provide a valuable solution for fine-grained assessments while introducing additional challenges related to the effort necessary for security assessors to determine suitable parameter configurations. Thus, the IMPAVID system implements additional requirements and a visual environment to support data-driven, assisted, and interactive parameter configuration during IMP compliance assessment. We validate our system by presenting a comprehensive case study based on a publicly available dataset, which includes real IMP log data from an IT company. It shows the system’s capabilities to perform IMP compliance assessment while dynamically configuring the parameters of the proposed compliance cost model, enabling more effective and efficient analysis.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104243"},"PeriodicalIF":2.5,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144243037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foreword to the special section on SIBGRAPI 2023 tutorials","authors":"Rafael Piccin Torchelsen , João Paulo Lima","doi":"10.1016/j.cag.2025.104257","DOIUrl":"10.1016/j.cag.2025.104257","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104257"},"PeriodicalIF":2.5,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144212902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Level Cross-Attention Point Cloud Completion Network","authors":"Wen-Xuan Chen , Yong Hu , Bei-Yi Tian , Wen Luo , Lin-Wang Yuan","doi":"10.1016/j.cag.2025.104253","DOIUrl":"10.1016/j.cag.2025.104253","url":null,"abstract":"<div><div>Sensors often acquire point cloud data that is sparse and incomplete due to limitations in resolution or occlusion. Therefore, it is essential for practical applications to reconstruct the original shape from the incomplete point cloud. However, the existing methods based on Transformer architecture fail to make full use of the cross-attention mechanism to extract and fuse the features of points and the relationship between points, leading to a deficiency in detailed feature representation. In this paper, we present Multi-Level Cross-Attention Point Cloud Completion Network (MLCANet), which leverages the multi-level features of point clouds and their feature associations to optimize the generation of points. First, MLCANet enhances the features of the point cloud through Multi-Scale Feature Enhancement Cross-Attention (MSFECA) within the encoder. This approach facilitates interaction between channel and spatial dimension information derived from both low-resolution and high-resolution point clouds. Second, we propose Structural Similarity Cross-Attention (SSCA) in the decoder to learn prior knowledge from partial point clouds, thereby improving detail recovery. Third, we present an Augmented Affiliation Transformation (AAT) designed to correct positional discrepancies between partial and missing points. Our experiments demonstrate the effectiveness of our method for completing several challenging point cloud data both qualitatively and quantitatively, with the Chamfer Distance (CD) reduced by at least 3.1% and 4.7% compared to existing methods on the ShapeNet-Part and ModelNet40 datasets.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104253"},"PeriodicalIF":2.5,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144204953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FCAT-Diff: Flexible and Consistent Appearance Transfer Based on Training-free Diffusion Model","authors":"Zhengyi Gong, Mingwen Shao, Chang Liu, Xiang Lv, Huan Liu","doi":"10.1016/j.cag.2025.104247","DOIUrl":"10.1016/j.cag.2025.104247","url":null,"abstract":"<div><div>The core goal of appearance transfer is to seamlessly integrate the appearance of a reference image into a content image. However, existing methods operate on the entire image and fail to accurately identify the regions of interest for appearance transfer, leading to structural loss and incorrect background transfer. Additionally, these methods lack flexibility, making it difficult to achieve fine-grained control at the regional level. To address these issues, we propose <strong>FCAT-Diff</strong>, a training-free framework for flexible and consistent appearance transfer without additional training or fine-tuning. Specifically, to achieve more consistent appearance transfer, we employ a dual-guidance branch to provide structure and appearance features, which are fused through an enhanced self-attention module called <strong>Mask-Appearance-Attention (MAA)</strong>. The MAA clearly distinguishes the boundaries between the background and the transferred region, ensuring consistency in both the structure and background. To increase the flexibility of transfer, we utilize a mask that allows users to select the regions of interest for transfer, enabling appearance transfer for specified regions. Furthermore, given multiple reference images and their corresponding regions, our FCAT-Diff supports the transfer of multiple appearances. Extensive experiments demonstrate that our method achieves <strong>state-of-the-art (SOTA)</strong> performance in maintaining the structural and background consistency of the content image while providing greater flexibility.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104247"},"PeriodicalIF":2.5,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144204952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Reconstruction in Robotics: A Comprehensive Review","authors":"Dharmendra Selvaratnam, Dena Bazazian","doi":"10.1016/j.cag.2025.104256","DOIUrl":"10.1016/j.cag.2025.104256","url":null,"abstract":"<div><div>In this paper, we delve into the swiftly progressing domain of 3D reconstruction within robotics, a field of critical importance in contemporary research. The inherent potential of robots to engage with and understand their environment is significantly enhanced by integrating 3D reconstruction techniques, which draw inspiration from the complex processes of natural evolution and human perception. This study not only highlights the importance of 3D reconstruction methodologies in the broader context of technological advancement but also outlines their pivotal contributions to the field of robotics. Humans have evolved over millions of years to adapt to their surroundings through natural selection, enabling them to perceive the world. 3D reconstruction methods are inspired by natural processes to replicate objects, providing more detailed information about the perceived object. With this approach to object perception, robotics plays a crucial role in utilising these techniques to interact with the real world. Our study illustrates recent advancements in applying 3D reconstruction methods within robotics and discusses necessary improvements and applications for future research in the field.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104256"},"PeriodicalIF":2.5,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144288763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time indexing and visualization of LiDAR point clouds with arbitrary attributes using the M3NO data structure","authors":"Paul Hermann, Michel Krämer, Tobias Dorra, Arjan Kuijper","doi":"10.1016/j.cag.2025.104254","DOIUrl":"10.1016/j.cag.2025.104254","url":null,"abstract":"<div><div>In previous work, we have presented an approach to index 3D LiDAR point clouds in real time, i.e. while they are being recorded. We have further introduced a novel data structure called M<sup>3</sup>NO, which allows arbitrary attributes to be indexed directly during data acquisition. Based on this, we now present an integrated approach that supports not only real-time indexing but also visualization with attribute filtering. We specifically focus on large datasets from airborne and land-based mobile mapping systems. Compared to traditional indexing approaches running offline, the M<sup>3</sup>NO is created incrementally. This enables dynamic queries based on spatial extent and value ranges of arbitrary attributes. The points in the data structure are assigned to levels of detail (LOD), which can be used to create interactive visualizations. This is in contrast to other approaches, which focus on either spatial or attribute indexing, only support a limited set of attributes, or do not support real-time visualization. Using several publicly available large data sets, we evaluate the approach, assess quality and query performance, and compare it with existing state-of-the-art indexing solutions. The results show that our data structure is able to index 5.24 million points per second. This is more than most commercially available laser scanners can record and proves that low-latency visualization during the capturing process is possible.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104254"},"PeriodicalIF":2.5,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144184847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancements in deep learning for point cloud classification and segmentation: A comprehensive review","authors":"Wei Zhou , Kunlong Liu , Weiwei Jin , Qian Wang , Yunfeng She , Yongxiang Yu , Caiwen Ma","doi":"10.1016/j.cag.2025.104238","DOIUrl":"10.1016/j.cag.2025.104238","url":null,"abstract":"<div><div>Point clouds, a foundational 3D data representation, are extensively utilized in fields such as autonomous driving and robotics due to their capability to represent complex spatial structures. With the rapid advancement of artificial intelligence, leveraging deep learning to enhance point cloud processing has become a central focus in computer vision research. The unstructured nature, large-scale data volume, and labor-intensive annotation of point clouds present unique challenges for designing deep learning models. This paper provides a comprehensive review of the development and latest advancements in deep learning models for point cloud processing, with a specific focus on classification and segmentation. We systematically outline the technical approaches and key strategies for addressing these challenges, offering a clear understanding of the most recent and notable research in the field. Furthermore, we discuss the potential challenges and future research directions in point cloud processing by analyzing the respective strengths and weaknesses of prevailing techniques, thus to guide the evolution of point cloud processing technologies.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104238"},"PeriodicalIF":2.5,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144196440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Duet dancing from a solo dance video","authors":"Yarin Moshe, Yael Moses, Ariel Shamir","doi":"10.1016/j.cag.2025.104250","DOIUrl":"10.1016/j.cag.2025.104250","url":null,"abstract":"<div><div>This paper introduces a method to generate duet dance videos from an input solo dancer’s video performance. Addressing this novel problem, our system tackles a set of sub-tasks, including dancer segmentation, camera motion handling, stage reconstruction and the intricate management of geometric constraints such as dancer scale preservation and dancers collision prevention. The proposed approach leverages existing methodologies and new solutions. Notably, we address collisions in 2D space–time directly, departing from traditional 3D approaches. We modify the initial location of the dancer to avoid long-time collisions globally, while also modulating the pace of the dance by deliberate slowing down or accelerating motion to avoid short collisions locally. Experimental results attest to the efficacy of our approach. The system not only successfully synthesizes engaging duet dance sequences but also upholds the authenticity of individual performances, as shown by a user study.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104250"},"PeriodicalIF":2.5,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144223284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sketch2Data: Recovering data from hand-drawn infographics","authors":"Anran Qi , Theophanis Tsandilas , Ariel Shamir , Adrien Bousseau","doi":"10.1016/j.cag.2025.104251","DOIUrl":"10.1016/j.cag.2025.104251","url":null,"abstract":"<div><div>Data collection and visualization have traditionally been seen as activities reserved for experts. However, by drawing simple geometric figures – known as <em>glyphs</em> – anyone can visually record their own data. Still, the resulting <em>hand-drawn infographics</em> do not provide direct access to the underlying data, hindering digital editing of both the glyphs and their values. We introduce a method to recover data values from glyph-based hand-drawn infographics. Given a visualization in a bitmap format and a user-defined parametric template of its glyphs, we leverage deep neural networks to detect and localize the visualization glyphs, and estimate the data values they represent. We also provide a user interface to review and correct these estimates, informed by a measure of uncertainty of the neural network predictions. Our reverse-engineering procedure effectively disentangles the depicted data from its visual representation, enabling various visualization authoring applications, such as visualizing new data values or experimenting with alternative visualizations of the same data.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"130 ","pages":"Article 104251"},"PeriodicalIF":2.5,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144255258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}