Computers & Graphics-Uk: Latest Articles

ADA-SCMS Net: A self-supervised clustering-based 3D mesh segmentation network with aggregation dual autoencoder
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-10-11 DOI: 10.1016/j.cag.2024.104100
{"title":"ADA-SCMS Net: A self-supervised clustering-based 3D mesh segmentation network with aggregation dual autoencoder","authors":"","doi":"10.1016/j.cag.2024.104100","DOIUrl":"10.1016/j.cag.2024.104100","url":null,"abstract":"<div><div>Despite significant advances in 3D mesh segmentation techniques driven by deep learning, segmenting 3D meshes without exhaustive manual labeling remains a challenging due to difficulties in acquiring high-quality labeled datasets. This paper introduces an <strong>a</strong>ggregation <strong>d</strong>ual <strong>a</strong>utoencoder <strong>s</strong>elf-supervised <strong>c</strong>lustering-based <strong>m</strong>esh <strong>s</strong>egmentation network for unlabeled 3D meshes (ADA-SCMS Net). Expanding upon the previously proposed SCMS-Net, the ADA-SCMS Net enhances the segmentation process by incorporating a denoising autoencoder with an improved graph autoencoder as its basic structure. This modification prompts the segmentation network to concentrate on the primary structure of the input data during training, enabling the capture of robust features. In addition, the ADA-SCMS network introduces two new modules. One module is named the branch aggregation module, which combines the strengths of two branches to create a semantic latent representation. The other is the aggregation self-supervised clustering module, which facilitates end-to-end clustering training by iteratively updating each branch through mutual supervision. Extensive experiments on benchmark datasets validate the effectiveness of the ADA-SCMS network, demonstrating superior segmentation performance compared to the SCMS network.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142437819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
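To make the mutual-supervision idea above concrete, here is a minimal, heavily simplified sketch. It is not the authors' implementation: the denoising and graph autoencoders are replaced with plain MLP branches over generic per-face descriptors, and the DEC-style target sharpening, layer sizes, and cluster count are all assumptions; only the pattern of two branches supervising each other's soft cluster assignments is illustrated.

```python
# Hypothetical sketch of clustering with mutual supervision between two branches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """One encoder branch: latent code plus soft cluster assignments."""
    def __init__(self, in_dim=64, latent_dim=32, n_clusters=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.cluster_head = nn.Linear(latent_dim, n_clusters)

    def forward(self, x):
        z = self.encoder(x)
        q = F.softmax(self.cluster_head(z), dim=-1)  # soft assignments per face
        return z, q

def sharpen(q, eps=1e-8):
    """DEC-style target distribution that emphasizes confident assignments."""
    w = q ** 2 / (q.sum(dim=0, keepdim=True) + eps)
    return w / w.sum(dim=1, keepdim=True)

branch_a, branch_b = Branch(), Branch()
faces = torch.randn(1024, 64)          # stand-in per-face descriptors

z_a, q_a = branch_a(faces)
z_b, q_b = branch_b(faces)
z_fused = 0.5 * (z_a + z_b)            # "branch aggregation": one shared semantic latent

# Mutual supervision: each branch is pulled toward the sharpened assignments
# of the other, so the two clusterings agree without any labels.
loss = F.kl_div(q_a.log(), sharpen(q_b).detach(), reduction="batchmean") \
     + F.kl_div(q_b.log(), sharpen(q_a).detach(), reduction="batchmean")
loss.backward()
```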
Comparative analysis of spatiotemporal playback manipulation on virtual reality training for External Ventricular Drainage
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-10-10 DOI: 10.1016/j.cag.2024.104106
{"title":"Comparative analysis of spatiotemporal playback manipulation on virtual reality training for External Ventricular Drainage","authors":"","doi":"10.1016/j.cag.2024.104106","DOIUrl":"10.1016/j.cag.2024.104106","url":null,"abstract":"<div><div>Extensive research has been conducted in multiple surgical specialities where Virtual Reality (VR) has been utilised, such as spinal neurosurgery. However, cranial neurosurgery remains relatively unexplored in this regard. This work explores the impact of adopting VR to study External Ventricular Drainage (EVD). In this study, pre-recorded Motion Captured data of an EVD procedure is visualised on a VR headset, in comparison to a desktop monitor condition. Participants (<span><math><mrow><mi>N</mi><mo>=</mo><mn>20</mn></mrow></math></span>) were tasked with identifying and marking a key moment in the recordings. Objective and subjective metrics were recorded, such as completion time, temporal and spatial error distances, workload, and usability. The results from the experiment showed that the task was completed on average twice as fast in VR, when compared to desktop. However, desktop showed fewer error-prone results. Subjective feedback showed a slightly higher preference towards the VR environment concerning usability, while maintaining a comparable workload. Overall, VR displays are promising as an alternative tool to be used for educational and training purposes in cranial surgery.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142417390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An immersive labeling method for large point clouds
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-10-05 DOI: 10.1016/j.cag.2024.104101
{"title":"An immersive labeling method for large point clouds","authors":"","doi":"10.1016/j.cag.2024.104101","DOIUrl":"10.1016/j.cag.2024.104101","url":null,"abstract":"<div><div>3D point clouds, such as those produced by 3D scanners, often require labeling – the accurate classification of each point into structural or semantic categories – before they can be used in their intended application. However, in the absence of fully automated methods, such labeling must be performed manually, which can prove extremely time and labor intensive. To address this we present a virtual reality tool for accelerating and improving the manual labeling of very large 3D point clouds. The labeling tool provides a variety of 3D interactions for efficient viewing, selection and labeling of points using the controllers of consumer VR-kits. The main contribution of our work is a mixed CPU/GPU-based data structure that supports rendering, selection and labeling with immediate visual feedback at high frame rates necessary for a convenient VR experience. Our mixed CPU/GPU data structure supports fluid interaction with very large point clouds in VR, what is not possible with existing continuous level-of-detail rendering algorithms. We evaluate our method with 25 users on tasks involving point clouds of up to 50 million points and find convincing results that support the case for VR-based point cloud labeling.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142417484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
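The paper's mixed CPU/GPU data structure is not described in detail in the abstract, so the sketch below only illustrates the general idea of spatially bucketing a huge point cloud so that a selection brush touches just a few buckets; the chunk size, numpy layout, and `label_sphere` API are hypothetical and carry none of the GPU-side rendering machinery.

```python
# Hypothetical CPU-side sketch: a uniform-grid chunking of a large point cloud
# so that brush-style labeling only visits points in nearby chunks.
import numpy as np
from collections import defaultdict

class ChunkedPointCloud:
    def __init__(self, points, chunk_size=1.0):
        self.points = points                          # (N, 3) float array
        self.labels = np.zeros(len(points), dtype=np.int32)
        self.chunk_size = chunk_size
        self.chunks = defaultdict(list)               # grid cell -> point indices
        cells = np.floor(points / chunk_size).astype(int)
        for i, key in enumerate(map(tuple, cells.tolist())):
            self.chunks[key].append(i)

    def label_sphere(self, center, radius, label):
        """Assign `label` to every point within `radius` of `center`."""
        lo = np.floor((center - radius) / self.chunk_size).astype(int)
        hi = np.floor((center + radius) / self.chunk_size).astype(int)
        for kx in range(lo[0], hi[0] + 1):
            for ky in range(lo[1], hi[1] + 1):
                for kz in range(lo[2], hi[2] + 1):
                    idx = self.chunks.get((kx, ky, kz))
                    if not idx:
                        continue
                    idx = np.asarray(idx)
                    hit = np.linalg.norm(self.points[idx] - center, axis=1) <= radius
                    self.labels[idx[hit]] = label

cloud = ChunkedPointCloud(np.random.rand(1_000_000, 3).astype(np.float32) * 10)
cloud.label_sphere(np.array([5.0, 5.0, 5.0]), radius=0.5, label=3)
print(int((cloud.labels == 3).sum()), "points labeled")
```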
Advances in vision-based deep learning methods for interacting hands reconstruction: A survey
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-10-05 DOI: 10.1016/j.cag.2024.104102
{"title":"Advances in vision-based deep learning methods for interacting hands reconstruction: A survey","authors":"","doi":"10.1016/j.cag.2024.104102","DOIUrl":"10.1016/j.cag.2024.104102","url":null,"abstract":"<div><div>Vision-based hand reconstructions have become noteworthy tools in enhancing interactive experiences in various applications such as virtual reality, augmented reality, and autonomous driving, which enable sophisticated interactions by reconstructing complex motions of human hands. Despite significant progress driven by deep-learning methodologies, the quest for high-fidelity interacting hands reconstruction faces challenges such as limited dataset diversity, lack of detailed hand representation, occlusions, and differentiation between similar hand structures. This survey thoroughly reviews deep learning-based methods, diverse datasets, loss functions, and evaluation metrics addressing the complexities of interacting hands reconstruction. Mainstream algorithms of the past five years are systematically classified into two main categories: algorithms that employ explicit representations, such as parametric meshes and 3D Gaussian splatting, and those that utilize implicit representations, including signed distance fields and neural radiance fields. Novel deep-learning models like graph convolutional networks and transformers are applied to solve the aforementioned challenges in hand reconstruction effectively. Beyond summarizing these interaction-aware algorithms, this survey also briefly discusses hand tracking in virtual reality and augmented reality. To the best of our knowledge, this is the first survey specifically focusing on the reconstruction of both hands and their interactions with objects. The survey contains the various facets of hand modeling, deep learning approaches, and datasets, broadening the horizon of hand reconstruction research and future innovation in natural user interactions.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142417394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Diverse non-homogeneous texture synthesis from a single exemplar
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-10-04 DOI: 10.1016/j.cag.2024.104099
{"title":"Diverse non-homogeneous texture synthesis from a single exemplar","authors":"","doi":"10.1016/j.cag.2024.104099","DOIUrl":"10.1016/j.cag.2024.104099","url":null,"abstract":"<div><div>Capturing non-local, long range features present in non-homogeneous textures is difficult to achieve with existing techniques. We introduce a new training method and architecture for single-exemplar texture synthesis that combines a Generative Adversarial Network (GAN) and a Variational Autoencoder (VAE). In the proposed architecture, the combined networks share information during training via structurally identical, independent blocks, facilitating highly diverse texture variations from a single image exemplar. Supporting this training method, we also include a similarity loss term that further encourages diverse output while also improving the overall quality. Using our approach, it is possible to produce diverse results over the entire sample size taken from a single model that can be trained in approximately 15 min. We show that our approach obtains superior performance when compared to SOTA texture synthesis methods and single image GAN methods using standard diversity and quality metrics.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142417393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
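The abstract does not give the exact form of its similarity loss, so the snippet below shows one common way to write such a term, a mode-seeking style penalty that pushes outputs generated from different latent codes apart; the toy generator, latent size, and L1 distances are placeholder assumptions rather than the paper's formulation.

```python
# Hypothetical diversity/similarity term: penalize a generator that maps
# distinct latent codes to near-identical textures.
import torch
import torch.nn as nn
import torch.nn.functional as F

def diversity_loss(generator, z1, z2, eps=1e-6):
    out1, out2 = generator(z1), generator(z2)
    image_dist = F.l1_loss(out1, out2)        # how different the two textures are
    latent_dist = F.l1_loss(z1, z2)           # how different their latent codes are
    # Minimizing the negative ratio rewards diverse outputs per unit of latent change.
    return -image_dist / (latent_dist + eps)

# Toy generator standing in for the combined GAN/VAE blocks.
generator = nn.Sequential(nn.Linear(16, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))
z1, z2 = torch.randn(4, 16), torch.randn(4, 16)
loss = diversity_loss(generator, z1, z2)
loss.backward()
```

In a full setup, a term like this would be added with a small weight to the adversarial and reconstruction objectives rather than used on its own.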
Flow style-aware network for arbitrary style transfer
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-09-29 DOI: 10.1016/j.cag.2024.104098
{"title":"Flow style-aware network for arbitrary style transfer","authors":"","doi":"10.1016/j.cag.2024.104098","DOIUrl":"10.1016/j.cag.2024.104098","url":null,"abstract":"<div><div>Researchers have recently proposed arbitrary style transfer methods based on various model frameworks. Although all of them have achieved good results, they still face the problems of insufficient stylization, artifacts and inadequate retention of content structure. In order to solve these problems, we propose a flow style-aware network (FSANet) for arbitrary style transfer, which combines a VGG network and a flow network. FSANet consists of a flow style transfer module (FSTM), a dynamic regulation attention module (DRAM), and a style feature interaction module (SFIM). The flow style transfer module uses the reversible residue block features of the flow network to create a sample feature containing the target content and style. To adapt the FSTM to VGG networks, we design the dynamic regulation attention module and exploit the sample features both at the channel and pixel levels. The style feature interaction module computes a style tensor that optimizes the fused features. Extensive qualitative and quantitative experiments demonstrate that our proposed FSANet can effectively avoid artifacts and enhance the preservation of content details while migrating style features.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142417549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
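The DRAM internals are not specified in the abstract beyond operating at the channel and pixel levels, so the sketch below shows one plausible reading, a generic channel-plus-spatial gating block; the layer sizes, sigmoid gating, and the way the sample feature modulates the VGG content feature are all assumptions, not the paper's design.

```python
# Hypothetical channel + pixel gating, standing in for the regulation attention idea.
import torch
import torch.nn as nn

class ChannelPixelGate(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(            # one weight per channel
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.pixel_gate = nn.Sequential(              # one weight per spatial location
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, content_feat, sample_feat):
        x = content_feat * self.channel_gate(sample_feat)
        return x * self.pixel_gate(sample_feat)

gate = ChannelPixelGate(channels=256)
content = torch.randn(1, 256, 32, 32)    # e.g. a VGG relu4_1 feature map
sample = torch.randn(1, 256, 32, 32)     # feature produced by the flow branch
fused = gate(content, sample)            # same shape as content
```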
Self-supervised reconstruction of re-renderable facial textures from single image
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-09-28 DOI: 10.1016/j.cag.2024.104096
{"title":"Self-supervised reconstruction of re-renderable facial textures from single image","authors":"","doi":"10.1016/j.cag.2024.104096","DOIUrl":"10.1016/j.cag.2024.104096","url":null,"abstract":"<div><div>Reconstructing high-fidelity 3D facial texture from a single image is a quite challenging task due to the lack of complete face information and the domain gap between the 3D face and 2D image. Further, obtaining re-renderable 3D faces has become a strongly desired property in many applications, where the term ’re-renderable’ demands the facial texture to be spatially complete and disentangled with environmental illumination. In this paper, we propose a new self-supervised deep learning framework for reconstructing high-quality and re-renderable facial albedos from single-view images in the wild. Our main idea is to first utilize a <em>prior generation module</em> based on the 3DMM proxy model to produce an unwrapped texture and a globally parameterized prior albedo. Then we apply a <em>detail refinement module</em> to synthesize the final texture with both high-frequency details and completeness. To further make facial textures disentangled with illumination, we propose a novel detailed illumination representation that is reconstructed with the detailed albedo together. We also design several novel regularization losses on both the albedo and illumination maps to facilitate the disentanglement of these two factors. Finally, by leveraging a differentiable renderer, each face attribute can be jointly trained in a self-supervised manner without requiring ground-truth facial reflectance. Extensive comparisons and ablation studies on challenging datasets demonstrate that our framework outperforms state-of-the-art approaches.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
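The regularization losses themselves are not spelled out in the abstract, so the snippet below shows two generic regularizers that are often used for albedo/illumination disentanglement, total-variation smoothness on the illumination map and a bilateral-symmetry prior on the albedo; treat both terms, and their weights, as placeholder assumptions rather than the paper's losses.

```python
# Hypothetical regularizers for albedo/illumination disentanglement.
import torch

def total_variation(img):
    """Encourage piecewise-smooth maps (img: B x C x H x W)."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def albedo_symmetry(albedo):
    """Faces are roughly left-right symmetric while lighting usually is not,
    so placing this prior on the albedo alone pushes shading into the
    illumination map."""
    return (albedo - torch.flip(albedo, dims=[-1])).abs().mean()

albedo = torch.rand(2, 3, 256, 256, requires_grad=True)
illumination = torch.rand(2, 3, 256, 256, requires_grad=True)
reg = 0.1 * total_variation(illumination) + 0.05 * albedo_symmetry(albedo)
reg.backward()
```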
Psychophysiology of rhythmic stimuli and time experience in virtual reality
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-09-27 DOI: 10.1016/j.cag.2024.104097
{"title":"Psychophysiology of rhythmic stimuli and time experience in virtual reality","authors":"","doi":"10.1016/j.cag.2024.104097","DOIUrl":"10.1016/j.cag.2024.104097","url":null,"abstract":"<div><div>Time experience is an essential part of one’s perception of any environment, real or virtual. In this article, from a virtual environment design perspective, we explore how rhythmic stimuli can influence an unrelated cognitive task regarding time experience and performance in virtual reality. This study explicitly includes physiological data to investigate how, overall, experience correlates with psychophysiological observations. The task involves sorting 3D objects by shape, with varying rhythmic stimuli in terms of their tempo and sensory channel (auditory and/or visual) in different trials, to collect subjective measures of time estimation and judgment. The results indicate different effects on time experience and performance depending on the context, such as user fatigue and trial repetition. Depending on the context, a positive impact of audio stimuli or a negative impact of visual stimuli on task performance can be observed, as well as time being underestimated concerning tempo in relation to task familiarity. However, some effects are consistent regardless of context, such as time being judged to pass faster with additional stimuli or consistent correlations between participants’ performance and time experience, suggesting flow-related aspects. We also observe correlations between time experience with eye-tracking data and body temperature, yet some of these correlations may be due to a confounding effect of fatigue. If confirmed as separate from fatigue, these physiological data could be used as reference point for evaluating a user’s time experience. This might be of great interest for designing virtual environments, as purposeful stimuli can strongly influence task performance and time experience, both essential components of virtual environment user experience.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142417389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing MeshNet for 3D shape classification with focal and regularization losses
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-09-25 DOI: 10.1016/j.cag.2024.104094
{"title":"Enhancing MeshNet for 3D shape classification with focal and regularization losses","authors":"","doi":"10.1016/j.cag.2024.104094","DOIUrl":"10.1016/j.cag.2024.104094","url":null,"abstract":"<div><div>With the development of deep learning and computer vision, an increasing amount of research has focused on applying deep learning models to the recognition and classification of three-dimensional shapes. In classification tasks, differences in sample quantity, feature amount, model complexity, and other aspects among different categories of 3D model data cause significant variations in classification difficulty. However, simple cross-entropy loss is generally used as the loss function, but it is insufficient to address these differences. In this paper, we used MeshNet as the base model and introduced focal loss as a metric for the loss function. Additionally, to prevent deep learning models from developing a preference for specific categories, we incorporated regularization loss. The combined use of focal loss and regularization loss in optimizing the MeshNet model’s loss function resulted in a classification accuracy of up to 92.46%, representing a 0.20% improvement over the original model’s highest accuracy of 92.26%. Furthermore, the average accuracy over the final 50 epochs remained stable at a higher level of 92.01%, reflecting a 0.71% improvement compared to the original MeshNet model’s 91.30%. These results indicate that our method performs better in 3D shape classification task.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142356927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
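Focal loss has a standard form, so the sketch below shows it together with one plausible reading of the regularization term, a penalty that keeps the batch-average prediction from collapsing onto a few classes; the gamma value, the 0.1 weight, the uniform-prior regularizer, and the 40-class setup are assumptions, not details taken from the paper.

```python
# Focal loss plus a hypothetical class-balance regularizer for 3D shape classification.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    log_p = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_p, targets, reduction="none")             # per-sample CE
    pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()   # prob of true class
    return ((1.0 - pt) ** gamma * ce).mean()                      # down-weight easy samples

def class_balance_reg(logits):
    """Penalize batch-average predictions that ignore some classes."""
    mean_pred = F.softmax(logits, dim=-1).mean(dim=0)
    uniform = torch.full_like(mean_pred, 1.0 / mean_pred.numel())
    return F.kl_div(mean_pred.log(), uniform, reduction="sum")

logits = torch.randn(32, 40, requires_grad=True)   # batch of 32, 40 shape classes
targets = torch.randint(0, 40, (32,))
loss = focal_loss(logits, targets) + 0.1 * class_balance_reg(logits)
loss.backward()
```

In a full training loop the logits would come from the mesh classifier's head, and the regularizer weight would be tuned on a validation split.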
ChatKG: Visualizing time-series patterns aided by intelligent agents and a knowledge graph
IF 2.5 | Tier 4 | Computer Science
Computers & Graphics-Uk Pub Date: 2024-09-24 DOI: 10.1016/j.cag.2024.104092
{"title":"ChatKG: Visualizing time-series patterns aided by intelligent agents and a knowledge graph","authors":"","doi":"10.1016/j.cag.2024.104092","DOIUrl":"10.1016/j.cag.2024.104092","url":null,"abstract":"<div><div>Line-chart visualizations of temporal data enable users to identify interesting patterns for the user to inquire about. Using Intelligent Agents (IA), Visual Analytic tools can automatically uncover <em>explicit knowledge</em> related information to said patterns. Yet, visualizing the association of data, patterns, and knowledge is not straightforward. In this paper, we present <em>ChatKG</em>, a novel visual analytics strategy that allows exploratory data analysis of a Knowledge Graph that associates temporal sequences, the patterns found in each sequence, the temporal overlap between patterns, the related knowledge of each given pattern gathered from a multi-agent IA, and the IA’s suggestions of related datasets for further analysis visualized as annotations. We exemplify and informally evaluate ChatKG by analyzing the world’s life expectancy. For this, we implement an oracle that automatically extracts relevant or interesting patterns, populates the Knowledge Graph to be visualized, and, during user interaction, inquires the multi-agent IA for related information and suggests related datasets to be displayed as visual annotations. Our tests and an interview conducted showed that ChatKG is well suited for temporal analysis of temporal patterns and their related knowledge when applied to history studies.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142357034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
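The abstract describes the Knowledge Graph only in terms of what it associates, so here is a toy, purely illustrative version of that association structure using networkx; the node kinds, relation names, and the 1918 example are invented for illustration and are not taken from the paper.

```python
# Toy illustration of a graph linking a time series, a detected pattern,
# agent-gathered knowledge, and a suggested related dataset.
import networkx as nx

kg = nx.DiGraph()
kg.add_node("life_expectancy_world", kind="series")
kg.add_node("dip_1918_1920", kind="pattern", interval=(1918, 1920))
kg.add_node("fact_influenza_pandemic", kind="knowledge",
            text="Drop coincides with the 1918 influenza pandemic (agent-supplied).")
kg.add_node("excess_mortality_dataset", kind="suggested_dataset")

kg.add_edge("life_expectancy_world", "dip_1918_1920", relation="has_pattern")
kg.add_edge("dip_1918_1920", "fact_influenza_pandemic", relation="explained_by")
kg.add_edge("dip_1918_1920", "excess_mortality_dataset", relation="see_also")

# Everything attached to a pattern, ready to be drawn as annotations next to the chart.
for _, node, data in kg.out_edges("dip_1918_1920", data=True):
    print(data["relation"], "->", node)
```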