Computers & Graphics-UK: Latest Publications

Enhancing Visual Analytics systems with guidance: A task-driven methodology
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-10, DOI: 10.1016/j.cag.2024.104121
Ignacio Pérez-Messina, Davide Ceneda, Silvia Miksch
{"title":"Enhancing Visual Analytics systems with guidance: A task-driven methodology","authors":"Ignacio Pérez-Messina,&nbsp;Davide Ceneda,&nbsp;Silvia Miksch","doi":"10.1016/j.cag.2024.104121","DOIUrl":"10.1016/j.cag.2024.104121","url":null,"abstract":"<div><div>Enhancing Visual Analytics (VA) systems with guidance, such as the automated provision of data-driven suggestions and answers to the user’s task, is becoming increasingly important and common. However, how to design such systems remains a challenging task. We present a methodology to aid and structure the design of guidance for enhancing VA solutions consisting of four steps: (S1) defining the target of analysis, (S2) identifying the user tasks, (S3) describing the guidance tasks, and (S4) placing guidance. In summary, our proposed methodology specifies a space of possible user tasks and maps them to the corresponding space of guidance tasks, using recent abstract task typologies for guidance and visualization. We exemplify this methodology through two case studies from the literature: <em>Overview</em>, a system for exploring and labeling document collections aimed at journalists, and <em>DoRIAH</em>, a system for historical imagery analysis. We show how our methodology enriches existing VA solutions with guidance and provides a structured way to design guidance in complex VA scenarios.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104121"},"PeriodicalIF":2.5,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
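The four-step structure (S1-S4) can be pictured as a mapping from a space of user tasks to a space of guidance tasks. The Python sketch below is purely illustrative: the task names, the mapping table, and the placement value are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class GuidanceSpec:
    target: str          # S1: target of analysis
    user_task: str       # S2: identified user task
    guidance_task: str   # S3: corresponding guidance task
    placement: str       # S4: where the guidance appears in the interface

# S2 -> S3: a hand-written, hypothetical mapping from user tasks to guidance tasks.
USER_TO_GUIDANCE = {
    "explore documents": "suggest relevant clusters",
    "label documents": "recommend candidate labels",
    "compare time ranges": "highlight notable differences",
}

def design_guidance(target: str, user_tasks: list[str]) -> list[GuidanceSpec]:
    """Walk the S1-S4 steps for each user task and return one spec per task."""
    specs = []
    for task in user_tasks:
        guidance = USER_TO_GUIDANCE.get(task, "provide orientation cues")
        specs.append(GuidanceSpec(target, task, guidance, placement="side panel"))
    return specs

if __name__ == "__main__":
    for spec in design_guidance("document collection", ["explore documents", "label documents"]):
        print(spec)
```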
Enhancing semantic mapping in text-to-image diffusion via Gather-and-Bind
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-07, DOI: 10.1016/j.cag.2024.104118
Huan Fu, Guoqing Cheng
{"title":"Enhancing semantic mapping in text-to-image diffusion via Gather-and-Bind","authors":"Huan Fu,&nbsp;Guoqing Cheng","doi":"10.1016/j.cag.2024.104118","DOIUrl":"10.1016/j.cag.2024.104118","url":null,"abstract":"<div><div>Text-to-image synthesis is a challenging task that aims to generate realistic and diverse images from natural language descriptions. However, existing text-to-image diffusion models (e.g., Stable Diffusion) sometimes fail to satisfy the semantic descriptions of the users, especially when the prompts contain multiple concepts or modifiers such as colors. By visualizing the cross-attention maps of the Stable Diffusion model during the denoising process, we find that one of the concepts has a very scattered attention map, which cannot form a whole and gradually gets ignored. Moreover, the attention maps of the modifiers are hard to overlap with the corresponding concepts, resulting in incorrect semantic mapping. To address this issue, we propose a Gather-and-Bind method that intervenes in the cross-attention maps during the denoising process to alleviate the catastrophic forgetting and attribute binding problems without any pre-training. Specifically, we first use information entropy to measure the dispersion degree of the cross-attention maps and construct an information entropy loss to gather these scattered attention maps, which eventually captures all the concepts in the generated output. Furthermore, we construct an attribute binding loss that minimizes the distance between the attention maps of the attributes and their corresponding concepts, which enables the model to establish correct semantic mapping and significantly improves the performance of the baseline model. We conduct extensive experiments on public datasets and demonstrate that our method can better capture the semantic information of the input prompts. Code is available at <span><span>https://github.com/huan085128/Gather-and-Bind</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104118"},"PeriodicalIF":2.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
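The two losses described in the abstract, an entropy-based "gather" loss over a concept's cross-attention map and a "bind" loss pulling an attribute's map toward its concept's map, can be sketched as follows. This is a minimal, assumption-laden illustration, not the authors' code (which is at the GitHub link above); the tensor shapes, the KL-based distance, and the update comment are illustrative choices.

```python
import torch
import torch.nn.functional as F

def gather_loss(attn_map: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """attn_map: (H, W) cross-attention map for one concept token.
    Treat it as a probability distribution and return its entropy;
    lower entropy means the attention mass is less scattered."""
    p = attn_map.flatten()
    p = p / (p.sum() + eps)
    return -(p * (p + eps).log()).sum()

def bind_loss(attr_map: torch.Tensor, concept_map: torch.Tensor,
              eps: float = 1e-8) -> torch.Tensor:
    """Distance between an attribute token's attention map and its concept's map,
    here a KL divergence between the two normalized maps."""
    q = attr_map.flatten() + eps
    q = q / q.sum()
    p = concept_map.flatten() + eps
    p = p / p.sum()
    return F.kl_div(q.log(), p, reduction="sum")  # KL(p || q)

# During denoising one would backpropagate the combined loss into the latent at
# selected timesteps to steer the attention maps, roughly:
#   loss = sum(gather_loss(m) for m in concept_maps) + bind_loss(attr_map, concept_map)
#   latent = latent - step_size * torch.autograd.grad(loss, latent)[0]
```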
Learning geometric complexes for 3D shape classification
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-07, DOI: 10.1016/j.cag.2024.104119
Prachi Kudeshia, Muhammad Altaf Agowun, Jiju Poovvancheri
{"title":"Learning geometric complexes for 3D shape classification","authors":"Prachi Kudeshia,&nbsp;Muhammad Altaf Agowun,&nbsp;Jiju Poovvancheri","doi":"10.1016/j.cag.2024.104119","DOIUrl":"10.1016/j.cag.2024.104119","url":null,"abstract":"<div><div>Geometry and topology are vital elements in discerning and describing the shape of an object. Geometric complexes constructed on the point cloud of a 3D object capture the geometry as well as topological features of the underlying shape space. Leveraging this aspect of geometric complexes, we present an attention-based dual stream graph neural network (DS-GNN) for 3D shape classification. In the first stream of DS-GNN, we introduce spiked skeleton complex (SSC) for learning the shape patterns through comprehensive feature integration of the point cloud’s core structure. SSC is a novel and concise geometric complex comprising principal plane-based cluster centroids complemented with per-centroid spatial locality information. The second stream of DS-GNN consists of alpha complex which facilitates the learning of geometric patterns embedded in the object shapes via higher dimensional simplicial attention. To evaluate the model’s response to different shape topologies, we perform a persistent homology-based object segregation that groups the objects based on the underlying topological space characteristics quantified through the second Betti number. Our experimental study on benchmark datasets such as ModelNet40 and ScanObjectNN shows the potential of the proposed GNN for the classification of 3D shapes with different topologies and offers an alternative to the current evaluation practices in this domain.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104119"},"PeriodicalIF":2.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
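As a small illustration of the topological side of the paper, the sketch below builds an alpha complex on a point cloud and reads off its Betti numbers (the paper groups objects by the second Betti number). It assumes the gudhi library is available and is not the DS-GNN implementation.

```python
import numpy as np
import gudhi  # assumption: the GUDHI topology library is installed

def betti_numbers_of_point_cloud(points: np.ndarray):
    """points: (N, 3) array sampled from an object's surface.
    Returns [b0, b1, b2, ...] of the alpha complex built on the points."""
    alpha = gudhi.AlphaComplex(points=points)
    simplex_tree = alpha.create_simplex_tree()
    simplex_tree.persistence()          # persistence must be computed before Betti numbers
    return simplex_tree.betti_numbers()

if __name__ == "__main__":
    # Points on a sphere; a dense enough sample gives roughly b0=1, b1=0, b2=1.
    pts = np.random.randn(2000, 3)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    print(betti_numbers_of_point_cloud(pts))
```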
RenalViz: Visual analysis of cohorts with chronic kidney disease
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-07, DOI: 10.1016/j.cag.2024.104120
Markus Höhn, Sarah Schwindt-Drews, Sara Hahn, Sammy Patyna, Stefan Büttner, Jörn Kohlhammer
{"title":"RenalViz: Visual analysis of cohorts with chronic kidney disease","authors":"Markus Höhn ,&nbsp;Sarah Schwindt-Drews ,&nbsp;Sara Hahn ,&nbsp;Sammy Patyna ,&nbsp;Stefan Büttner ,&nbsp;Jörn Kohlhammer","doi":"10.1016/j.cag.2024.104120","DOIUrl":"10.1016/j.cag.2024.104120","url":null,"abstract":"<div><div>Chronic Kidney Disease (CKD) is a prominent health problem. Progressive CKD leads to impaired kidney function with decreased ability to filter the patients’ blood, concluding in multiple complications, like heart disease and ultimately death from the disease. In previous work, we developed a prototype to support nephrologists in gaining an overview of their CKD patients. The prototype visualizes the patients in cohorts according to their pairwise similarity. The user can interactively modify the similarity by changing the underlying weights of the included features. The work in this paper expands upon this previous work by the enlargement of the data set and the user interface of the application. With a focus on the distinction between individual CKD classes we introduce a color scheme used throughout all visualization. Furthermore, the visualizations were adopted to display the data of several patients at once. This also involved the option to align the visualizations to sentinel points, such as the onset of a particular CKD stage, in order to quantify the progression of all selected patients in relation to this event. The prototype was developed in response to the identified potential for improvement of the earlier application. An additional user study concerning the intuitiveness and usability confirms good results for the prototype and leads to the assessment of an easy-to-use approach.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104120"},"PeriodicalIF":2.5,"publicationDate":"2024-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
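The interactively re-weighted patient similarity the cohort view rests on can be illustrated with a small sketch. This is an assumption about the general approach, not RenalViz's implementation; the feature set and the distance-to-similarity mapping are placeholders.

```python
import numpy as np

def weighted_similarity(patients: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """patients: (N, F) feature matrix; weights: (F,) user-chosen feature weights.
    Returns an (N, N) similarity matrix derived from weighted Euclidean distance."""
    w = weights / weights.sum()
    diff = patients[:, None, :] - patients[None, :, :]   # (N, N, F) pairwise differences
    dist = np.sqrt((w * diff ** 2).sum(axis=-1))         # weighted distances
    return 1.0 / (1.0 + dist)                            # map distances to (0, 1]

if __name__ == "__main__":
    X = np.random.rand(5, 3)                             # 5 patients, 3 placeholder features
    print(weighted_similarity(X, np.array([2.0, 1.0, 1.0])))  # first feature counts double
```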
Adaptive 360° video timeline exploration in VR environment
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-04, DOI: 10.1016/j.cag.2024.104108
Mengmeng Yu, Chongke Bi
{"title":"Adaptive 360° video timeline exploration in VR environment","authors":"Mengmeng Yu,&nbsp;Chongke Bi","doi":"10.1016/j.cag.2024.104108","DOIUrl":"10.1016/j.cag.2024.104108","url":null,"abstract":"<div><div>Timeline control is a crucial interaction during video viewing, aiding users in quickly locating or jumping to specific points in the video playback, especially when dealing with lengthy content. 360°videos, with their ability to offer an all-encompassing view, have gradually gained popularity, providing a more immersive experience compared to videos with a single perspective. While most 360°videos are currently displayed on two-dimensional screens, the timeline design has largely remained similar to that of conventional videos. However, virtual reality (VR) headsets provide a more immersive viewing experience for 360°videos and offer additional dimensions for timeline design. In this paper, we initially explored 6 timeline design styles by varying the shape and interaction distance of the timeline, aiming to discover designs more suitable for the VR environment of 360°videos. Subsequently, we introduced an adaptive timeline display mechanism based on eye gaze sequences to optimize the timeline, addressing issues like obstructing the view and causing distractions when the timeline is consistently visible. Through two studies, we first demonstrated that in the 360°space, the three-dimensional timeline performs better in terms of usability than the two-dimensional one, and the reachable timeline has advantages in performance and experience over the distant one. Secondly, we verified that, without compromising interaction efficiency and system usability, the adaptive display timeline gained more user preference due to its accurate prediction of user timeline needs.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104108"},"PeriodicalIF":2.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
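A gaze-driven show/hide heuristic of the kind the adaptive timeline suggests could look like the sketch below. This is hypothetical logic, not the study's mechanism; the dwell threshold, sample window, and timeout are invented parameters.

```python
from collections import deque
import time

class AdaptiveTimeline:
    """Show the timeline when gaze dwells on its region, hide it after a timeout."""

    def __init__(self, dwell_samples: int = 30, hide_after_s: float = 2.0):
        self.recent_hits = deque(maxlen=dwell_samples)  # 1 if a gaze sample hit the region
        self.hide_after_s = hide_after_s
        self.visible = False
        self.last_hit_time = 0.0

    def update(self, gaze_on_timeline_region: bool) -> bool:
        now = time.monotonic()
        self.recent_hits.append(1 if gaze_on_timeline_region else 0)
        if gaze_on_timeline_region:
            self.last_hit_time = now
        if sum(self.recent_hits) > 0.6 * self.recent_hits.maxlen:
            self.visible = True                      # sustained dwell: show the timeline
        elif self.visible and now - self.last_hit_time > self.hide_after_s:
            self.visible = False                     # gaze left the region: hide it again
        return self.visible
```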
CGLight: An effective indoor illumination estimation method based on improved convmixer and GauGAN
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-04, DOI: 10.1016/j.cag.2024.104122
Yang Wang, Shijia Song, Lijun Zhao, Huijuan Xia, Zhenyu Yuan, Ying Zhang
{"title":"CGLight: An effective indoor illumination estimation method based on improved convmixer and GauGAN","authors":"Yang Wang ,&nbsp;Shijia Song ,&nbsp;Lijun Zhao ,&nbsp;Huijuan Xia ,&nbsp;Zhenyu Yuan ,&nbsp;Ying Zhang","doi":"10.1016/j.cag.2024.104122","DOIUrl":"10.1016/j.cag.2024.104122","url":null,"abstract":"<div><div>Illumination consistency is a key factor for seamlessly integrating virtual objects with real scenes in augmented reality (AR) systems. High dynamic range (HDR) panoramic images are widely used to estimate scene lighting accurately. However, generating environment maps requires complex deep network architectures, which cannot operate on devices with limited memory space. To address this issue, this paper proposes CGLight, an effective illumination estimation method that predicts HDR panoramic environment maps from a single limited field-of-view (LFOV) image. We first design a CMAtten encoder to extract features from input images, which learns the spherical harmonic (SH) lighting representation with fewer model parameters. Guided by the lighting parameters, we train a generative adversarial network (GAN) to generate HDR environment maps. In addition, to enrich lighting details and reduce training time, we specifically introduce the color consistency loss and independent discriminator, considering the impact of color properties on the lighting estimation task while improving computational efficiency. Furthermore, the effectiveness of CGLight is verified by relighting virtual objects using the predicted environment maps, and the root mean square error and angular error are 0.0494 and 4.0607 in the gray diffuse sphere, respectively. Extensive experiments and analyses demonstrate that CGLight achieves a balance between indoor illumination estimation accuracy and resource efficiency, attaining higher accuracy with nearly 4 times fewer model parameters than the ViT-B16 model.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104122"},"PeriodicalIF":2.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
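The compact spherical harmonic (SH) lighting representation mentioned in the abstract can be illustrated with standard degree-2 real SH. The sketch below evaluates RGB radiance from 9 SH coefficients per channel; the 9x3 coefficient layout is an assumption, not the paper's exact format.

```python
import numpy as np

def sh_basis(d: np.ndarray) -> np.ndarray:
    """d: unit direction (3,). Returns the 9 real SH basis values up to degree 2."""
    x, y, z = d
    return np.array([
        0.282095,                                        # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,        # l = 1
        1.092548 * x * y, 1.092548 * y * z,              # l = 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def shade(sh_coeffs: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """sh_coeffs: (9, 3) RGB coefficients; returns RGB radiance along `direction`."""
    d = direction / np.linalg.norm(direction)
    return sh_basis(d) @ sh_coeffs

if __name__ == "__main__":
    coeffs = np.random.rand(9, 3) * 0.1   # placeholder coefficients
    print(shade(coeffs, np.array([0.0, 1.0, 0.0])))
```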
Editorial Note Computers & Graphics Issue 124
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-11-01, DOI: 10.1016/j.cag.2024.104117
Joaquim Jorge (Editor-in-Chief)
{"title":"Editorial Note Computers & Graphics Issue 124","authors":"Joaquim Jorge (Editor-in-Chief)","doi":"10.1016/j.cag.2024.104117","DOIUrl":"10.1016/j.cag.2024.104117","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104117"},"PeriodicalIF":2.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142661622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foreword to the special section on Symposium on Virtual and Augmented Reality 2024 (SVR 2024)
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-10-31, DOI: 10.1016/j.cag.2024.104111
Rosa Costa, Cléber Corrêa, Skip Rizzo
{"title":"Foreword to the special section on Symposium on Virtual and Augmented Reality 2024 (SVR 2024)","authors":"Rosa Costa,&nbsp;Cléber Corrêa,&nbsp;Skip Rizzo","doi":"10.1016/j.cag.2024.104111","DOIUrl":"10.1016/j.cag.2024.104111","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104111"},"PeriodicalIF":2.5,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DVRT: Design and evaluation of a virtual reality drone programming teaching system
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-10-29, DOI: 10.1016/j.cag.2024.104114
Zean Jin, Yulong Bai, Wei Song, Qinghe Yu, Xiaoxin Yue, Xiang Jia
{"title":"DVRT: Design and evaluation of a virtual reality drone programming teaching system","authors":"Zean Jin,&nbsp;Yulong Bai,&nbsp;Wei Song,&nbsp;Qinghe Yu,&nbsp;Xiaoxin Yue,&nbsp;Xiang Jia","doi":"10.1016/j.cag.2024.104114","DOIUrl":"10.1016/j.cag.2024.104114","url":null,"abstract":"<div><div>Virtual Reality (VR) is an immersive virtual environment generated through computer technology. VR teaching, by utilizing an immersive learning model, offers innovative learning methods for Science, Technology, Engineering and Mathematics (STEM) education as well as programming education. This study developed a Drone Virtual Reality Teaching (DVRT) system aimed at beginners in drone operation and programming, with the goal of addressing the challenges in traditional drone and programming education, such as difficulty in engaging students and lack of practicality. Through the system's curriculum, students learn basic drone operation skills and advanced programming techniques. We conducted a course experiment primarily targeting undergraduate students who are beginners in drone operation. The test results showed that most students achieved scores above 4 out of 5, indicating that DVRT can effectively promote the development of users' comprehensive STEM literacy and computational thinking, thereby demonstrating the great potential of VR technology in STEM education. Through this innovative teaching method, students not only gain knowledge but also enjoy the fun of immersive learning.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104114"},"PeriodicalIF":2.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142594042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast spline collision detection (FSCD) algorithm for solving multiple contacts in real-time
IF 2.5 | Q4 | Computer Science
Computers & Graphics-UK, Pub Date: 2024-10-20, DOI: 10.1016/j.cag.2024.104107
Lucas Zanusso Morais, Marcelo Gomes Martins, Rafael Piccin Torchelsen, Anderson Maciel, Luciana Porcher Nedel
{"title":"Fast spline collision detection (FSCD) algorithm for solving multiple contacts in real-time","authors":"Lucas Zanusso Morais ,&nbsp;Marcelo Gomes Martins ,&nbsp;Rafael Piccin Torchelsen ,&nbsp;Anderson Maciel ,&nbsp;Luciana Porcher Nedel","doi":"10.1016/j.cag.2024.104107","DOIUrl":"10.1016/j.cag.2024.104107","url":null,"abstract":"<div><div>Collision detection has been widely studied in the last decades. While plenty of solutions exist, certain simulation scenarios are still challenging when permanent contact and deformable bodies are involved. In this paper, we introduce a novel approach based on volumetric splines that is applicable to complex deformable tubes, such as in the simulation of colonoscopy and other endoscopies. The method relies on modeling radial control points, extracting surface information from a triangle mesh, and storing the volume information around a spline path. Such information is later used to compute the intersection between the object surfaces under the assumption of spatial coherence between neighboring splines. We analyze the method’s performance in terms of both speed and accuracy, comparing it with previous works. Results show that our method solves collisions between complex meshes with over 300k triangles, generating over 1,000 collisions per frame between objects while maintaining an average time of under 1ms without compromising accuracy.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104107"},"PeriodicalIF":2.5,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
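The core idea of storing radial extents along a spline path and testing penetration against them can be reduced to a simple point-versus-tube check, sketched below. This is an illustrative reduction under assumed data (polyline samples and per-sample radii), not the FSCD implementation.

```python
import numpy as np

def point_vs_tube(p: np.ndarray, centerline: np.ndarray, radii: np.ndarray) -> bool:
    """p: (3,) query point; centerline: (N, 3) samples along the spline path;
    radii: (N,) radial extents stored per sample. True if p is inside the tube."""
    for i in range(len(centerline) - 1):
        a, b = centerline[i], centerline[i + 1]
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)  # project onto segment
        closest = a + t * ab
        r = (1.0 - t) * radii[i] + t * radii[i + 1]                # interpolate radius
        if np.linalg.norm(p - closest) <= r:
            return True
    return False

if __name__ == "__main__":
    center = np.stack([np.linspace(0.0, 1.0, 20), np.zeros(20), np.zeros(20)], axis=1)
    print(point_vs_tube(np.array([0.5, 0.05, 0.0]), center, np.full(20, 0.1)))  # True
```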