Computers & Graphics-UK: Latest Articles

Choreographing multi-degree of freedom behaviors in large-scale crowd simulations
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-23 · DOI: 10.1016/j.cag.2024.104051
Kexiang Huang, Gangyi Ding, Dapeng Yan, Ruida Tang, Tianyu Huang, Nuria Pelechano

Abstract: This study introduces a novel framework for choreographing multi-degree of freedom (MDoF) behaviors in large-scale crowd simulations. The framework integrates multi-objective optimization with spatio-temporal ordering to effectively generate and control diverse MDoF crowd behavior states. We propose a set of evaluation criteria for assessing the aesthetic quality of crowd states and employ multi-objective optimization to produce crowd states that meet these criteria. Additionally, we introduce time offset functions and interpolation progress functions to perform complex and diversified behavior-state interpolations. Furthermore, we design a user-centric interaction module that allows intuitive and flexible adjustment of crowd behavior states through sketching, spline curves, and other interactive means. Qualitative tests and quantitative experiments on the evaluation criteria demonstrate the effectiveness of this method in generating and controlling MDoF behaviors in crowds. Finally, case studies, including real-world applications in the Opening Ceremony of the 2022 Beijing Winter Olympics, validate the practicality and adaptability of this approach.

Citations: 0
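The paper's time offset and interpolation progress functions are not specified in the abstract; the sketch below is only a minimal NumPy illustration of the idea (the smoothstep easing and all names are assumptions, not the authors' functions). Per-agent offsets delay each agent's progress, so a single global clock yields staggered, wave-like transitions between two choreographed states.

```python
import numpy as np

def ease_in_out(p):
    """Smoothstep easing -- one possible interpolation progress function."""
    return p * p * (3.0 - 2.0 * p)

def interpolate_states(start, target, t, offsets, duration):
    """Blend each agent from its start state to its target state.

    offsets[i] delays agent i's transition, so an offset field that grows
    with x-position produces a wave sweeping across the crowd.
    """
    p = np.clip((t - offsets) / duration, 0.0, 1.0)  # per-agent progress in [0, 1]
    w = ease_in_out(p)[:, None]                      # easing, broadcast over DoFs
    return (1.0 - w) * start + w * target

# 100 agents, 3 DoF each (e.g. x, y, heading)
rng = np.random.default_rng(0)
start = rng.uniform(0, 1, (100, 3))
target = rng.uniform(0, 1, (100, 3))
offsets = np.linspace(0.0, 2.0, 100)  # left-to-right wave of start times
mid = interpolate_states(start, target, t=1.5, offsets=offsets, duration=1.0)
```

Deriving the offset field from a sketch or spline, as in the paper's interaction module, would let a designer paint where the transition begins.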
US-Net: U-shaped network with Convolutional Attention Mechanism for ultrasound medical images
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-23 · DOI: 10.1016/j.cag.2024.104054
Xiaoyu Xie, Pingping Liu, Yijun Lang, Zhenjie Guo, Zhongxi Yang, Yuhao Zhao

Abstract: Ultrasound imaging, characterized by low contrast, high noise, and interference from surrounding tissues, poses significant challenges for lesion segmentation. To tackle these issues, we introduce an enhanced U-shaped network that incorporates several novel features for precise, automated segmentation. First, our model uses a convolution-based self-attention mechanism to establish long-range dependencies in feature maps, crucial for small-dataset applications, together with a soft thresholding method for noise reduction. Second, we employ multi-sized convolutional kernels to enrich feature processing, coupled with curvature calculations that accentuate edge details via a soft-attention approach. Third, an advanced skip-connection strategy is implemented in the UNet architecture, integrating information entropy to assess and exploit texture-rich channels, thereby improving semantic detail in the encoder before merging with decoder outputs. We validated our approach on a newly curated dataset, VPUSI (Vascular Plaques Ultrasound Images), alongside the established datasets BUSI, TN3K, and DDTI. Comparative experiments on these datasets show that our model outperforms existing state-of-the-art techniques in segmentation accuracy.

Citations: 0
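The abstract pairs its attention mechanism with soft thresholding for noise reduction. Soft thresholding itself is a standard shrinkage operator; a minimal NumPy sketch follows (networks of this kind typically learn the threshold per channel, so the fixed `tau` here is an illustrative assumption, not US-Net's mechanism):

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: zero out entries with magnitude below tau and
    shrink the remaining entries toward zero by tau -- a standard
    denoising operator on feature maps or wavelet coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

feat = np.array([-2.0, -0.1, 0.05, 0.5, 3.0])
# small-magnitude (likely noise) entries are zeroed; the rest shrink by tau
print(soft_threshold(feat, tau=0.2))
```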
ShapeBench: A new approach to benchmarking local 3D shape descriptors
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-22 · DOI: 10.1016/j.cag.2024.104052
Bart Iver van Blokland

Abstract: The ShapeBench evaluation methodology is proposed as an extension to the popular Area Under the Precision-Recall Curve (PRC/AUC) for measuring the matching performance of local 3D shape descriptors. It is observed that the PRC inadequately accounts for other similar surfaces in the same or different objects when determining whether a candidate match is a true positive. The novel Descriptor Distance Index (DDI) metric is introduced to address this limitation. In contrast to previous evaluation methodologies, which identify entire objects in a given scene, the DDI metric measures descriptor performance by analysing point-to-point distances. The ShapeBench methodology is also more scalable than previous approaches through its use of procedural generation. The benchmark is used to evaluate both old and new descriptors. The results produced by the implementation of the benchmark are fully replicable and are made publicly available.

Citations: 0
Graph Transformer for 3D point clouds classification and semantic segmentation
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-22 · DOI: 10.1016/j.cag.2024.104050
Wei Zhou, Qian Wang, Weiwei Jin, Xinzhe Shi, Ying He

Abstract: Recently, graph-based and Transformer-based deep learning have demonstrated excellent performance on various point cloud tasks. Most existing graph-based methods rely on a static graph, taking a fixed input to establish graph relations. Moreover, many graph-based methods aggregate neighboring features by max or average pooling, so that either a single neighboring point determines the centroid's feature or all neighboring points influence it equally, ignoring the correlations and differences between points. Most Transformer-based approaches extract point cloud features through global attention and lack feature learning on local neighborhoods. To solve these issues, we propose a new feature extraction block named Graph Transformer and construct a 3D point cloud learning network called GTNet that learns features of point clouds in both local and global patterns. Graph Transformer integrates the advantages of graph-based and Transformer-based methods and consists of a Local Transformer, which uses intra-domain cross-attention, and a Global Transformer, which uses global self-attention. Finally, we apply GTNet to shape classification, part segmentation, and semantic segmentation tasks. The experimental results show that our model achieves good learning and prediction ability on most tasks. The source code and pre-trained model of GTNet will be released at https://github.com/NWUzhouwei/GTNet.

Citations: 0
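The abstract's complaint about max/average pooling is that either one neighbor or all neighbors equally determine the centroid's feature. Attention-weighted aggregation, the core idea behind a local Transformer block, can be sketched as below; this is a generic dot-product formulation, not GTNet's exact intra-domain cross-attention, and the learned query/key/value projections are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_aggregate(center, neighbors):
    """Aggregate neighbor features with weights derived from similarity
    to the centroid, so each neighbor contributes in proportion to its
    relevance (unlike max pooling, where one neighbor wins, or mean
    pooling, where all neighbors count equally)."""
    scores = neighbors @ center / np.sqrt(center.shape[-1])  # (k,)
    w = softmax(scores)                                      # (k,) sums to 1
    return w @ neighbors                                     # (d,)

center = np.array([1.0, 0.0, 0.0, 0.0])
neighbors = np.stack([center, -center, np.zeros(4)])  # aligned, opposed, neutral
agg = attention_aggregate(center, neighbors)
```

The aligned neighbor receives the largest weight, so the aggregate keeps a positive component along the centroid's feature direction.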
Analyzing the effect of undermining on suture forces during simulated skin flap surgeries with a three-dimensional finite element method
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-22 · DOI: 10.1016/j.cag.2024.104057
Wenzhangzhi Guo, Allison Tsz Kwan Lau, Joel C. Davies, Vito Forte, Eitan Grinspun, Lueder Alexander Kahrs

Abstract: Skin flaps are common procedures used by surgeons to cover an excised area during the reconstruction of a defect. Coming up with an optimal design for a given patient is often challenging. In this paper, we set up a simulation system based on the finite element method for one of the most common flap types, the rhomboid flap. Instead of using the standard 2D planar patch, we construct a 3D patch with multiple layers, which allows us to investigate the impact of different undermining areas and depths. We compare the suture forces for each case and identify the vertices with the largest suture force. The shape of the final suture line is also visualized for each case, an important clue when deciding on the optimal skin flap orientation according to medical textbooks. We found that under the optimal undermining setup, the maximum suture force is around 0.7 N at the top of the undermined layer and 1.0 N at its bottom. When measuring differences in final suture-line shape, the maximum normalized Hausdorff distance is 0.099, which suggests that different undermining regions can have a significant impact on the shape of the suture line, especially in the tail region. After analyzing the suture force plots, we provide recommendations on the optimal undermining region for rhomboid flaps.

Citations: 0
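The abstract compares suture-line shapes via a normalized Hausdorff distance. A sketch of that metric for two sampled curves follows; normalizing by the diagonal of the joint bounding box is an assumption for illustration, since the abstract does not state the paper's exact scale factor.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n,2) and b (m,2):
    the largest distance from any point in one set to the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def normalized_hausdorff(a, b):
    """Hausdorff distance divided by the joint bounding-box diagonal,
    making shape differences comparable across simulations (the
    normalization choice here is an assumption)."""
    pts = np.vstack([a, b])
    scale = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
    return hausdorff(a, b) / scale

x = np.linspace(0, 1, 50)
line = np.stack([x, np.zeros(50)], axis=1)              # straight suture line
bowed = np.stack([x, 0.2 * np.sin(np.pi * x)], axis=1)  # bowed suture line
v = normalized_hausdorff(line, bowed)
```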
Foreword to the special section on Shape Modeling International 2024 (SMI2024)
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-22 · DOI: 10.1016/j.cag.2024.104047
Georges-Pierre Bonneau, Tao Ju, Zichun Zhong

Citations: 0
OpenECAD: An efficient visual language model for editable 3D-CAD design
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-22 · DOI: 10.1016/j.cag.2024.104048
Zhe Yuan, Jianqi Shi, Yanhong Huang

Abstract: Computer-aided design (CAD) tools are used throughout the manufacturing industry to model everything from cups to spacecraft. These programs are complex and typically require years of training and experience to master. Structured and well-constrained 2D sketches and 3D constructions are crucial components of CAD modeling, and a well-executed CAD model can be seamlessly integrated into the manufacturing process, enhancing production efficiency. Deep generative models of 3D shapes and 3D object reconstruction models have garnered significant research interest. However, most of these models produce discrete forms of 3D objects that are not editable, and the few models based on CAD operations often have substantial input restrictions. In this work, we fine-tuned pre-trained models to create OpenECAD models (0.55B, 0.89B, 2.4B, and 3.1B), leveraging the visual, logical, coding, and general capabilities of visual language models. OpenECAD models take images of 3D designs as input and generate highly structured 2D sketches and 3D construction commands, ensuring that the designs are editable. These outputs can be used directly with existing CAD tools' APIs to generate project files. To train our network, we created a series of OpenECAD datasets, derived from existing public CAD datasets and adjusted and augmented to meet the specific requirements of vision language model (VLM) training. Additionally, we introduce an approach that uses dependency relationships to define and generate sketches, further enriching the content and functionality of the datasets.

Citations: 0
Foreword to the Special Section on XR Technologies for Healthcare and Wellbeing
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-20 · DOI: 10.1016/j.cag.2024.104046
Anderson Maciel, Matias Volonte, Helena Mentis

Citations: 0
LSGRNet: Local Spatial Latent Geometric Relation Learning Network for 3D point cloud semantic segmentation
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-20 · DOI: 10.1016/j.cag.2024.104053
Liguo Luo, Jian Lu, Xiaogai Chen, Kaibing Zhang, Jian Zhou

Abstract: In recent years, the Transformer model has demonstrated a remarkable ability to capture long-range dependencies and improve point cloud segmentation performance. However, the localized regions produced by conventional sampling architectures break up the structural information of instances, and the potential geometric relationships between local regions remain unexplored. To address this issue, this paper proposes a Local Spatial Latent Geometric Relation Learning Network (LSGRNet), which takes the geometric properties of point clouds as a reference. Specifically, spatial transformation and gradient computation are performed on the local point cloud to uncover potential geometric relationships within the local neighborhood. Furthermore, a local relation aggregator based on semantic and geometric relationships is constructed to enable the interaction of spatial geometric structure and information within the local neighborhood. Meanwhile, a boundary interaction feature learning module is employed to learn the boundary information of the point cloud, aiming to better describe the local structure. The experimental results show that the proposed LSGRNet exhibits excellent segmentation performance in benchmark tests on the indoor datasets S3DIS and ScanNetV2, as well as the outdoor datasets SemanticKITTI and Semantic3D.

Citations: 0
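The abstract does not detail its spatial transformation and gradient computation. A common way to expose latent local geometry of this kind is PCA of the k-nearest-neighbor covariance, sketched below; this is an illustrative stand-in, not the paper's operator. The eigenvalues of the local covariance encode linearity and planarity, and the smallest-variance eigenvector approximates the surface normal.

```python
import numpy as np

def local_geometry(points, k=8):
    """Per-point normals from k-nearest-neighbor PCA: for each point,
    eigendecompose the covariance of its neighborhood and take the
    eigenvector of the smallest eigenvalue as the normal estimate."""
    n = len(points)
    d2 = ((points[:, None] - points[None, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbors (incl. self)
    normals = np.empty((n, 3))
    for i in range(n):
        nb = points[idx[i]]
        cov = np.cov((nb - nb.mean(0)).T)  # 3x3 neighborhood covariance
        w, v = np.linalg.eigh(cov)         # eigenvalues in ascending order
        normals[i] = v[:, 0]               # smallest-variance direction
    return normals

# Points sampled on the z = 0 plane: estimated normals should align with +/- z.
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(0, 1, (30, 2)), np.zeros(30)]
nrm = local_geometry(pts)
```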
An impartial framework to investigate demosaicking input embedding options
IF 2.5 · CAS Zone 4 · Computer Science
Computers & Graphics-UK · Pub Date: 2024-08-16 · DOI: 10.1016/j.cag.2024.104044
Yan Niu, Xuanchen Li, Yang Tao, Bo Zhao

Abstract: Convolutional Neural Networks (CNNs) have proven highly effective for demosaicking, transforming raw Color Filter Array (CFA) sensor samples into standard RGB images. Directly applying convolution to the CFA tensor can lead to misinterpretation of the color context, so existing demosaicking networks typically embed the CFA tensor into Euclidean space before convolution. The most prevalent embedding options are Reordering and Pre-interpolation. However, it remains unclear which option is more advantageous for demosaicking, and no existing demosaicking network is suitable for a fair comparison, so in practice the choice between the two embeddings is often based on intuition and heuristics. This paper addresses the non-comparability of the two options and investigates whether pre-interpolation contributes additional knowledge to the demosaicking network. Based on rigorous mathematical derivation, we design pairs of end-to-end fully convolutional evaluation networks, ensuring that the performance difference between each pair of networks can be attributed solely to their differing CFA embedding strategies. Under strictly fair comparison conditions, we measure the performance contrast between the two embedding options across various scenarios. Our comprehensive evaluation reveals that the prior knowledge introduced by pre-interpolation benefits lightweight models, and that pre-interpolation enhances robustness to imaging artifacts for larger models. Our findings offer practical guidelines for designing imaging software or Image Signal Processors (ISPs) for RGB cameras.

Citations: 0
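The two embedding options under comparison can be made concrete for an RGGB Bayer pattern. Both functions below are minimal illustrative sketches, not the paper's evaluation networks: Reordering packs the four CFA phases into separate channels at half resolution, while Pre-interpolation fills in missing samples before convolution (shown here for the green channel only, with plain 4-neighbor averaging; real ISPs use edge-aware interpolation).

```python
import numpy as np

def reorder(cfa):
    """Pack an RGGB Bayer mosaic (H, W) into a half-resolution 4-channel
    tensor (H/2, W/2, 4), one channel per CFA phase -- 'Reordering'."""
    return np.stack([cfa[0::2, 0::2],   # R
                     cfa[0::2, 1::2],   # G1
                     cfa[1::2, 0::2],   # G2
                     cfa[1::2, 1::2]],  # B
                    axis=-1)

def pre_interpolate_green(cfa):
    """'Pre-interpolation' sketch: fill green at non-green sites by
    averaging the available green values in the 4-neighborhood."""
    h, w = cfa.shape
    g = np.zeros((h, w))
    gmask = np.zeros((h, w), bool)
    gmask[0::2, 1::2] = True  # G1 sites of the RGGB pattern
    gmask[1::2, 0::2] = True  # G2 sites
    g[gmask] = cfa[gmask]
    pad = np.pad(g, 1)
    neigh = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
    cnt = np.pad(gmask, 1)
    n = (cnt[:-2, 1:-1].astype(int) + cnt[2:, 1:-1]
         + cnt[1:-1, :-2] + cnt[1:-1, 2:])
    g[~gmask] = neigh[~gmask] / np.maximum(n[~gmask], 1)
    return g

cfa = np.arange(16, dtype=float).reshape(4, 4)
packed = reorder(cfa)                 # (2, 2, 4): ready for convolution
green = pre_interpolate_green(cfa)    # (4, 4): full-resolution green plane
```

Under the paper's framing, a network fed `packed` sees only raw phases, while a network fed interpolated planes like `green` also inherits the interpolation prior.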