{"title":"Feature-preserving quadrilateral mesh Boolean operation with cross-field guided layout blending","authors":"Weiwei Zheng, Haiyan Wu, Gang Xu, Ran Ling, Renshu Gu","doi":"10.1016/j.cagd.2024.102324","DOIUrl":"10.1016/j.cagd.2024.102324","url":null,"abstract":"<div><p>Compared to triangular meshes, high-quality quadrilateral meshes offer significant advantages in the field of simulation. However, generating high-quality quadrilateral meshes has always been a challenging task. By synthesizing high-quality quadrilateral meshes based on existing ones through Boolean operations such as mesh intersection, union, and difference, the automation level of quadrilateral mesh modeling can be improved. This significantly reduces modeling time. We propose a feature-preserving quadrilateral mesh Boolean operation method that can generate high-quality all-quadrilateral meshes through Boolean operations while preserving the geometric features and shape of the original mesh. Our method, guided by cross-field techniques, aligns mesh faces with geometric features of the model and maximally preserves the original mesh's geometric shape and layout. Compared to traditional quadrilateral mesh generation methods, our approach demonstrates higher efficiency, offering a substantial improvement to the pipeline of mesh-based modeling tools.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102324"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140769741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast parameterization of planar domains for isogeometric analysis via generalization of deep neural network","authors":"Zheng Zhan , Wenping Wang , Falai Chen","doi":"10.1016/j.cagd.2024.102313","DOIUrl":"https://doi.org/10.1016/j.cagd.2024.102313","url":null,"abstract":"<div><p>One prominent step in isogeometric analysis (IGA) is known as domain parameterization, that is, finding a parametric spline representation for a computational domain. Typically, domain parameterization is divided into two separate steps: identifying an appropriate boundary correspondence and then parameterizing the interior region. However, this separation significantly degrades the quality of the parameterization. To attain a high-quality parameterization, it is necessary to optimize both the boundary correspondence and the interior mapping simultaneously, referred to as integral parameterization. In prior research, an integral parameterization approach for planar domains based on neural networks was introduced. One limitation of that approach is that the neural network cannot generalize: a network has to be trained anew for each specific computational domain. In this article, we propose an efficient enhancement of this work, training a network that has the capacity to generalize—once the network is trained, a parameterization can be obtained immediately for each specific computational domain by evaluating the network. The new network speeds up the parameterization process by two orders of magnitude. We evaluate the performance of the new network on the MPEG data set and a self-designed data set, and experimental results demonstrate the superiority of our algorithm compared to state-of-the-art parameterization methods.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102313"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140649957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
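The speed-up reported above comes from replacing a per-domain optimization with a single forward pass of a trained network. The toy sketch below (not the authors' architecture; the MLP weights are arbitrary placeholders) illustrates what such an evaluation amounts to: a parameter point (u, v) goes in, a physical-domain point comes out.

```python
import math

def tanh_layer(x, W, b):
    """One dense layer with tanh activation."""
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def evaluate_parameterization(uv):
    """Map a parameter point (u, v) in [0, 1]^2 to a point in the physical
    domain. Weights are illustrative placeholders, not trained values."""
    W1 = [[0.5, -0.3], [0.2, 0.8], [-0.6, 0.1]]
    b1 = [0.0, 0.1, -0.1]
    W2 = [[1.0, 0.5, -0.2], [0.3, -0.7, 0.9]]
    b2 = [0.0, 0.0]
    h = tanh_layer(uv, W1, b1)
    return [sum(wij * hj for wij, hj in zip(row, h)) + bi
            for row, bi in zip(W2, b2)]

pt = evaluate_parameterization([0.5, 0.5])
```

In the trained setting, the weights encode the learned integral parameterization, and evaluating any number of parameter points is just as many cheap forward passes—the source of the two-orders-of-magnitude speed-up.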
{"title":"Evolutionary multi-objective high-order tetrahedral mesh optimization","authors":"Yang Ji , Shibo Liu , Jia-Peng Guo , Jian-Ping Su , Xiao-Ming Fu","doi":"10.1016/j.cagd.2024.102302","DOIUrl":"https://doi.org/10.1016/j.cagd.2024.102302","url":null,"abstract":"<div><p>High-order mesh optimization has many goals, such as improving smoothness, reducing approximation error, and improving mesh quality. The previous methods do not optimize these objectives together, resulting in suboptimal results. To this end, we propose a multi-objective optimization method for high-order meshes. Central to our algorithm is using the multi-objective genetic algorithm (MOGA) to adapt to the multiple optimization objectives. Specifically, we optimize each control point one by one, where the MOGA is applied. We demonstrate the feasibility and effectiveness of our method over various models. Compared to other state-of-the-art methods, our method achieves a favorable trade-off between multiple objectives.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102302"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140649958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
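The per-control-point MOGA step described above selects among candidate positions by Pareto dominance rather than by collapsing the objectives into one score. A minimal sketch of that selection step, with made-up objective vectors (e.g. distortion and approximation error, both minimized):

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(candidates):
    """Keep only candidates not dominated by any other (the Pareto front)."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates if q is not p)]

# hypothetical (distortion, approximation-error) values for one control point
candidates = [(0.2, 0.9), (0.5, 0.5), (0.3, 0.8), (0.6, 0.6)]
front = non_dominated(candidates)  # (0.6, 0.6) is dominated by (0.5, 0.5)
```

A full MOGA adds crossover, mutation, and crowding-based selection on top of this dominance test; the sketch shows only the criterion that produces the trade-off the abstract refers to.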
{"title":"Feature-preserving shrink wrapping with adaptive alpha","authors":"Jiayi Dai , Yiqun Wang , Dong-Ming Yan","doi":"10.1016/j.cagd.2024.102321","DOIUrl":"10.1016/j.cagd.2024.102321","url":null,"abstract":"<div><p>Recent advancements in shrink-wrapping-based mesh approximation have shown tremendous advantages for non-manifold defective meshes. However, these methods perform poorly on regions of the input mesh with sharp features and rich details. We propose an adaptive shrink-wrapping method based on the recent Alpha Wrapping technique, offering improved feature preservation while handling defective inputs. The proposed approach comprises three main steps. First, we compute a new sizing field capable of assessing the discretization density of non-manifold defective meshes. Then, we generate a mesh feature skeleton by projecting input feature lines onto the offset surface, ensuring the preservation of sharp features. Finally, an adaptive wrapping approach based on normal projection is applied to preserve regions with sharp features and rich details simultaneously. By conducting experimental tests on various datasets including Thingi10k, ABC, and GrabCAD, we demonstrate that our method exhibits significant improvements in mesh fidelity compared to the Alpha Wrapping method, while maintaining the manifold property inherited from shrink-wrapping methods.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102321"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140769915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generated realistic noise and rotation-equivariant models for data-driven mesh denoising","authors":"Sipeng Yang , Wenhui Ren , Xiwen Zeng , Qingchuan Zhu , Hongbo Fu , Kaijun Fan , Lei Yang , Jingping Yu , Qilong Kou , Xiaogang Jin","doi":"10.1016/j.cagd.2024.102306","DOIUrl":"10.1016/j.cagd.2024.102306","url":null,"abstract":"<div><p>3D mesh denoising is a crucial pre-processing step in many graphics applications. However, existing data-driven mesh denoising models, primarily trained on synthetic white noise, are less effective when applied to real-world meshes whose noise has complex intensities and distributions. Moreover, how to comprehensively capture information from input meshes and apply suitable denoising models for feature-preserving mesh denoising remains a critical and unresolved challenge. This paper presents a rotation-Equivariant model-based Mesh Denoising (EMD) model and a Realistic Mesh Noise Generation (RMNG) model to address these issues. Our EMD model leverages rotation-equivariant features and self-attention weights of geodesic patches for more effective feature extraction, thereby achieving state-of-the-art denoising results. The RMNG model, based on the Generative Adversarial Network (GAN) architecture, generates large quantities of realistic paired noisy and noise-free meshes for training data-driven mesh denoising models, significantly benefiting real-world denoising tasks. To address the smoothing degradation and loss of sharp edges commonly observed in captured meshes, we further introduce varying levels of Laplacian smoothing to the input meshes during paired training-data generation, endowing the trained denoising model with feature-recovery capabilities. Experimental results demonstrate the superior performance of our proposed method in preserving fine-grained features while removing noise on real-world captured meshes.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102306"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140787282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
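The RMNG model synthesizes realistic noise with a GAN, which cannot be reproduced in a few lines; as an illustrative stand-in, the sketch below builds a training pair the simple way: Laplacian-smoothing the clean input (mirroring the smoothing the abstract describes during pair generation) and perturbing vertices along their normals with Gaussian noise. The mesh, adjacency, and normals are toy placeholders.

```python
import random

def laplacian_smooth(vertices, neighbors, lam=0.5):
    """One explicit Laplacian step: blend each vertex toward its 1-ring centroid."""
    smoothed = []
    for i, v in enumerate(vertices):
        ring = [vertices[j] for j in neighbors[i]]
        centroid = [sum(c) / len(ring) for c in zip(*ring)]
        smoothed.append([vi + lam * (ci - vi) for vi, ci in zip(v, centroid)])
    return smoothed

def add_normal_noise(vertices, normals, sigma=0.01):
    """Perturb each vertex along its normal (a crude stand-in for GAN noise)."""
    return [[vi + random.gauss(0.0, sigma) * ni for vi, ni in zip(v, n)]
            for v, n in zip(vertices, normals)]

# toy 4-vertex planar patch with chain adjacency and upward normals
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
normals = [[0, 0, 1]] * 4
smoothed = laplacian_smooth(verts, nbrs)
noisy = add_normal_noise(smoothed, normals)  # (noisy, verts) forms one training pair
```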
{"title":"Anisotropic triangular meshing using metric-adapted embeddings","authors":"Yueqing Dai , Jian-Ping Su , Xiao-Ming Fu","doi":"10.1016/j.cagd.2024.102314","DOIUrl":"https://doi.org/10.1016/j.cagd.2024.102314","url":null,"abstract":"<div><p>We propose a novel method to generate high-quality triangular meshes with specified anisotropy. Central to our algorithm is to present metric-adapted embeddings for converting the anisotropic meshing problem to an isotropic meshing problem with constant density. Moreover, the orientation of the input Riemannian metric forms a field, enabling us to use field-based meshing techniques to improve regularity and penalize obtuse angles. To achieve such metric-adapted embeddings, we use the cone singularities, which are generated to adapt to the input Riemannian metric. We demonstrate the feasibility and effectiveness of our method over various models. Compared to other state-of-the-art methods, our method achieves higher quality on all metrics in most models.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102314"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140649959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
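The conversion from anisotropic to isotropic meshing hinges on measuring lengths in the input Riemannian metric rather than in Euclidean space. For a 2D metric M = [[a, b], [b, c]] (symmetric positive definite), the length of an edge vector e is sqrt(eᵀ M e); a minimal sketch of that measurement:

```python
import math

def metric_length(e, M):
    """Length of edge vector e = (x, y) under the metric encoded as
    M = (a, b, c), i.e. the SPD matrix [[a, b], [b, c]]."""
    x, y = e
    a, b, c = M
    return math.sqrt(a * x * x + 2 * b * x * y + c * y * y)

# identity metric recovers Euclidean length; setting a = 4 doubles
# measured lengths along x, so a unit-length mesh edge there must shrink
iso = metric_length((1.0, 0.0), (1.0, 0.0, 1.0))
aniso = metric_length((1.0, 0.0), (4.0, 0.0, 1.0))
```

A metric-adapted embedding is precisely a deformation of the domain in which these metric lengths become ordinary Euclidean lengths, so a standard isotropic mesher can be run there.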
{"title":"Unpaired high-quality image-guided infrared and visible image fusion via generative adversarial network","authors":"Hang Li, Zheng Guan, Xue Wang, Qiuhan Shao","doi":"10.1016/j.cagd.2024.102325","DOIUrl":"10.1016/j.cagd.2024.102325","url":null,"abstract":"<div><p>Current infrared and visible image fusion (IVIF) methods lack ground truth and require prior knowledge to guide the feature fusion process. However, the fusion process does not give these features equal and well-defined roles, which degrades image quality. To address this challenge, this study develops a new end-to-end model, termed unpaired high-quality image-guided generative adversarial network (UHG-GAN). Specifically, we introduce a high-quality image as the reference standard for the fused image and employ a global discriminator and a local discriminator to identify the distribution difference between the high-quality image and the fused image. Through adversarial learning, the generator produces images that better match the high-quality reference. In addition, we design a Laplacian pyramid augmentation (LPA) module in the generator, which integrates multi-scale features of the source images across domains so that the generator can more fully extract structure and texture information. Extensive experiments demonstrate that our method effectively preserves the target information in the infrared image and the scene information in the visible image while significantly improving image quality.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102325"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140797052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
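The LPA module builds on the classic Laplacian pyramid, which stores, per level, the detail removed by blurring and downsampling. A 1D toy version of that decomposition and its exact inverse (the paper's module operates on 2D images with learned multi-scale fusion; this sketch only illustrates the pyramid itself):

```python
def blur(s):
    """3-tap [1, 2, 1]/4 blur with edge clamping."""
    n = len(s)
    return [(s[max(i - 1, 0)] + 2 * s[i] + s[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def upsample(low, n):
    """Nearest-neighbor upsampling back to length n."""
    return [low[min(i // 2, len(low) - 1)] for i in range(n)]

def laplacian_pyramid(s, levels):
    """Each level stores the detail lost to blur + downsample; last entry is the residual."""
    pyr, cur = [], s
    for _ in range(levels):
        low = blur(cur)[::2]
        pyr.append([a - b for a, b in zip(cur, upsample(low, len(cur)))])
        cur = low
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    """Invert the pyramid exactly: add each detail band onto the upsampled coarse signal."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = [u + d for u, d in zip(upsample(cur, len(detail)), detail)]
    return cur

signal = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0]
pyr = laplacian_pyramid(signal, 2)
```

Because each band records exactly what the downsampling discarded, reconstruction is lossless; fusion methods exploit this by merging bands from different sources before reconstructing.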
{"title":"Skeleton based tetrahedralization of surface meshes","authors":"Aleksander Płocharski , Joanna Porter-Sobieraj , Andrzej Lamecki , Tomasz Herman , Andrzej Uszakow","doi":"10.1016/j.cagd.2024.102317","DOIUrl":"10.1016/j.cagd.2024.102317","url":null,"abstract":"<div><p>We propose a new method for generating tetrahedralizations of 3D surface meshes. The method builds upon a segmentation of the mesh that forms a rooted skeleton structure. Each segment in the structure is fitted with a stamp: a predefined basic shape with a regular and well-defined topology. After molding each stamp to the shape of its assigned segment, we connect the segments with a layer of tetrahedra using a new approach to stitching two triangulated surfaces with tetrahedra. Our method not only generates a tetrahedralization with regular topology mimicking a bone-like structure with tissue grouped around it, but also achieves running times that would allow for real-time usage. The running time of the method is closely correlated with the density of the input mesh, which allows for controlling the expected time by decreasing the vertex count while still preserving the general shape of the object.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102317"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0167839624000517/pdfft?md5=ea6fec521d15182f3dcee80325fefe51&pid=1-s2.0-S0167839624000517-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140789775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time collision detection between general SDFs","authors":"Pengfei Liu , Yuqing Zhang , He Wang , Milo K. Yip , Elvis S. Liu , Xiaogang Jin","doi":"10.1016/j.cagd.2024.102305","DOIUrl":"10.1016/j.cagd.2024.102305","url":null,"abstract":"<div><p>Signed Distance Fields (SDFs) have found widespread utility in collision detection applications due to their superior query efficiency and ability to represent continuous geometries. However, little attention has been paid to calculating the intersection of two arbitrary SDFs. In this paper, we propose a novel, accurate, and real-time approach for SDF-based collision detection between two solids, both represented as SDFs. Our primary strategy entails using interval calculations and the SDF gradient to guide the search for intersection points within the geometry. For arbitrary objects, we take inspiration from existing collision detection pipelines and segment the two SDFs into multiple parts with bounding volumes. Once potential collisions between two parts are identified, our method quickly computes comprehensive intersection information such as penetration depth, contact points, and contact normals. Our method is general in that it accepts both continuous and discrete SDF representations. Experiment results show that our method can detect collisions in high-precision models in real time, highlighting its potential for a wide range of applications in computer graphics and virtual reality.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102305"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140770078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
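As a toy illustration of SDF-based collision detection (not the paper's interval-arithmetic pipeline), the sketch below intersects two analytic sphere SDFs and uses a numeric gradient of the intersection field to walk a sample point into the overlap region; a negative final field value indicates a collision.

```python
import math

def sphere_sdf(c, r):
    """Exact signed distance to a sphere with center c and radius r."""
    return lambda p: math.dist(p, c) - r

def intersection_sdf(f, g):
    """CSG intersection of two SDFs (a valid bound; exact only away from boundaries)."""
    return lambda p: max(f(p), g(p))

def numeric_gradient(f, p, h=1e-5):
    """Central-difference gradient of a scalar field at point p."""
    return [(f([p[k] + (h if k == i else 0.0) for k in range(3)])
             - f([p[k] - (h if k == i else 0.0) for k in range(3)])) / (2 * h)
            for i in range(3)]

f = sphere_sdf([0.0, 0.0, 0.0], 1.0)
g = sphere_sdf([1.5, 0.0, 0.0], 1.0)   # overlaps f on x in [0.5, 1.0]
F = intersection_sdf(f, g)

# gradient descent on F pushes the sample into the shared interior
p = [0.9, 0.0, 0.0]
for _ in range(200):
    grad = numeric_gradient(F, p)
    p = [pi - 0.1 * gi for pi, gi in zip(p, grad)]
colliding = F(p) < 0
```

The paper replaces this naive descent with interval calculations for robustness and additionally recovers penetration depth, contact points, and contact normals; the sketch shows only the gradient-guided search idea.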
{"title":"BrepMFR: Enhancing machining feature recognition in B-rep models through deep learning and domain adaptation","authors":"Shuming Zhang , Zhidong Guan , Hao Jiang , Xiaodong Wang , Pingan Tan","doi":"10.1016/j.cagd.2024.102318","DOIUrl":"https://doi.org/10.1016/j.cagd.2024.102318","url":null,"abstract":"<div><p>Feature Recognition (FR) plays a crucial role in modern digital manufacturing, serving as a key technology for integrating Computer-Aided Design (CAD), Computer-Aided Process Planning (CAPP), and Computer-Aided Manufacturing (CAM) systems. The emergence of deep learning methods in recent years offers a new approach to address challenges in recognizing highly intersecting features with complex geometric shapes. However, due to the high cost of labeling real CAD models, neural networks are usually trained on computer-synthesized datasets, resulting in noticeable performance degradation when applied to real-world CAD models. Therefore, we propose a novel deep learning network, BrepMFR, designed for Machining Feature Recognition (MFR) from Boundary Representation (B-rep) models. We transform the original B-rep model into a graph representation as network-friendly input, incorporating local geometric shape and global topological relationships. Leveraging a graph neural network based on the Transformer architecture and a graph attention mechanism, we extract a feature representation of high-level semantic information to achieve machining feature recognition. Additionally, employing a two-step training strategy under a transfer learning framework, we enhance BrepMFR's generalization ability by adapting synthetic training data to real CAD data. Furthermore, we establish a large-scale synthetic CAD model dataset inclusive of 24 typical machining features, showcasing diversity in geometry that closely mirrors real-world mechanical engineering scenarios. Extensive experiments across various datasets demonstrate that BrepMFR achieves state-of-the-art machining feature recognition accuracy and performs effectively on CAD models of real-world mechanical parts.</p></div>","PeriodicalId":55226,"journal":{"name":"Computer Aided Geometric Design","volume":"111 ","pages":"Article 102318"},"PeriodicalIF":1.5,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140649960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
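The graph representation BrepMFR consumes can be illustrated by its most basic ingredient: a face-adjacency graph built from B-rep topology, where two faces are connected when they share an edge. The face and edge identifiers below are toy placeholders, and the paper's actual input additionally attaches geometric and topological attributes to nodes and edges.

```python
from collections import defaultdict

def face_adjacency_graph(faces):
    """Build a face-adjacency graph from a map of face id -> bounding edge ids.
    Nodes are faces; an edge connects two faces that share a B-rep edge."""
    edge_to_faces = defaultdict(list)
    for fid, edges in faces.items():
        for e in edges:
            edge_to_faces[e].append(fid)
    graph = defaultdict(set)
    for fids in edge_to_faces.values():
        for a in fids:
            for b in fids:
                if a != b:
                    graph[a].add(b)
    return graph

# toy fragment of a solid: three faces chained by shared edges e1 and e2
faces = {"f0": ["e0", "e1"], "f1": ["e1", "e2"], "f2": ["e2", "e3"]}
g = face_adjacency_graph(faces)
```

Such a graph (with per-face and per-edge attributes added) is what the Transformer-based graph network then processes to label each face with a machining feature class.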