Shuming Zhang, Zhidong Guan, Hao Jiang, Tao Ning, Xiaodong Wang, Pingan Tan
{"title":"Brep2Seq:用于重建和生成计算机辅助设计模型的数据集和分层深度学习网络","authors":"Shuming Zhang, Zhidong Guan, Hao Jiang, Tao Ning, Xiaodong Wang, Pingan Tan","doi":"10.1093/jcde/qwae005","DOIUrl":null,"url":null,"abstract":"\n 3D reconstruction is a significant research topic in the field of Computer-Aided Design (CAD), which is used to recover editable CAD models from original shapes, including point clouds, voxels, meshes, and boundary representations (B-rep). Recently, there has been considerable research interest in deep model generation due to the increasing potential of deep learning methods. To address the challenges of 3D reconstruction and generation, we propose Brep2Seq, a novel deep neural network designed to transform the B-rep model into a sequence of editable parametrized feature-based modeling operations comprising principal primitives and detailed features. Brep2Seq employs an encoder-decoder architecture based on the Transformer, leveraging geometry and topological information within B-rep models to extract the feature representation of the original 3D shape. Due to its hierarchical network architecture and training strategy, Brep2Seq achieved improved model reconstruction and controllable model generation by distinguishing between the primary shape and detailed features of CAD models. To train Brep2Seq, a large-scale dataset comprising one million CAD designs is established through an automatic geometry synthesis method. Extensive experiments on both DeepCAD and Fusion 360 datasets demonstrate the effectiveness of Brep2Seq, and show its applicability to simple mechanical components in real-world scenarios. We further apply Brep2Seq to various downstream applications, including point cloud reconstruction, model interpolation, shape constraint generation and CAD feature recognition.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":null,"pages":null},"PeriodicalIF":4.8000,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Brep2Seq: A dataset and hierarchical deep learning network for reconstruction and generation of computer-aided design models\",\"authors\":\"Shuming Zhang, Zhidong Guan, Hao Jiang, Tao Ning, Xiaodong Wang, Pingan Tan\",\"doi\":\"10.1093/jcde/qwae005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n 3D reconstruction is a significant research topic in the field of Computer-Aided Design (CAD), which is used to recover editable CAD models from original shapes, including point clouds, voxels, meshes, and boundary representations (B-rep). Recently, there has been considerable research interest in deep model generation due to the increasing potential of deep learning methods. To address the challenges of 3D reconstruction and generation, we propose Brep2Seq, a novel deep neural network designed to transform the B-rep model into a sequence of editable parametrized feature-based modeling operations comprising principal primitives and detailed features. Brep2Seq employs an encoder-decoder architecture based on the Transformer, leveraging geometry and topological information within B-rep models to extract the feature representation of the original 3D shape. Due to its hierarchical network architecture and training strategy, Brep2Seq achieved improved model reconstruction and controllable model generation by distinguishing between the primary shape and detailed features of CAD models. 
To train Brep2Seq, a large-scale dataset comprising one million CAD designs is established through an automatic geometry synthesis method. Extensive experiments on both DeepCAD and Fusion 360 datasets demonstrate the effectiveness of Brep2Seq, and show its applicability to simple mechanical components in real-world scenarios. We further apply Brep2Seq to various downstream applications, including point cloud reconstruction, model interpolation, shape constraint generation and CAD feature recognition.\",\"PeriodicalId\":48611,\"journal\":{\"name\":\"Journal of Computational Design and Engineering\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2024-01-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Design and Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1093/jcde/qwae005\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Design and Engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1093/jcde/qwae005","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Brep2Seq: A dataset and hierarchical deep learning network for reconstruction and generation of computer-aided design models
3D reconstruction is a significant research topic in the field of Computer-Aided Design (CAD), which is used to recover editable CAD models from original shapes, including point clouds, voxels, meshes, and boundary representations (B-rep). Recently, there has been considerable research interest in deep model generation due to the increasing potential of deep learning methods. To address the challenges of 3D reconstruction and generation, we propose Brep2Seq, a novel deep neural network designed to transform the B-rep model into a sequence of editable parametrized feature-based modeling operations comprising principal primitives and detailed features. Brep2Seq employs an encoder-decoder architecture based on the Transformer, leveraging geometric and topological information within B-rep models to extract the feature representation of the original 3D shape. Owing to its hierarchical network architecture and training strategy, Brep2Seq achieves improved model reconstruction and controllable model generation by distinguishing between the primary shape and detailed features of CAD models. To train Brep2Seq, a large-scale dataset comprising one million CAD designs is established through an automatic geometry synthesis method. Extensive experiments on both the DeepCAD and Fusion 360 datasets demonstrate the effectiveness of Brep2Seq and show its applicability to simple mechanical components in real-world scenarios. We further apply Brep2Seq to various downstream applications, including point cloud reconstruction, model interpolation, shape constraint generation, and CAD feature recognition.
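The abstract describes a Transformer-based encoder-decoder that encodes the geometric and topological information of a B-rep model and decodes a sequence of feature-based modeling operations, organized hierarchically into principal primitives and detailed features. The sketch below illustrates that general idea in PyTorch; the module names, per-face feature representation, token vocabulary, and dimensions are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of a Brep2Seq-style hierarchical encoder-decoder.
# Per-face descriptors of a B-rep model are encoded with a Transformer encoder;
# two Transformer decoders emit (1) principal-primitive tokens and
# (2) detailed-feature tokens. All names and sizes are assumptions.
import torch
import torch.nn as nn


def _causal_mask(sz, device):
    # Standard autoregressive mask: position i may only attend to positions <= i.
    return torch.triu(torch.full((sz, sz), float("-inf"), device=device), diagonal=1)


class Brep2SeqSketch(nn.Module):
    def __init__(self, face_feat_dim=64, d_model=256, vocab_size=128, max_len=60):
        super().__init__()
        self.face_embed = nn.Linear(face_feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        # Two decoders: one for principal primitives, one for detailed features.
        self.primitive_decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.feature_decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        self.primitive_head = nn.Linear(d_model, vocab_size)
        self.feature_head = nn.Linear(d_model, vocab_size)

    def forward(self, face_feats, prim_tokens, feat_tokens):
        # face_feats: (B, num_faces, face_feat_dim) per-face geometric descriptors
        # prim_tokens, feat_tokens: (B, T) operation-token ids for teacher forcing
        memory = self.encoder(self.face_embed(face_feats))

        def decode(decoder, head, tokens):
            pos = torch.arange(tokens.size(1), device=tokens.device)
            tgt = self.token_embed(tokens) + self.pos_embed(pos)
            mask = _causal_mask(tokens.size(1), tokens.device)
            # Logits over the operation-token vocabulary at each step.
            return head(decoder(tgt, memory, tgt_mask=mask))

        return (decode(self.primitive_decoder, self.primitive_head, prim_tokens),
                decode(self.feature_decoder, self.feature_head, feat_tokens))
```

At inference time, a model of this form would generate operation tokens autoregressively from the two decoders and map them back to parameterized modeling operations (principal primitives first, then detailed features such as holes or fillets) to rebuild an editable CAD model, in the spirit of the pipeline the abstract outlines.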
About the journal:
Journal of Computational Design and Engineering is an international journal that aims to provide academia and industry with a venue for rapid publication of research papers reporting innovative computational methods and applications that achieve major breakthroughs, practical improvements, and bold new research directions across a wide range of design and engineering topics:
• Theory and its progress in computational advancement for design and engineering
• Development of computational frameworks to support large-scale design and engineering
• Interaction issues among humans, designed artifacts, and systems
• Knowledge-intensive technologies for intelligent and sustainable systems
• Emerging technology and convergence of technology fields presented with convincing design examples
• Educational issues for academia, practitioners, and future generations
• Proposals for new research directions, as well as surveys and retrospectives on mature fields.