Visual Computing for Industry, Biomedicine, and Art: Latest Articles

Indoor versus outdoor scene recognition for navigation of a micro aerial vehicle using spatial color gist wavelet descriptors.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-11-26 · DOI: 10.1186/s42492-019-0030-9
Anitha Ganesan, Anbarasu Balasubramanian

Abstract: In the context of improved navigation for micro aerial vehicles, a new scene recognition visual descriptor, called the spatial color gist wavelet descriptor (SCGWD), is proposed. The SCGWD combines the proposed Ohta color-GIST wavelet descriptors with census transform histogram (CENTRIST) spatial pyramid representation descriptors for categorizing indoor versus outdoor scenes. Binary and multiclass support vector machine (SVM) classifiers with linear and non-linear kernels were used to classify indoor versus outdoor scenes and indoor scenes, respectively. The paper also discusses, from an experimental perspective, the feature extraction methodology of several state-of-the-art visual descriptors and of the four proposed visual descriptors (Ohta color-GIST descriptors, Ohta color-GIST wavelet descriptors, enhanced Ohta color histogram descriptors, and SCGWDs). The proposed descriptors and the state-of-the-art visual descriptors were evaluated using the Indian Institute of Technology Madras Scene Classification Image Database 2 (IITM SCID2), an Indoor-Outdoor Dataset, and the Massachusetts Institute of Technology indoor scene classification dataset (MIT-67). Experimental results showed that the indoor-versus-outdoor scene recognition algorithm, employing an SVM with SCGWDs, produced the highest classification rates (CRs): 95.48% and 99.82% with the radial basis function (RBF) kernel, and 95.29% and 99.45% with the linear kernel, for the IITM SCID2 and Indoor-Outdoor datasets, respectively. The lowest CRs (2.08% and 4.92%, respectively) were obtained when the RBF and linear kernels were used with the MIT-67 dataset. In addition, higher CRs, precision, recall, and area under the receiver operating characteristic curve values were obtained for the proposed SCGWDs in comparison with state-of-the-art visual descriptors.

Citations: 4
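The enhanced Ohta color histogram descriptors mentioned above build on the Ohta color space, a linear transform of RGB. As a rough, hypothetical sketch of such a descriptor (not the authors' exact implementation, which the abstract does not detail), one can convert an RGB image to the three Ohta components and concatenate their normalized histograms:

```python
import numpy as np

def ohta_histogram_descriptor(rgb, bins=16):
    """Concatenated histograms of the three Ohta color components.

    Ohta components (a linear transform of RGB):
        I1 = (R + G + B) / 3      (intensity)
        I2 = (R - B) / 2          (red-blue opponent)
        I3 = (2G - R - B) / 4     (green-magenta opponent)
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i1 = (r + g + b) / 3.0
    i2 = (r - b) / 2.0
    i3 = (2.0 * g - r - b) / 4.0
    feats = []
    for comp in (i1, i2, i3):
        hist, _ = np.histogram(comp, bins=bins)
        feats.append(hist / comp.size)  # each histogram sums to 1
    return np.concatenate(feats)

# Example: descriptor of a random 32x32 RGB image
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
desc = ohta_histogram_descriptor(img)
print(desc.shape)  # (48,)
```

A descriptor like this would then be fed to the SVM classifier; the paper's full SCGWD additionally incorporates GIST, wavelet, and CENTRIST spatial pyramid components.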
Robustness of radiomic features in magnetic resonance imaging: review and a phantom study.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-11-20 · DOI: 10.1186/s42492-019-0025-6
Renee Cattell, Shenglan Chen, Chuan Huang

Abstract: Radiomic analysis has exponentially increased the amount of quantitative data that can be extracted from a single image. These imaging biomarkers can aid in the generation of prediction models aimed at furthering personalized medicine. However, the generalizability of a model depends on the robustness of these features. The purpose of this study is to review the current literature on the robustness of radiomic features in magnetic resonance imaging. Additionally, a phantom study is performed to systematically evaluate the behavior of radiomic features under various conditions (signal-to-noise ratio, region-of-interest delineation, voxel size change, and normalization methods) using intraclass correlation coefficients. The features extracted in this phantom study include first-order, shape, gray-level co-occurrence matrix, and gray-level run-length matrix features. Many features are found to be non-robust to changing parameters. Feature robustness assessment prior to feature selection, especially when combining multi-institutional data, may be warranted. Further investigation is needed in this area of research.

Citations: 69
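The intraclass correlation coefficient (ICC) used here to quantify feature robustness can be illustrated with a minimal one-way random-effects ICC(1,1) computation (a simplified sketch; the abstract does not specify which ICC form the authors use):

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1).

    data: (n_targets, k_repeats) array, e.g. one radiomic feature
    measured for n phantom regions under k scan conditions.
    """
    data = np.asarray(data, dtype=np.float64)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    # Between-target and within-target mean squares
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# A perfectly repeatable feature yields ICC = 1
print(icc_1_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # 1.0
```

Features whose ICC falls below a chosen threshold across conditions would be flagged as non-robust before feature selection.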
Energy enhanced tissue texture in spectral computed tomography for lesion classification.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-11-18 · eCollection Date: 2019-12-01 · DOI: 10.1186/s42492-019-0028-3
Yongfeng Gao, Yongyi Shi, Weiguo Cao, Shu Zhang, Zhengrong Liang

Abstract: Tissue texture reflects the spatial distribution of contrasts among image voxel gray levels, i.e., tissue heterogeneity, and has been recognized as an important biomarker in various clinical tasks. Spectral computed tomography (CT) is believed to enrich tissue texture by providing different voxel-contrast images at different X-ray energies. This paper therefore addresses two related issues for the clinical use of spectral CT, especially photon-counting CT (PCCT): (1) texture enhancement by spectral CT image reconstruction, and (2) spectral-energy-enriched tissue texture for improved lesion classification. For issue (1), we recently proposed a tissue-specific texture prior, in addition to a low-rank prior, for the individual energy-channel low-count image reconstruction problems in PCCT under Bayesian theory. Reconstruction results showed that the proposed method outperforms existing methods based on total variation (TV), low-rank TV, and tensor dictionary learning, in terms of both preserving texture features and suppressing image noise. For issue (2), this paper investigates three models for incorporating the texture enriched by PCCT, according to three types of input: the spectral images themselves, the co-occurrence matrices (CMs) extracted from the spectral images, and the Haralick features (HFs) extracted from the CMs. Studies were performed on simulated photon-counting data generated by applying an attenuation-energy response curve to traditional CT images from energy-integrating detectors. Classification results showed that the spectral-CT-enriched texture model improves the area under the receiver operating characteristic curve (AUC) by 7.3%, 0.42%, and 3.0% for the spectral images, CMs, and HFs, respectively, on five-energy spectral data compared with the original single-energy data. The CM and HF inputs achieved the best AUCs, of 0.934 and 0.927. This texture-themed study shows that incorporating clinically important prior information, such as tissue texture, into medical imaging, from upstream image reconstruction to downstream diagnosis, can benefit clinical tasks.

Citations: 2
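The co-occurrence matrices and Haralick features mentioned above can be sketched minimally in NumPy (an illustrative single-offset GLCM, not the paper's full multi-energy pipeline; contrast and homogeneity are two of the classic Haralick measures):

```python
import numpy as np

def glcm(img, levels=8):
    """Normalized symmetric gray-level co-occurrence matrix for the
    horizontal offset (0, 1). Pixel values must be in [0, levels)."""
    img = np.asarray(img)
    p = np.zeros((levels, levels), dtype=np.float64)
    left, right = img[:, :-1], img[:, 1:]
    for i, j in zip(left.ravel(), right.ravel()):
        p[i, j] += 1
        p[j, i] += 1  # symmetric counting
    return p / p.sum()

def haralick_contrast(p):
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)

def haralick_homogeneity(p):
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + (i - j) ** 2))

# A perfectly uniform patch has zero contrast and homogeneity 1
p = glcm(np.zeros((8, 8), dtype=int))
print(haralick_contrast(p), haralick_homogeneity(p))  # 0.0 1.0
```

In the paper's second and third input models, such CMs and HFs would be computed per energy channel and concatenated across the spectral images before classification.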
Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-10-29 · DOI: 10.1186/s42492-019-0022-9
Xingxing Chen, Weizhi Qi, Lei Xi

Abstract: In this study, we propose a deep-learning-based method to correct motion artifacts in optical-resolution photoacoustic microscopy (OR-PAM). The method is a convolutional neural network that establishes an end-to-end map from raw input data with motion artifacts to corrected output images. First, we performed simulation studies to evaluate the feasibility and effectiveness of the proposed method. Second, we employed the method to process images of rat brain vessels with multiple motion artifacts to evaluate its performance for in vivo applications. The results demonstrate that the method works well for both large blood vessels and capillary networks. In comparison with traditional methods, the proposed method can easily be adapted to different motion-correction scenarios in OR-PAM by revising the training sets.

Citations: 21
Scalable point cloud meshing for image-based large-scale 3D modeling.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-08-07 · DOI: 10.1186/s42492-019-0020-y
Jiali Han, Shuhan Shen

Abstract: Image-based 3D modeling is an effective method for reconstructing large-scale scenes, especially city-level scenarios. In the image-based modeling pipeline, obtaining a watertight mesh model from a noisy multi-view stereo point cloud is a key step toward ensuring model quality. However, some state-of-the-art methods rely on a global Delaunay-based optimization formed by all the points and cameras; thus, they encounter scaling problems when dealing with large scenes. To circumvent these limitations, this study proposes a scalable point-cloud meshing approach to aid the reconstruction of city-scale scenes with minimal time consumption and memory usage. First, the entire scene is divided along the x and y axes into several overlapping chunks so that each chunk can satisfy the memory limit. Then, the Delaunay-based optimization is performed to extract meshes for each chunk in parallel. Finally, the local meshes are merged together by resolving local inconsistencies in the overlapping areas between the chunks. We test the proposed method on three city-scale scenes with hundreds of millions of points and thousands of images, and demonstrate its scalability, accuracy, and completeness compared with state-of-the-art methods.

Citations: 4
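The chunking step described above (dividing the scene along x and y into overlapping chunks) can be sketched as follows. This is a hypothetical uniform-grid version; the paper's actual split is driven by the memory limit rather than a fixed grid:

```python
import numpy as np

def chunk_points(points, nx=2, ny=2, overlap=0.1):
    """Split 3D points into an nx-by-ny grid of chunks along x and y.

    Each chunk is enlarged by `overlap` (a fraction of the chunk size)
    so that neighboring chunks share points, allowing the per-chunk
    meshes to be merged consistently later.

    Returns a list of index arrays, one per chunk.
    """
    pts = np.asarray(points, dtype=np.float64)
    lo, hi = pts[:, :2].min(axis=0), pts[:, :2].max(axis=0)
    size = (hi - lo) / np.array([nx, ny])
    pad = size * overlap
    chunks = []
    for ix in range(nx):
        for iy in range(ny):
            cmin = lo + size * np.array([ix, iy]) - pad
            cmax = lo + size * np.array([ix + 1, iy + 1]) + pad
            mask = np.all((pts[:, :2] >= cmin) & (pts[:, :2] <= cmax), axis=1)
            chunks.append(np.nonzero(mask)[0])
    return chunks

pts = np.random.rand(1000, 3)
chunks = chunk_points(pts)
# Every point falls in at least one chunk; points in overlap regions
# appear in more than one, which is what enables the merge step.
assert set(np.concatenate(chunks)) == set(range(len(pts)))
```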
Intensity-curvature functional based digital high pass filter of the bivariate cubic B-spline model polynomial function.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-08-02 · DOI: 10.1186/s42492-019-0017-6
Carlo Ciulla, Grace Agyapong

Abstract: This research addresses the design of an intensity-curvature functional (ICF) based digital high-pass filter (HPF). The ICF is calculated from the bivariate cubic B-spline model polynomial function, and the resulting filter is called the ICF-based HPF. To calculate the ICF, the model function needs to be second-order differentiable and to have a non-null classic curvature at the origin (0, 0) of the pixel coordinate system. The theoretical basis of this research is the intensity-curvature concept, which replaces the signal intensity with the product of the signal intensity and the sum of the second-order partial derivatives of the model function. Extending the concept to two dimensions (2D) makes it possible to calculate the ICF of an image. A theoretical treatise is presented to demonstrate the hypothesis that the ICF is a high-pass-filtered signal. Empirical evidence then validates this assumption and extends the comparison between the ICF-based HPF and ten other HPFs, among them the traditional HPF and a particle swarm optimization (PSO) based HPF. Comparisons of image space and k-space magnitude indicate that the HPFs behave differently. Traditional HPF filtering and ICF-based filtering are superior to PSO-based filtering. Images filtered with the traditional HPF are sharper than images filtered with the ICF-based filter. The contributions of this research can be summarized as follows: (1) a mathematical description of the constraints that the ICF needs to obey in order to function as an HPF; (2) the mathematics of the ICF-based HPF of the bivariate cubic B-spline; (3) image-space comparisons between HPFs; and (4) k-space magnitude comparisons between HPFs. This research confirms the mathematical procedure to use when designing a 2D HPF from a model bivariate polynomial function.

Citations: 2
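The core of the intensity-curvature concept (intensity multiplied by the sum of second-order partial derivatives) can be illustrated numerically with finite differences. This is a simplified discrete sketch, not the paper's closed-form derivation over the bivariate cubic B-spline model function:

```python
import numpy as np

def icf_like_map(img):
    """Signal intensity times the sum of second-order partial derivatives,
    approximated with central finite differences on the image grid.
    Boundary rows/columns are left at zero."""
    f = np.asarray(img, dtype=np.float64)
    fxx = np.zeros_like(f)
    fyy = np.zeros_like(f)
    fxx[1:-1, :] = f[2:, :] - 2 * f[1:-1, :] + f[:-2, :]
    fyy[:, 1:-1] = f[:, 2:] - 2 * f[:, 1:-1] + f[:, :-2]
    return f * (fxx + fyy)

# A linear ramp has zero second derivatives everywhere, so the map
# vanishes: only regions of changing gradient (edges, texture) survive,
# which is the high-pass behavior the paper argues for.
ramp = np.tile(np.arange(8, dtype=float), (8, 1))
print(np.abs(icf_like_map(ramp)).max())  # 0.0
```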
Modeling and simulation of an anatomy teaching system.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-08-02 · DOI: 10.1186/s42492-019-0019-4
Xiaoqin Zhang, Jingyi Yang, Na Chen, Shaoxiang Zhang, Yifa Xu, Liwen Tan

Abstract: Specimen observation and dissection have long been regarded as the best approach to teaching anatomy, but the severe shortage of anatomical specimens in recent years has seriously affected the quality of anatomy teaching. To disseminate anatomical knowledge effectively under these circumstances, this study discusses in detail three key factors (modeling, perception, and interaction) involved in constructing virtual anatomy teaching systems. To ensure the authenticity, integrity, and accuracy of the modeling, detailed three-dimensional (3D) digital anatomical models are constructed using multi-scale data, such as the Chinese Visible Human dataset, clinical imaging data, tissue sections, and other sources. An anatomical knowledge ontology is built according to the needs of the particular teaching purposes. Various kinds of anatomical knowledge and the 3D digital anatomical models are organically combined to construct a virtual anatomy teaching system by means of virtual reality equipment and technology. The perception of knowledge is realized by the Yi Chuang Digital Human Anatomy Teaching System that we have created. The virtual interaction mode, which is similar to actual anatomical specimen observation and dissection, can enhance the transmissibility of anatomical knowledge. This virtual anatomy teaching system captures the three key factors. It can provide realistic and reusable teaching resources, expand the new medical education model, and effectively improve the quality of anatomy teaching.

Citations: 19
Assessing performance of augmented reality-based neurosurgical training.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-07-03 · DOI: 10.1186/s42492-019-0015-8
Wei-Xin Si, Xiang-Yun Liao, Yin-Ling Qian, Hai-Tao Sun, Xiang-Dong Chen, Qiong Wang, Pheng Ann Heng

Abstract: This paper presents a novel augmented reality (AR)-based neurosurgical training simulator that provides a natural way for surgeons to learn neurosurgical skills. Surgical simulation with bimanual haptic interaction is integrated to provide a simulated environment in which users receive holographic guidance for pre-operative training. To achieve AR guidance, the simulator must precisely overlay the 3D anatomical information of the hidden target organs of the patient, as in real surgery. To this end, patient-specific anatomical structures are reconstructed from segmented brain magnetic resonance imaging, and we propose a registration method for precise mapping between the virtual and real information. In addition, the simulator provides bimanual haptic interaction in a holographic environment to mimic real brain tumor resection. In this study, we conduct an AR-based guidance validation and a user study on the developed simulator, which demonstrate the high accuracy of our AR-based neurosurgery simulator, as well as the potential of the AR guidance mode to improve neurosurgery by simplifying the operation, reducing its difficulty, shortening the operation time, and increasing its precision.

Citations: 22
Dynamically loading IFC models on a web browser based on spatial semantic partitioning.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-06-03 · DOI: 10.1186/s42492-019-0011-z
Hong-Lei Lu, Jia-Xing Wu, Yu-Shen Liu, Wan-Qi Wang

Abstract: Industry Foundation Classes (IFC) is an open and neutral data format specification for building information modeling (BIM) that plays a crucial role in facilitating interoperability. With the increase in web-based BIM applications, there is an urgent need for fast loading of large IFC models in a web browser. However, fully loading a large IFC model typically consumes a large amount of browser memory, or even crashes the browser, which significantly limits further BIM applications. To address this issue, a method is proposed for dynamically loading IFC models based on spatial semantic partitioning (SSP). First, the spatial semantic structure of an input IFC model is partitioned on the server by extracting story information and establishing a component spatial index table. Subsequently, based on user interaction, only the model data the user is interested in is transmitted, loaded, and displayed on the client. The method is implemented with the Web Graphics Library, enabling large IFC models to be loaded quickly in the web browser without any plug-ins. Compared with conventional methods that load all IFC model data for display, the proposed method significantly reduces browser memory consumption, thereby allowing the loading of large IFC models. Compared with the existing method of spatial partitioning for 3D data, the proposed SSP uses only the semantic information in the IFC file itself, and thereby provides a better interactive experience for users.

Citations: 3
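The server-side partitioning described above can be sketched as a simple story-keyed index table. The component records here are hypothetical; a real implementation would extract the story containment relations from the IFC file itself (for example, with a parser such as IfcOpenShell) before building the index:

```python
from collections import defaultdict

def build_story_index(components):
    """Group component IDs by the building story they belong to, so the
    server can transmit only the story a user requests."""
    index = defaultdict(list)
    for comp in components:
        index[comp["story"]].append(comp["id"])
    return dict(index)

# Hypothetical component records extracted from an IFC model
components = [
    {"id": "wall-01", "story": "Level 1"},
    {"id": "slab-01", "story": "Level 1"},
    {"id": "wall-07", "story": "Level 2"},
]
index = build_story_index(components)
print(index["Level 1"])  # ['wall-01', 'slab-01']
```

On user interaction, the client would request one key of this table at a time instead of the whole model, which is what keeps browser memory bounded.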
Slicing and support structure generation for 3D printing directly on B-rep models.
Q4 (CAS), Computer Science
Visual Computing for Industry, Biomedicine, and Art · Pub Date: 2019-05-22 · DOI: 10.1186/s42492-019-0013-x
Kanle Shi, Conghui Cai, Zijian Wu, Junhai Yong

Abstract: Traditional 3D printing is based on stereolithography (standard tessellation language, STL) models, which contain much redundant data and have low precision. This paper proposes a slicing and support structure generation algorithm that works directly on boundary representation (B-rep) models. First, surface slicing is performed by efficiently computing the intersection curves between the faces of the B-rep model and each slicing plane. Then, the normals of the B-rep model are used to detect where support structures are needed, and the support structures are generated. Experimental results show the efficiency and stability of our algorithm.

Citations: 8
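The normal-based support detection can be illustrated with the common overhang-angle rule (a generic sketch under the usual 45-degree assumption; the paper evaluates normals directly on B-rep faces rather than on mesh triangles):

```python
import numpy as np

def needs_support(face_normals, max_overhang_deg=45.0):
    """Flag faces whose outward unit normal points downward more steeply
    than the printable overhang angle (build direction = +z)."""
    n = np.asarray(face_normals, dtype=np.float64)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    # A face needs support when the angle between its normal and the
    # downward direction (0, 0, -1) is smaller than max_overhang_deg,
    # i.e. when the z-component is below -cos(max_overhang_deg).
    return n[:, 2] < -np.cos(np.radians(max_overhang_deg))

normals = np.array([
    [0.0, 0.0, -1.0],  # horizontal downward-facing face -> support
    [0.0, 0.0, 1.0],   # upward-facing face -> no support
    [1.0, 0.0, 0.0],   # vertical wall -> no support
])
print(needs_support(normals))  # [ True False False]
```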