International Conference on Quality Control by Artificial Vision: Latest Publications

Development of a 3D visualization system for the cerebral aneurysm coil embolization
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2589084
H. Kadota, S. Fujisawa, M. Oono, R. Morigaki, N. Yamamoto, Y. Kanematsu, M. Ishihara, M. Sisibori
Abstract: In this study, a system has been developed to draw the three-dimensional coordinates of a catheter in real time using Unity for cerebral aneurysm coil embolization, one of the endovascular treatments. The system detects the tip of the catheter by binarization and delineation in X-ray images taken from two directions (frontal and lateral views) and calculates the tip coordinates within a 3D model of the blood vessels generated from DICOM images using VTK. With this system, surgeons can visualize the catheter tip in 3D more effectively by using a head-mounted display, a virtual reality device. In an evaluation experiment, we compared the proposed catheter tip detection with conventional methods and found it to be faster and more accurate. Future tasks include accurate registration of the 2D images with the 3D model and rendering the catheter beyond its tip.
Citations: 0
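
The geometric core here, recovering a 3D tip coordinate from frontal and lateral X-ray views, can be illustrated with a toy sketch. It assumes idealized orthogonal parallel projections sharing a vertical axis (the paper's actual imaging geometry is not specified here), and detect_tip is a crude stand-in for the paper's binarization-and-delineation step:

```python
import numpy as np
import cv2

def detect_tip(xray_img, thresh=200):
    """Binarize an X-ray image and return the lowest bright pixel
    as a crude stand-in for the catheter-tip detector."""
    _, binary = cv2.threshold(xray_img, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(binary)
    if len(ys) == 0:
        return None
    i = np.argmax(ys)  # deepest bright point along the image's vertical axis
    return xs[i], ys[i]

def tip_3d(frontal_img, lateral_img):
    """Fuse two orthogonal views into one 3D coordinate.
    Frontal view -> (x, z); lateral view -> (y, z); z is averaged."""
    f = detect_tip(frontal_img)
    l = detect_tip(lateral_img)
    if f is None or l is None:
        return None
    x, z_f = f
    y, z_l = l
    return np.array([x, y, (z_f + z_l) / 2.0])
```
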
Proposal for 3D visualization of cardiac mechanical activity by hybrid stereo measurement
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2590865
H. Aoki, T. Shiga, A. Suzuki, K. Takeuchi
Abstract: In this study, we propose a new non-contact cardiac motion measurement method aimed at treating cardiac mechanical activity (its physical contraction/expansion behavior) as clinical information. The proposed method is a hybrid that combines passive and active stereo, simultaneously realizing three-dimensional reconstruction of the chest shape and measurement of the minute chest changes caused by cardiac motion. Experiments with a prototype system showed that the respiratory motion and the cardiac motion appearing on the chest can be extracted separately by the proposed method. In addition, based on the extracted cardiac motion signal, cardiac mechanical activity can be visualized by expressing the minute displacement of the chest surface as a color change.
Citations: 0
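
One plausible way to realize the separation of respiratory and cardiac components is frequency-domain filtering, since respiration (roughly 0.1-0.5 Hz) and heartbeat (roughly 0.8-3 Hz) occupy distinct bands. The sketch below applies scipy Butterworth band-pass filters to a chest-displacement time series; the cutoff frequencies are assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def separate_motions(displacement, fs):
    """Split a chest-surface displacement signal sampled at fs (Hz)
    into respiratory and cardiac components by band-pass filtering.
    Cutoffs are illustrative assumptions, not the paper's values."""
    b_resp, a_resp = butter(3, [0.1, 0.5], btype="bandpass", fs=fs)
    b_card, a_card = butter(3, [0.8, 3.0], btype="bandpass", fs=fs)
    respiration = filtfilt(b_resp, a_resp, displacement)
    cardiac = filtfilt(b_card, a_card, displacement)
    return respiration, cardiac

# Example: synthetic 0.25 Hz breathing plus a much smaller 1.2 Hz heartbeat
fs = 60.0
t = np.arange(0, 30, 1 / fs)
signal = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
resp, card = separate_motions(signal, fs)
```
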
Inner and outer 3D shape reconstruction of puppet head from CT images
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2588748
H. Ukida, Yoshitaka Hatakenaka, Masahide Tominaga, Tomoyo Sasao, K. Terada, Atsuya Yoshida
Abstract: This paper proposes a method to estimate the 3D shape of a puppet head for digital archiving. To reconstruct the inner shape of the head, we use CT images. First, we divide the volume into four regions (wood, hair, paint, and air) by thresholds derived from manually indicated regions. We then refine these regions with a graph-cut method, and likewise extract the 3D shapes of the parts inside the puppet head using graph cuts. We apply these methods to real puppet heads and confirm the effectiveness of the proposed method.
Citations: 2
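
The first stage described above, splitting the CT volume into four material regions by thresholds before graph-cut refinement, can be sketched as follows; the threshold values are hypothetical placeholders, since the paper derives them from manually indicated regions:

```python
import numpy as np

# Hypothetical CT-intensity thresholds separating the four materials;
# the paper derives these from manually specified seed regions.
THRESHOLDS = [-500, 0, 300]
LABELS = ["air", "hair", "wood", "paint"]

def initial_labeling(ct_volume):
    """Assign each voxel one of four material labels by thresholding.
    ct_volume: 3D numpy array of CT intensities.
    Returns an int label volume (0..3) used to seed graph-cut refinement."""
    return np.digitize(ct_volume, bins=THRESHOLDS)

volume = np.random.normal(0, 400, size=(64, 64, 64))  # stand-in CT data
labels = initial_labeling(volume)
print({LABELS[i]: int((labels == i).sum()) for i in range(4)})
```
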
Digital twins of human corneal endothelium from generative adversarial networks
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2586772
Eloi Dussy Lachaud, Andrew Caunes, G. Thuret, Y. Gavet
Abstract: The human corneal endothelium, the posterior-most layer of the cornea, is a monolayer of flat cells essential for maintaining the cornea's transparency over time. Endothelial cells are easily visualized in patients using a specular microscope, a routine device, but accurate cell counting and cell morphometry determination have remained challenging for decades. The first automatic segmentation methods used mathematical morphology techniques or the principles of the Fourier transform. In recent years, convolutional neural networks have further improved the results, but they need a large training database, which takes a long time to collect. This work therefore proposes a method for simulating digital twins of the images observed in specular microscopy, in order to enrich medical databases.
Citations: 2
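
As a hint of what GAN-based image synthesis involves, here is a minimal DCGAN-style generator in PyTorch that maps a latent vector to a grayscale tile; the architecture and sizes are illustrative assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator: latent vector -> 64x64 grayscale image.
    Illustrative architecture only; not the network used in the paper."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                     # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

g = Generator()
fake_tiles = g(torch.randn(8, 100))  # batch of 8 synthetic 64x64 tiles
print(fake_tiles.shape)              # torch.Size([8, 1, 64, 64])
```
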
Aperture localization based on vibro-stimuli generated from distance variation
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2589107
I. Endo, Shinichiro Morita, Kazuho Kito, Harumi Kobayashi, N. Mukawa, Akio Nakamura
Abstract: The purpose of this research is to verify that the proposed method, which generates vibro-stimuli according to distance variation, can support depth-edge detection for aperture localization. We have proposed a method to help visually impaired people understand their surrounding environment via modality conversion from distance variation to haptic vibration, and have developed a wearable device that provides the user with vibro-stimuli generated from visual variation, inspired by optical-flow characteristics. The characteristic of this vibration stimulus is that the user feels a strong vibration when the distance difference per unit time is large. This feature is expected to be effective for perceiving the boundaries of obstacles, i.e., depth edges, which may contribute to the user's aperture localization. To verify its effectiveness, an aperture localization experiment was conducted comparing the proposed method with a conventional ETA-like method while varying the number of vibration motors and the vibration scheme. We asked six blindfolded subjects to guess the location of the aperture. The experimental results showed that the proposed method achieved a 98.3% correct rate using a single motor. The contributions of this research are a way of generating vibro-stimuli from distance variation for localizing apertures, one of the important environmental elements, and validation of its effectiveness.
Citations: 1
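
The stimulus mapping described above, stronger vibration when the distance difference per unit time is larger, might look like the following sketch; the gain, clipping, and sensor/motor interface are hypothetical, not the device's actual parameters:

```python
import time

class VibroMapper:
    """Convert distance variation (m/s) into a vibration duty cycle in [0, 1].
    Gain and clipping are illustrative assumptions, not the paper's values."""
    def __init__(self, gain=2.0):
        self.gain = gain
        self.prev_distance = None
        self.prev_time = None

    def update(self, distance_m):
        """Feed one range reading; return the motor duty cycle to apply."""
        now = time.monotonic()
        duty = 0.0
        if self.prev_distance is not None:
            dt = now - self.prev_time
            if dt > 0:
                rate = abs(distance_m - self.prev_distance) / dt
                duty = min(1.0, self.gain * rate)  # large variation -> strong vibration
        self.prev_distance, self.prev_time = distance_m, now
        return duty
```
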
A reconstruction method of 3D face model from front and side 2D face images using deep learning model
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2588983
Ryota Nishio, M. Oono, T. Goto, Takahiro Kishimoto, M. Shishibori
Abstract: In this study, we focus on automatic three-dimensional (3D) face reconstruction from two-dimensional (2D) face images using a deep learning model. Conventional methods have produced models that reconstruct 3D faces from 2D images; however, for Japanese faces these models are inaccurate, with large errors in areas such as the nose and mouth, because most of the training data consist of non-Japanese face images. To solve this problem, we propose a method that uses not only a frontal 2D face image but also a side-view 2D face image for 3D face reconstruction; the resulting 3D model is a combination of the two reconstructed models, created from the frontal and side-view images, merged using the iterative closest point (ICP) algorithm. As a result, the accuracy of the proposed method is better than that of the conventional method.
Citations: 0
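
The merging step, aligning the two partial reconstructions with ICP, can be sketched with a bare-bones point-to-point ICP in numpy/scipy; the paper does not specify its ICP variant or parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Bare-bones point-to-point ICP aligning source (Nx3) to target (Mx3).
    Returns the transformed source points; a sketch, not the paper's pipeline."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Nearest-neighbor correspondences
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Optimal rigid transform via the Kabsch algorithm
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # reflection-safe rotation
        t = mu_t - R @ mu_s
        # 3. Apply the transform and iterate
        src = src @ R.T + t
    return src
```
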
A comparative study of batch ensemble for multi-object tracking approximations in embedded vision
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2589037
Robert Nsinga, S. Karungaru, K. Terada
Abstract: We present a series of adaptations for low-probability-distribution scenarios to detect and track multiple moving objects of interest. We investigate the benefits of linearizing the loss trajectory in training neural networks, mainly addressing the lack of auto-differentiation in MOTA evaluations, and observe which characteristics can support parallelism and differential computation and to what extent these observations contribute to our objectives. Using benchmarks from DeepMOT and CenterNet, we highlight the use of sparsemax activations by mounting a finite number of independent, asynchronous detectors to augment performance and gain from compounded accuracy. Empirical results show promising gains when applying parallelization on low-powered, low-latency embedded systems in cases where automatic differentiation is available.
Citations: 1
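
Sparsemax, which the abstract highlights, is the Euclidean projection of a score vector onto the probability simplex (Martins & Astudillo, 2016); unlike softmax, it can assign exactly zero probability to low-scoring entries. A minimal numpy version:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax of a 1D score vector z: the Euclidean projection of z
    onto the probability simplex (Martins & Astudillo, 2016)."""
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # Support: largest k with 1 + k * z_(k) > sum of the top-k scores
    support = k[1 + k * z_sorted > cumsum]
    k_max = support[-1]
    tau = (cumsum[k_max - 1] - 1) / k_max  # threshold shared by the support
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 1.0, 0.1])))  # [1. 0. 0.]: low scores map to exact zeros
```
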
Multi-modal action segmentation in the kitchen with a feature fusion approach
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2591752
Shunsuke Kogure, Y. Aoki
Abstract: In this paper, we propose a multi-modal action segmentation approach that uses three modalities, (i) video, (ii) audio, and (iii) thermal, to classify cooking behavior in the kitchen. These three modalities are assumed to carry features relevant to cooking; however, no public dataset contains all three, so we built an original dataset with frame-level annotations. We then examined the usefulness of action segmentation with multi-modal features, analyzing the effect of each modality using three evaluation metrics. As a result, the accuracy, edit distance, and F1 score improved by up to about 1%, 2%, and 8%, respectively, compared to using images alone.
Citations: 0
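
A common way to realize this kind of feature fusion (not necessarily the authors' exact model) is frame-level late fusion: features from each modality are concatenated and fed to a shared classification head, as in the PyTorch sketch below with assumed feature dimensions:

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Frame-level late fusion: concatenate video, audio, and thermal
    features, then classify the cooking action. Dimensions and head
    are illustrative assumptions, not the paper's model."""
    def __init__(self, d_video=512, d_audio=128, d_thermal=64, n_actions=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(d_video + d_audio + d_thermal, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, video, audio, thermal):
        fused = torch.cat([video, audio, thermal], dim=-1)
        return self.head(fused)  # per-frame action logits

model = FusionClassifier()
logits = model(torch.randn(32, 512), torch.randn(32, 128), torch.randn(32, 64))
print(logits.shape)  # torch.Size([32, 10])
```
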
Polynomial-fitting based calibration for an active 3D sensing system using dynamic light section method
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2590827
Mikihiro Ikura, Sarthak Pathak, A. Yamashita, H. Asama
Abstract: This paper proposes a polynomial-fitting based calibration method for an active 3D sensing system using the dynamic light section method, in which the relative pose of the line laser is changed dynamically at high speed to extend the measurement area at low computational cost. To conduct 3D sensing, it is necessary to know the equation of the laser plane. In the proposed calibration method, part of the line laser is projected onto a reference plane fixed in the sensing system, and correspondences are collected between the normal vectors of the laser plane and the image coordinates of the bright point on the reference plane. These correspondences are then regressed with a polynomial function, so the plane equation of the line laser can be obtained at any given moment without modeling the complicated system geometry. In a measurement accuracy evaluation of the dynamic light section method calibrated by polynomial fitting, a target at a distance of 800 mm was measured with a mean error of -5.94 mm and a standard deviation of 13.19 mm while rotating the line laser at 210 rpm.
Citations: 2
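
The calibration idea, regressing plane parameters onto a polynomial so the laser-plane equation is available at any instant, can be sketched with numpy's polyfit. Here each plane coefficient is fitted as a polynomial in the bright-point image coordinate u, a simplification of the paper's normal-vector correspondences:

```python
import numpy as np

def fit_plane_polynomials(u_obs, planes, degree=3):
    """Fit each laser-plane coefficient (a, b, c, d in ax+by+cz+d=0) as a
    polynomial in the bright-point image coordinate u on the reference plane.
    u_obs: (N,) observed coordinates; planes: (N, 4) plane parameters.
    A simplification of the paper's correspondences, for illustration."""
    return [np.polyfit(u_obs, planes[:, i], degree) for i in range(4)]

def plane_at(u, coeff_polys):
    """Evaluate the laser-plane equation for a new observation u."""
    return np.array([np.polyval(p, u) for p in coeff_polys])

# Calibration with synthetic data, then instant lookup at runtime
u_obs = np.linspace(100, 500, 40)
planes = np.column_stack([0.002 * u_obs, np.full(40, 0.1),
                          np.ones(40), -0.5 * u_obs])  # synthetic ground truth
polys = fit_plane_polynomials(u_obs, planes)
print(plane_at(300.0, polys))
```
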
Weed detection by analysis of multispectral images acquired under uncontrolled illumination conditions
International Conference on Quality Control by Artificial Vision. Pub Date: 2021-07-16. DOI: 10.1117/12.2586823
A. Amziane, O. Losson, B. Mathon, L. Macaire, A. Dumenil
Abstract: Localized weed control is one of the promising solutions for improving herbicide application in crop fields. To target weeds exclusively during spraying, their location in the field must be accurately determined. Because weeds have colorimetric properties similar to those of crops, their detection can be difficult, especially under varying illumination conditions. Among the available technologies, multispectral cameras provide radiance images with a high spectral resolution, allowing vegetation signatures to be analyzed beyond the visible domain and into the near infrared. In this study, we address the problem of outdoor weed detection and identification using multispectral and RGB-NIR imaging.
Citations: 1
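
A standard first step in multispectral vegetation analysis (a generic technique, not necessarily the authors' pipeline) is a vegetation index such as NDVI, computed from the near-infrared and red bands, which is more robust to illumination changes than raw color:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance
    bands (float arrays in [0, 1]). Vegetation typically scores high."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, thresh=0.4):
    """Boolean vegetation mask; the threshold is an illustrative assumption.
    Separating weeds from crops within the mask needs a learned classifier."""
    return ndvi(nir, red) > thresh

nir = np.random.rand(128, 128)  # stand-in reflectance bands
red = np.random.rand(128, 128)
mask = vegetation_mask(nir, red)
print(mask.mean())  # fraction of pixels flagged as vegetation
```
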