Latest publications from the International Conference on Quality Control by Artificial Vision

Semi-automatic tools for nanoscale metrology and annotations for deep learning automation on electron microscopy images
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.2690493
I. Sanou, J. Baderot, Y. Benezeth, S. Bricq, F. Marzani, S. Martínez, J. Foucher
For semiconductor applications, billions of objects are manufactured for a single device such as a central processing unit (CPU), storage drive, or graphics processing unit (GPU). To obtain functional devices, each element of the device has to meet precise dimensional and physical specifications at the nanoscale. The typical pipeline consists of annotating an object in an image and then taking measurements of the object. Manually annotating images is extremely time-consuming. In this paper, we propose a robust and fast semi-automatic method to annotate an object in a microscopy image. The approach is a deep learning contour-based method that first detects the object and then finds its contour using a constrained loss function. This constraint follows the physical meaning of electron microscopy images: it improves the quality of the boundary detail at the vertices of each object by matching the predicted vertices to the most likely contour. The loss is computed during training for each object of our dataset in a proximal manner. The approach was tested on three different types of datasets. The experiments showed that our approach achieves state-of-the-art performance on several microscopy image datasets.
Citations: 0
Registration of thermal 3D models over time using low cost and portable devices
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.3000628
Evelyn Gutierrez, B. Castañeda, S. Treuillet
Thermography is a highly beneficial non-invasive and non-contact tool that finds applications in various fields, such as building inspection, industrial equipment monitoring, quality control, and medical evaluations. Analyzing the surface temperature of an object at different points in time, and under varying conditions, can help detect defects, cracks, and anomalies in industrial components. In this study, we propose a framework for reproducible and quantitative measurement of surface temperature changes over time using thermal 3D models created with low-cost and portable devices. We present the application of this framework in two cases: analyzing temperature changes over time in a plastic container, and analyzing temperature changes before and after medical treatment of a chronic wound. The results on the plastic container and on the chronic wound show that our approach to multi-temporal registration of thermal 3D models could be a cost-effective and practical solution for studying temperature changes in various applications.
Citations: 0
Object detection model-based quality inspection using a deep CNN
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.2689921
Mohamed Chetoui, M. Akhloufi
In the competitive world of the metal industry, where companies have to offer quality products, quality control is crucial. However, it takes a considerable amount of time, especially when performed manually. An Automatic Fault Detection (AFD) system reduces the workload for companies, saves time and money, and improves the use of available resources. Deep learning can be used efficiently to develop such an AFD system. In this article, we present the development of deep learning (DL) algorithms for quality control. We trained state-of-the-art DL models (YOLO v8n, YOLO v8s, YOLO v8m, YOLO v8l, and YOLO v8x) for a quality control task using a manually annotated dataset of three classes (neck scratch, scratch, and bent) for two objects (screw and metal nut). The results show very interesting scores for YOLO v8s, with an mAP@0.50 of 90.60%, a precision of 100%, and a recall of 94.0% averaged over the three classes. We also compared the performance of these models with a popular DL detector, Faster R-CNN X101, in order to confirm the performance of the developed models. The qualitative results show good detection of defects of different sizes (small, medium, and large). Our proposal yields very interesting results for deploying an AFD system in the metal industry.
Citations: 0
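The mAP@0.50 metric reported above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of that criterion follows; the boxes and function name are illustrative and not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half horizontally: inter=50, union=150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333333333333333
```

A detection with IoU below the 0.5 threshold would count as a false positive at mAP@0.50.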
Person segmentation and identification across multiple wearable cameras
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.2692433
Noriko Takemura, Haruya Sakashita, Shizuka Shirai, Mehrasa Alizadeh, Hajime Nagahara
Recent major developments in the understanding of human social interactions have greatly contributed to the development of computers with social interaction capabilities. Many studies have investigated the understanding of human interaction from cameras. Identifying people across multiple videos is important for exploring human social interactions in group activities. We propose a framework for person segmentation and identification across videos captured by multiple wearable cameras. The proposed method comprises a local tracking module for tracking people in a single video and a global matching module for matching people across multiple videos. The method uses global consistency to identify people across multiple videos and ensures spatio-temporal consistency within a single video. We have demonstrated the effectiveness of our proposed method in comparison with a baseline method using public datasets and our own dataset.
Citations: 0
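The abstract does not specify how the global matching module is implemented; one common way to obtain a globally consistent one-to-one matching between tracklets from two cameras is the Hungarian algorithm over an appearance-distance matrix. The matrix values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical appearance-distance matrix between tracklets seen by
# camera A (rows) and camera B (columns); lower means more similar.
dist = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.9, 0.3],
])

# A globally consistent one-to-one assignment minimizing total distance
rows, cols = linear_sum_assignment(dist)
print(cols.tolist())  # [0, 1, 2]: each tracklet in A matched to its twin in B
```

Unlike greedy per-pair matching, the assignment is jointly optimal, which is one way to enforce the global consistency the paper refers to.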
Co-design of an active depth from defocus camera for surface inspection
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.2692630
B. Buat, P. Trouvé-Peloux, F. Champagnat, T. Simon
Structured illumination combined with a monocular 3D camera leveraging the estimation of defocus blur has been proposed in the literature for industrial surface inspection. The accuracy of such an active depth from defocus (ADFD) system depends on the camera/projector and processing parameters. Here, we propose to optimize the settings of an ADFD system using a performance model that predicts the theoretical depth estimation accuracy for a given set of optical, projector, and processing parameters. The accuracy of the optimized system is then evaluated experimentally. Moreover, we provide experimental results on real objects, including metallic parts, compared to a reference depth map obtained with an active stereoscopic camera.
Citations: 0
Visual inspection of complex mechanical assemblies based on Siamese networks for 3D point clouds
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.2692751
Velibor Došljak, Igor Jovančević, J. Orteu
This paper proposes a solution to the problem of visual mechanical assembly inspection by processing point cloud data acquired with a 3D scanner. The approach is based on deep Siamese neural networks for 3D point clouds. To overcome the requirement for a large amount of labeled training data, only synthetically generated data is used for training and validation. Real acquired point clouds are used only in the testing phase.
Citations: 0
OLF: RGB-D adaptive late fusion for robust 6D pose estimation
International Conference on Quality Control by Artificial Vision Pub Date : 2023-07-28 DOI: 10.1117/12.2690943
Théo Petitjean, Zongwei Wu, C. Demonceaux, O. Laligant
RGB-D 6D pose estimation has recently gained significant research attention due to the complementary information provided by depth data. However, in real-world scenarios, especially in industrial applications, the depth and color images are often noisy. Existing methods typically employ fusion designs that average RGB and depth features equally, which may not be optimal. In this paper, we propose a novel fusion design that adaptively merges RGB-D cues. Our approach assigns two learnable weights, α1 and α2, to adjust the RGB and depth contributions with respect to the network depth. This enables us to improve robustness against low-quality depth input in a simple yet effective manner. We conducted extensive experiments on the 6D pose estimation benchmark and demonstrated the effectiveness of our method. We evaluated our network in conjunction with DenseFusion on two datasets (LineMod and YCB) using similar noise scenarios to verify the usefulness of reinforcing the fusion with the α1 and α2 parameters. Our experiments show that our method outperforms existing methods, particularly in low-quality depth input scenarios. We plan to make our source code publicly available for future research.
Citations: 0
Separation of compound actions with wrist and finger based on EMG
International Conference on Quality Control by Artificial Vision Pub Date : 2021-07-16 DOI: 10.1117/12.2585334
Eisuke Yamamoto, Momoyo Ito, S. Ito, M. Fukumi
In this paper, we propose to measure the EMG signals of the wrist and fingers using dry-type sensors worn near the wrist, and to separate the measured data into wrist and finger EMGs using independent component analysis (ICA). This lets us confirm the wrist and finger EMGs within a compound motion and realize individual identification in more complex motions. The final goal of this study is to identify individual motions from compound motions. In this paper, as a preliminary step, ICA is used to separate compound motions, and the validity of the method is evaluated. We measured EMGs over three days for four motions. The results of combinations of FastICA, Infomax, and JADE were evaluated by the correlation coefficient with the original signal. The most accurate combination was FastICA + Infomax, with an accuracy of 70.5%.
Citations: 0
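The FastICA separation step can be illustrated with synthetic signals standing in for wrist and finger EMG, following the classic blind-source-separation setup: mix two independent sources, unmix them, and check recovery by correlation, as the paper does. The waveforms and mixing matrix below are invented for the demo:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 40 * t)           # stand-in for a wrist component
s2 = np.sign(np.sin(2 * np.pi * 13 * t))  # stand-in for a finger burst pattern
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

# Mix the sources as two electrodes near the wrist would record them
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# Recover the independent components from the mixed recordings
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Match recovered components to sources by absolute correlation
# (ICA leaves order and sign of the components undetermined)
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(corr.max(axis=1))  # each source recovered with high correlation
```

The paper evaluates real recordings the same way, via the correlation coefficient between separated and original signals.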
Identify solar panel defects by using differences between solar panels
International Conference on Quality Control by Artificial Vision Pub Date : 2021-07-16 DOI: 10.1117/12.2586911
J. Deng, T. Minematsu, A. Shimada, R. Taniguchi
Automatic solar panel inspection systems are essential to maintain power generation efficiency and reduce costs. Thermal images generated by thermographic cameras can be used for solar panel fault diagnosis because defective panels show abnormal temperatures. However, it is difficult to identify an anomaly from a single panel image when similar temperature features appear in both normal and abnormal panels. In this paper, we propose a difference-feature-based method to identify defective solar panels in thermal images. To determine whether an input panel image is abnormal, we apply a voting strategy using the prediction results of a subtraction network. In our experiments, we construct two datasets to evaluate our method: a clean panel dataset built from manually extracted panel images, and a noise-containing dataset consisting of panel images extracted by an automatic panel extraction method. Our method achieves more than 90% classification accuracy on both the clean panel dataset and the noise-containing dataset.
Citations: 1
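The voting strategy over the subtraction network's pairwise outputs can be sketched as a simple majority vote. The function name, the 0/1 encoding, and the tie-breaking choice are illustrative assumptions, not details given in the abstract:

```python
from collections import Counter

def vote_abnormal(pairwise_preds):
    """Majority vote over pairwise comparison results.

    pairwise_preds: 0/1 predictions from comparing the input panel
    against several reference panels (1 = 'differs', likely defective).
    Ties default to 0 (normal) here; the paper does not specify this.
    """
    counts = Counter(pairwise_preds)
    return int(counts[1] > counts[0])

print(vote_abnormal([1, 1, 0, 1]))  # 1: most comparisons flag a difference
print(vote_abnormal([0, 0, 1]))     # 0: most comparisons look normal
```

Voting over many panel-to-panel comparisons is what lets the method tolerate individual noisy predictions from the subtraction network.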
Latent space visualization of half face and full face by generative model
International Conference on Quality Control by Artificial Vision Pub Date : 2021-07-16 DOI: 10.1117/12.2588980
Zou Min, T. Akashi
Generally, most face detection and recognition tasks are based on training with intact facial images and their corresponding labels. The training image is supposed to contain as much of the facial area as possible, and sometimes expanding the training image to the upper body can also enhance learning. However, we noticed that both the three-dimensional structure and the two-dimensional frontal appearance of human faces are bilaterally symmetric. Little research makes use of this characteristic to simplify the learning process. We previously proposed a flipping strategy to apply facial symmetry to transfer learning and showed that training with half faces can achieve equivalent face recognition performance for a small group of individuals. This paper extends the transfer learning of cropped half-face images for face recognition rather than flipping the half face. Facial symmetry is utilized to improve face recognition through transfer learning of only half of the common human face image. We also investigate and explain why the half-face area is enough to accurately classify small groups of individuals. A variational autoencoder network is utilized to impose a probability distribution on the facial latent space. Finally, the dimensions of the facial latent space are reduced to visualize the distributed perceptual manifold of face identity.
Citations: 0