International Conference on Quality Control by Artificial Vision: Latest Publications

Effect of lighting conditions on grape quality control by artificial vision
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2688756
P. Beauseroy, A. Baussard, M. Panon, Marie Loyaux
Abstract: In order to develop a new device for automatic quality control of grapes stored in crates just before pressing, it is necessary to specify many parameters. Among these, lighting is particularly important, both for the recognition methods and for the physical design and cost of the control system. This study introduces a database of images of grapes in crates, created specifically for the study, and investigates the possibility of distinguishing healthy grapes from other visible elements (diseases, leaves, etc.) under four different lighting conditions and with two classifiers (SVM and CNN). The experimental results show the feasibility of the system and provide objective, quantified elements to guide its design.
Citations: 0
Estimation of subtle facial emotion changes from expressionless images
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2690777
Arvin Valderrama, Takumi Taketomi, Chandra Louis, Tamami Sanbongi, Akihiro Kuno, Satoru Takahashi, Takeshi Nagata
Abstract: Subtle changes in emotional expression occur more frequently than rich ones, which makes evaluating an individual's emotional response challenging. In this study, we focus on near-expressionless facial images, indicated by low arousal and valence values. We investigated which facial landmarks are crucial for estimating subtle emotion through a novel feature selection method named Random Combination Selection with Iterative Step (RACSIS). By combining appearance and geometrical features while reducing the feature points by up to 93.8%, the Mean Absolute Error (MAE) for Arousal = [-4, 8] and Valence = [-7, 6] was reduced to 54.95% and 46.39% for the full emotional spectrum and the subtle emotions, respectively. We then tested the performance of RACSIS in estimating the emotional response of participants undertaking audio-visual activities. We conclude that: (1) appearance features played the greater role in reducing the MAE; (2) feature selection (FS) by RACSIS achieved lower MAE values than correlation-based selection.
Citations: 0
Single-camera multi-point vision: on the use of robotics for digital image correlation
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2691470
Matthieu Vitse, Xuyang Chang, S. Roux
Abstract: We introduce in this paper a novel approach to the design of buckling tests in which the deformation of the sample is studied using a single camera mounted on a six-axis robot arm. This setup makes it possible to pre-determine multiple deformation configurations of the sample using a virtual model of the experiment. The calibration of the digital camera parameters is also studied, introducing a 3D target containing CharUco boards. Early results are given.
Citations: 0
We can pass through the wall gap: aperture passage perception via vibration stimuli generated from distance variation
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2692384
Shinichiro Morita, Yuki Kikuchi, Akira Saeki, I. Endo, Harumi Kobayashi, N. Mukawa, Akio Nakamura
Abstract: We have been developing a helmet-type spatial perception support system that provides the user with vibration stimuli in response to the varying distance to an obstacle. The purpose of this research is to propose a method for generating vibration stimuli that appropriately represent two environmental elements, walls and apertures, and to verify its effectiveness for aperture passage perception. Five vibro-motors are positioned at directional angles of 0 degrees (front) and 30 and 60 degrees to the left and right, and generate vibration stimuli whose intensity is calculated by assigning appropriate damping weights. We set the distance-dependent damping weights separately for each directional angle when calculating the vibration intensity to be generated by each motor. Experimental results demonstrate that the subjects were able to pass through the aperture in approximately 91% of trials, suggesting that the developed system and the proposed vibration stimuli generation method are effective for perceiving space from vibration stimuli provided to the head.
Citations: 0
Automated fish detection and classification on sonar images using detection transformer and YOLOv7
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2688330
Ella Mahoro, M. Akhloufi
Abstract: In order to maintain a healthy ecosystem and healthy fish stocks, it is necessary to monitor the abundance and frequency of fish species. In this article, we propose a fish detection and classification system. First, images were extracted from a public Ocqueoc River DIDSON high-resolution imaging sonar dataset and annotated. Two end-to-end object detection models, a Detection Transformer with a ResNet-50 backbone (DETR-ResNet-50) and YOLOv7, were used to detect and classify fish species. With a mean average precision of 0.79, YOLOv7 outperformed DETR-ResNet-50. The results demonstrate that the proposed system can be used to detect and classify fish species using high-resolution imaging sonar data.
Citations: 0
Scene understanding pipeline for maintenance oriented tasks automation
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2692213
Younes Zegaoui, S. Dufour, C. Bortolaso
Abstract: Computerized Maintenance Management Systems (CMMS) assist in organizing maintenance, both proactive and reactive, as well as technical operations. They usually involve constant surveillance and monitoring of equipment through repetitive and time-consuming tasks. AI can ease maintenance activities by reducing the time spent on these repetitive tasks, leaving more time for decision-making. In this article we present our work on automating part of the intervention request handling in Berger-Levrault's CMMS. We designed a pipeline of computer vision operations to predict the type of intervention needed from a picture of the situation at hand. The pipeline is essentially a decision tree that combines different computer vision models and funnels images between them according to their respective outputs; each model is trained separately on a specific task. To validate our approach, we performed a topic modeling analysis on the maintenance request forms to identify the ten most common topics of intervention. We show that our pipeline performs better than direct prediction by a scene recognition model, with a five-point increase in global F1 score (40% vs. 45%), an improvement that is even larger for the classes with fewer training examples (23% vs. 37%).
Citations: 0
OCR for laser marking quality assessment
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2691129
Jeanne Beyazian, J. Sadi
Abstract: Since 2020 in the USA and 2021 in Europe, all medical devices must be marked with a Unique Device Identification (UDI) code to ensure their traceability. UDI codes are laser marked, but the engraving process is error-prone due to laser-related or external conditions. Defects may be assessed visually, but this process is costly and gives rise to human errors. Using machine vision to perform this task for large batches of UDI codes is challenging because marking defects and image quality alter readability. We have therefore tested several learned methods with two goals: correctly recognizing characters and identifying marking defects on UDI codes. As the codes were engraved on cylindrical metallic surfaces with a metallic paint effect, we had to address the problem of specular and stray reflections by developing a tailor-made lighting engine. Our image grabbing and processing pipeline comprises an imaging device designed to prevent reflections on engraved codes; an Optical Character Recognition (OCR) algorithm (multilayer perceptron, support vector machine, or classical image segmentation); and a probabilistic model to detect faulty characters that need to be further qualified by a human operator. Our results show that the multilayer perceptron (MLP) and support vector machine (SVM) recognition performances are very close together, and both are above classical image segmentation.
Citations: 0
Construction of a grape quality index from RGB images of crates
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2688348
Soizic Lefevre, D. Nuzillard, A. Goupil
Abstract: Ranking the crates of grapes using a robust quality index is a major tool for operators during the Champagne grape harvest. We propose building such an index by processing RGB images of crates of grapes. Each image is segmented into six classes: healthy grape, crate, diseases (grey rot, powdery mildew, conidia), green elements (stalk, leaf, unripe healthy grape), shadow, and dry elements (dry leaf, dry grape, wood); the quality index reflects the proportion of the healthy part inside the crate. As the main pretreatment, the segmentation must be performed carefully, and a random-forest-based solution, trained on hand-tagged pixels, is proposed here for each variety of grape.
Citations: 0
Frequency perturbation analysis for anomaly detection using Fourier heat map
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2690078
Yoshikazu Hayashi, Hiroaki Aizawa, Shunsuke Nakatsuka, K. Kato
Abstract: Anomaly detection is an essential task in the industrial domain, and sophisticated approaches have been proposed. PaDiM is a promising direction, utilizing ImageNet-pretrained convolutional neural networks without expensive training costs. However, the cues and biases utilized by PaDiM, i.e., the shape-vs-texture bias in its anomaly detection process, are unclear. To reveal this bias, we propose applying frequency analysis to PaDiM. For the frequency analysis, we use a Fourier heat map, which investigates the sensitivity of the anomaly detection model to input noise in the frequency domain. We found that PaDiM utilizes texture information as a cue for anomaly detection, similar to classification models. Based on this preliminary experiment, we propose a shape-aware Stylized PaDiM: a PaDiM that uses pre-trained weights learned on Stylized ImageNet instead of ImageNet. In our experiments, we confirmed that Stylized PaDiM improves robustness to high-frequency perturbations. Stylized PaDiM also achieved higher performance than PaDiM for anomaly detection on clean images of MVTecAD.
Citations: 0
Sparse image measurement using deep compressed sensing to accelerate image acquisition in 3D XRM
International Conference on Quality Control by Artificial Vision Pub Date: 2023-07-28 DOI: 10.1117/12.2691418
Ying Hao Tan, N. Vun, B. Lee
Abstract: This paper proposes Sparse Matrix Deep Compressed Sensing (SM-DCS), which leverages compressive sensing and deep learning techniques for 3D X-ray Microscopy (XRM) applications. It enables up to an 85% reduction in the number of pixels to be measured while maintaining reasonably accurate image quality. Unlike other direct compressed sensing approaches, SM-DCS can be applied using existing measurement equipment: it works by measuring a subset of the image pixels and then performing a compressed sensing recovery process to recover each image slice. Experimental results demonstrate that SM-DCS produces reconstructed images comparable to a direct compressed sensing measurement approach on various performance metrics, but without the need to change existing equipment.
Citations: 0