Anais do XVII Workshop de Visão Computacional (WVC 2021) — Latest Publications

Evaluation of normalization technique on classification with deep learning features
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18898
A. D. Freitas, Adriano B. Silva, A. S. Martins, L. A. Neves, T. A. A. Tosta, P. D. Faria, M. Z. Nascimento
Abstract: Cancer is one of the diseases with the highest mortality rates in the world. Dysplasia is a difficult-to-diagnose precancerous lesion that may not have a good Hematoxylin and Eosin (H&E) stain ratio, making diagnosis difficult for the histology specialist. In this work, a method for normalizing H&E stains in histological images was investigated. The method uses a generative neural network based on a U-net for image generation and a PatchGAN architecture for discrimination. The normalized histological images were then used in classification algorithms to investigate detection of the level of dysplasia present in histological tissue of the oral cavity. CNN models as well as hybrid models based on learned features and machine learning algorithms were evaluated. The ResNet-50 architecture combined with the Random Forest algorithm achieved an accuracy of around 97% on images normalized with the investigated method.
Citations: 0
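The paper's normalization relies on a U-net generator with a PatchGAN discriminator; none of that code is reproduced here. As a much simpler illustration of what stain normalization aims to do — bring a slide's color statistics in line with a reference — the following sketch applies classical Reinhard-style mean/variance matching in NumPy (a different, non-GAN technique; the reference statistics and toy patch are made-up values for the example):

```python
import numpy as np

def reinhard_normalize(image, ref_mean, ref_std):
    """Match the per-channel mean/std of `image` to reference statistics.

    A classical alternative to GAN-based stain normalization: each
    channel is standardized, then rescaled to the reference slide's
    statistics (Reinhard et al. work in LAB space; RGB here for brevity).
    """
    img = image.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        mean, std = img[..., c].mean(), img[..., c].std() + 1e-8
        out[..., c] = (img[..., c] - mean) / std * ref_std[c] + ref_mean[c]
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy H&E-like patch and illustrative reference statistics
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
normalized = reinhard_normalize(patch, ref_mean=(180.0, 120.0, 160.0),
                                ref_std=(30.0, 25.0, 28.0))
```

After normalization, every patch shares the reference's color statistics, which is the property that makes downstream classifiers less sensitive to staining variation.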
Grocery Product Recognition to Aid Visually Impaired People
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18896
André Machado, K. Aires, R. Veras, L. B. Britto Neto
Abstract: This paper proposes a new approach to object recognition to assist visually impaired people, focusing on products usually found on grocery store shelves, in supermarkets, refrigerators, or pantries. We applied data augmentation along with other techniques and adjustments to different pre-trained CNNs (Convolutional Neural Networks). The approach achieved accuracy rates higher than those of the approaches proposed by the authors of the selected datasets, with the ResNet-50-based approach achieving the best results on the most recent datasets.
Citations: 1
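The abstract credits data augmentation applied before fine-tuning pre-trained CNNs; the paper does not list the exact transforms, so the sketch below shows only the generic label-preserving variants commonly used (flips and 90° rotations), in plain NumPy:

```python
import numpy as np

def augment(image):
    """Yield simple label-preserving variants of one image:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    yield image
    yield np.flip(image, axis=1)   # horizontal flip
    yield np.flip(image, axis=0)   # vertical flip
    for k in (1, 2, 3):            # 90, 180, 270 degree rotations
        yield np.rot90(image, k)

img = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
variants = list(augment(img))  # 6 variants per input image
```

Each input image thus contributes several training samples, which helps small product datasets support large pre-trained networks.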
Periocular authentication in smartphones applying uLBP descriptor on CNN Feature Maps
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18890
William Barcellos, A. Gonzaga
Abstract: The outputs of CNN layers, called activations, are composed of Feature Maps, which show textural information that can be extracted by a texture descriptor. Standard CNN feature extraction uses activations as feature vectors for object recognition. The goal of this work is to evaluate a new methodology for CNN feature extraction: instead of using the activations as a feature vector, we use a CNN as a feature extractor and apply a texture descriptor directly to the Feature Maps, using the features extracted by the descriptor as the feature vector for authentication. To evaluate the proposed method, we use the AlexNet CNN, previously trained on the ImageNet database, as a feature extractor, and apply the uniform LBP (uLBP) descriptor to the Feature Maps for texture extraction. We tested the method on the VISOB dataset, composed of periocular images taken with 3 different smartphones under 3 different lighting conditions. Our results show that using a texture descriptor on CNN Feature Maps achieves better performance than handcrafted computer vision methods or even standard CNN feature extraction.
Citations: 0
A comparative study of convolutional neural networks for classification of pigmented skin lesions
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18909
Natalia Camillo do Carmo, J. F. Mari
Abstract: Skin cancer is one of the most common types of cancer in Brazil, and its incidence rate has increased in recent years. Melanoma cases are more aggressive than non-melanoma skin cancers. Machine learning-based classification algorithms can help dermatologists diagnose whether a skin lesion is melanoma or non-melanoma cancer. We compared four convolutional neural network architectures (ResNet-50, VGG16, Inception-v3, and DenseNet-121) using different training strategies and validation methods to classify seven classes of skin lesions. The experiments used the HAM10000 dataset, which contains 10,015 images of pigmented skin lesions. We considered test accuracy to determine the best model for each strategy. DenseNet-121 was the best model when trained with fine-tuning and data augmentation, reaching 90% accuracy under k-fold cross-validation. Our results can help improve the use of machine learning algorithms for classifying pigmented skin lesions.
Citations: 1
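The headline result is reported under k-fold cross-validation. For readers unfamiliar with the protocol, a minimal index-only sketch (no training code, and the shuffling seed is an arbitrary choice) shows how the folds partition a dataset so every sample is tested exactly once:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Split shuffled sample indices into k folds and yield
    (train_indices, test_indices) pairs, one per fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Example: 10 samples, 5 folds -> each sample appears in exactly one test fold
splits = list(kfold_indices(10, 5))
```

Reporting the mean test accuracy across folds, as the paper does, gives a less optimistic estimate than a single train/test split.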
Neonatal Face Mosaic: An areas-of-interest segmentation method based on 2D face images
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18914
Pedro Henrique Silva Domingues, Renan Martins Mendes da Silva, Ibrahim Jamil Orra, Matheus Elias Cruz, T. Heiderich, C. Thomaz
Abstract: The daily life of preterm babies may involve long exposure to pain, causing problems in the development of the nervous system. In this context, an ongoing area of research is the development of image-based automatic pain detection systems. Built on techniques ranging from anatomical measurements to artificial intelligence, these systems generally face two main issues: categorizing the facial regions most relevant for identifying neonatal pain, and the practical difficulty posed by artifacts obstructing parts of the face. This paper proposes and implements an automatic areas-of-interest segmentation method that allows the creation of a novel dataset containing crops of neonatal faces relevant for pain classification, labelled by area-of-interest and pain status. Moreover, we also investigated the use of similarity matching techniques to compare each area-of-interest to the corresponding one extracted from a prototype face with no occlusion.
Citations: 1
The interference of optical zoom in human and machine classification of pollen grain images
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18897
Felipe Silveira Brito Borges, Juliana Velasques Balta, Milad Roghanian, A. B. Gonçalves, Marco A. Alvarez, H. Pistori
Abstract: Palynology is applied in different, constantly growing areas, such as archeology and allergy studies. However, no publication comparing human classification with machine learning classification at different optical scales was found in the literature. An image dataset with 17 pollen species that occur in Brazil was created, and machine learning algorithms were used for automatic classification and subsequent comparison with humans. The experiments presented here show how machine and human classification behave at different optical image scales. Satisfactory results were achieved, with 98.88% average accuracy for the machine and 45.72% for human classification. The results support a single scale pattern for capturing pollen grain images, both for future computer vision experiments and for faster advances in palynology.
Citations: 0
Pavement Crack Segmentation using a U-Net based Neural Network
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18893
Raido Lacorte Galina, Thadeu Pezzin Melo, K. S. Komati
Abstract: Cracks on a concrete surface are symptoms and precursors of structural degradation and hence must be identified and remedied. However, locating cracks is a time-consuming task that requires specialized professionals and special equipment. The use of neural networks for automatic crack detection emerges to assist in this task. This work proposes a U-Net-based neural network to perform crack segmentation, trained with the Crack500 and DeepCrack datasets separately. The U-Net used has seven contraction and seven expansion layers, which differs from the original architecture of four layers in each part. The IoU obtained by the model trained on Crack500 was 71.03%, and by the model trained on DeepCrack, 86.38%.
Citations: 0
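The segmentation quality above is reported as IoU (intersection over union); for binary crack masks it reduces to a few NumPy operations. This is a generic sketch of the metric, not the authors' evaluation code:

```python
import numpy as np

def binary_iou(pred, target):
    """IoU between two boolean masks: |A & B| / |A | B|.
    Returns 1.0 when both masks are empty (nothing to miss)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, target).sum() / union

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = binary_iou(pred, target)  # intersection 2, union 4 -> 0.5
```

Unlike pixel accuracy, IoU is not inflated by the large background area, which matters for thin structures like cracks.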
HandArch: A deep learning architecture for LIBRAS hand configuration recognition
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18883
Gabriel Peixoto de Carvalho, André Luiz Brandão, F. Ferreira
Abstract: Despite recent advancements in deep learning, sign language recognition persists as a challenge in computer vision due to its complexity in shape and movement patterns. Current studies that address sign language recognition treat hand pose recognition as an image classification problem. Based on this approach, we introduce HandArch, a novel architecture for real-time hand pose recognition from video, to accelerate the development of sign language recognition applications. Furthermore, we present Libras91, a novel dataset of Brazilian sign language (LIBRAS) hand configurations containing 91 classes and 108,896 samples. Experimental results show that our approach surpasses the accuracy of previous studies while working in real time on video files. The recognition accuracy of our system is 99% on the novel dataset and over 95% on other hand pose datasets.
Citations: 0
Unsupervised Segmentation of Cattle Images Using Deep Learning
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18886
Vinícius Guardieiro Sousa, A. Backes
Abstract: In this work, we used the deep learning architecture U-Net to segment images containing cattle in side view. We evaluated the ability of U-Net to segment images captured against different backgrounds and of different breeds, both acquired by us and taken from the Internet. Since cattle images present a more constant background than those of other applications, we also evaluated the performance of U-Net when varying the number of convolutional blocks and filters. Results show that U-Net can segment cattle images using fewer blocks and filters than the traditional U-Net, and that the number of blocks matters more than the total number of filters used.
Citations: 1
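Varying the number of contraction blocks, as this study does, also constrains the input: each block halves the spatial resolution, so an n-block encoder needs dimensions divisible by 2**n. The arithmetic can be sketched as below (the convention of doubling filters per block and the base filter count are standard U-Net assumptions, not values from the paper):

```python
def encoder_shapes(height, width, blocks, base_filters=64):
    """Feature-map shapes down a U-Net-style encoder where each block
    halves the spatial size and doubles the filter count."""
    if height % (2 ** blocks) or width % (2 ** blocks):
        raise ValueError("input size must be divisible by 2**blocks")
    shapes = []
    filters = base_filters
    for _ in range(blocks):
        height, width = height // 2, width // 2
        shapes.append((height, width, filters))
        filters *= 2
    return shapes

# A 256x256 image through a 4-block encoder
shapes = encoder_shapes(256, 256, blocks=4)
# -> [(128, 128, 64), (64, 64, 128), (32, 32, 256), (16, 16, 512)]
```

Shrinking `blocks` or `base_filters` is exactly the kind of capacity reduction the paper reports as viable for the near-constant backgrounds of cattle images.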
Application of Convolutional Neural Network in Coffee Capsule Count Aiming Collection System for Recycling
Anais do XVII Workshop de Visão Computacional (WVC 2021) Pub Date : 2021-11-22 DOI: 10.5753/wvc.2021.18907
Henrique Wippel Parucker da Silva, G. B. Santos
Abstract: Coffee capsules brought practicality and speed to preparing the drink. However, their popularization created a major environmental problem: a large amount of waste, estimated at 14 thousand tons in 2021 from the capsules alone. Avoiding this disposal requires recycling them, which is not a trivial job, since the capsules are composed of various materials, and collecting them presents its own challenges. A collection system is therefore of great value, one that, in addition to being automated, grants bonuses proportional to the quantity of discarded capsules. This work presents preliminary tests on the development of such a system, using a convolutional neural network to detect coffee capsules. The network was trained with two image sets, one containing images with reflections and the other without, and achieved an accuracy of approximately 97%.
Citations: 0