Anais Estendidos da XXXIV Conference on Graphics, Patterns and Images (SIBRAPI Estendido 2021): Latest Publications

KutralNext: An Efficient Multi-label Fire and Smoke Image Recognition Model
Authors: Angel Ayala, David Macêdo, C. Zanchettin, Francisco Cruz, Bruno Fernandes
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20007
Published: 2021-10-18
Abstract: Early-alert fire and smoke detection systems are crucial for management decision making in daily and security operations. One of the newer approaches to the problem is the use of images to perform the detection. Fire and smoke recognition from visual scenes is a demanding task due to the high variance of color and texture. In recent years, several fire-recognition approaches based on deep learning methods have been proposed to overcome this problem. Nevertheless, many developments have focused on surpassing the accuracy of previous state-of-the-art models, regardless of the computational resources needed to execute the model. This work studies the trade-off between accuracy and complexity of the inverted residual block and the octave convolution, two techniques that reduce a model's size and computation requirements. The literature suggests that these techniques work well by themselves; this research demonstrates that, combined, they achieve a better trade-off. We propose the KutralNext architecture, an efficient model with a reduced number of layers and computational requirements for single- and multi-label fire and smoke recognition tasks. Additionally, KutralNext+, a more efficient model improved with novel techniques, achieved an 84.36% average test accuracy on the FireNet, FiSmo, and FiSmoA fire datasets, and an 81.53% average test accuracy on the KutralSmoke and FiSmo fire-and-smoke datasets. Furthermore, compared with FireDetection, the state-of-the-art fire and smoke recognition model considered, KutralNext uses 59% fewer parameters, and KutralNext+ requires 97% fewer flops and is 4x faster.
Citations: 1
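
The efficiency claim above rests on combining inverted residual blocks with octave convolutions. As an illustration of the first ingredient only, here is a minimal PyTorch sketch of a generic inverted residual block (pointwise expansion, depthwise 3x3 convolution, linear projection, shortcut when shapes match); the layer widths and expansion factor are placeholders, not the published KutralNext configuration.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand -> depthwise conv -> project, with a shortcut when shapes match."""
    def __init__(self, in_ch, out_ch, stride=1, expand=4):
        super().__init__()
        hidden = in_ch * expand
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),       # pointwise expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),           # depthwise 3x3
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),        # linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_shortcut else out

x = torch.randn(1, 32, 64, 64)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```
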
Coffee Leaf Diseases Identification and Severity Classification using Deep Learning
Authors: E. Lisboa, Givanildo Lima, Fabiane Queiroz
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20039
Published: 2021-10-18
Abstract: In this paper, we propose a method for the automatic identification and classification of leaf diseases and pests in Brazilian Arabica coffee leaves. We developed a machine learning model, trained on the public BRACOL image dataset, to evaluate whether a given leaf image shows a disease or pest (Miner, Phoma, Cercospora or Rust) or is healthy. We then compared our model with other well-known classification models and achieved an accuracy of 98.04%, which greatly exceeds the accuracy of the other methods implemented. In addition, we developed a severity assessment that classifies each leaf by the percentage of its area affected by the disease, achieving an accuracy of approximately 90%.
Citations: 2
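
The abstract does not specify the network architecture, so the following is only a minimal sketch of a transfer-learning classifier for the five BRACOL classes (healthy plus Miner, Phoma, Cercospora, and Rust), using a torchvision ResNet-18 backbone as a stand-in; the authors' actual model, training schedule, and severity estimator are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # healthy, miner, phoma, cercospora, rust

# Stand-in backbone; in practice one would load pretrained weights and
# fine-tune on the BRACOL leaf images.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

dummy_batch = torch.randn(8, 3, 224, 224)   # 8 leaf images, 224x224 RGB
logits = model(dummy_batch)
print(logits.shape)                          # torch.Size([8, 5])
```
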
Methods for segmentation of spinal cord and esophagus in radiotherapy planning computed tomography
Authors: J. O. Diniz, A. Silva, A. Paiva
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20009
Published: 2021-10-18
Abstract: Organs at risk (OARs) are healthy tissues around the cancer that must be preserved in radiotherapy (RT). The spinal cord and the esophagus are crucial OARs. In this work, we propose methods for the segmentation of these OARs from CT images using image processing techniques and deep convolutional neural networks (CNNs). For spinal cord segmentation, two methods are proposed: the first uses techniques such as template matching, superpixels, and a CNN; the second uses adaptive template matching and a CNN. For esophagus segmentation, we propose a method composed of registration techniques, an atlas, pre-processing, a U-Net, and post-processing. The methods were applied to 36 planning CT images provided by The Cancer Imaging Archive. The first spinal cord segmentation method obtained a Dice of 78.20%, the second a Dice of 81.69%, and the esophagus segmentation method a Dice of 82.15%.
Citations: 0
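
All three results above are reported as Dice coefficients. For reference, here is a short NumPy sketch of that metric for binary masks; it is the standard definition, not code from the paper.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (values 0/1)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[16:48, 16:48] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:52, 16:48] = 1
print(round(dice(a, b), 3))  # two shifted squares overlap with Dice 0.875
```
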
On periodic tilings with regular polygons
Authors: José Ezequiel Soto Sánchez, Asla Medeiros e Sá, L. H. Figueiredo
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20025
Published: 2021-10-18
Abstract: The thesis describes a simple integer-based computational representation for periodic tilings of regular polygons using complex numbers, which is now the state of the art for these objects. Several properties of this representation are discussed, including elegant and efficient strategies for acquisition, reconstruction, rendering, and automatic crystallographic classification by symmetry detection. The thesis also describes a novel strategy for the enumeration and generation of triangle-square tilings via an equivalence with edge-labeled hexagonal graphs. The equivalence provides triangle-square tilings with an algebraic structure that allows an unfolding interpretation.
Citations: 2
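
As a rough illustration of what an exact, integer-based complex-number representation can look like (the thesis' actual encoding is not reproduced here), the sketch below stores points as integer coefficients over powers of zeta = exp(i*pi/6), so translations and 30-degree rotations stay in exact integer arithmetic; a walk around the edges of a regular hexagon closes to exactly zero, with no floating-point drift.

```python
import math

class Z12:
    """Exact element of Z[zeta], zeta = exp(i*pi/6), stored as integer
    coefficients (a, b, c, d) of 1, zeta, zeta^2, zeta^3.
    Uses the minimal polynomial relation zeta^4 = zeta^2 - 1."""
    def __init__(self, a=0, b=0, c=0, d=0):
        self.v = (a, b, c, d)

    def __add__(self, other):
        return Z12(*(x + y for x, y in zip(self.v, other.v)))

    def rot30(self):
        """Multiply by zeta (rotate by 30 degrees), exactly in integers."""
        a, b, c, d = self.v
        # (a + b z + c z^2 + d z^3) * z = -d + a z + (b + d) z^2 + c z^3
        return Z12(-d, a, b + d, c)

    def to_complex(self):
        z = complex(math.cos(math.pi / 6), math.sin(math.pi / 6))
        a, b, c, d = self.v
        return a + b * z + c * z**2 + d * z**3

# Walk around a unit hexagon: six unit steps, each rotated 60 degrees more.
step = Z12(1)        # the unit vector 1
pos = Z12()
for _ in range(6):
    pos = pos + step
    step = step.rot30().rot30()   # rotate the step direction by 60 degrees
print(pos.v)         # (0, 0, 0, 0): the walk closes exactly, no rounding
```
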
Avaliação de Modelos de Detecção de Objetos para Detectar Glomérulos em Imagens Histológicas (Evaluation of Object Detection Models for Detecting Glomeruli in Histological Images)
Authors: Abel Ramalho Galvão, Jonathan M. C. Rehem, W. L. C. D. Santos, Luciano Rebouças de Oliveira, A. A. Duarte, M. F. Angelo
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20028
Published: 2021-10-18
Abstract (translated from Portuguese): Glomeruli are renal structures responsible for filtering the blood and can be affected by lesions. Computational systems to assist in identifying these lesions are currently being developed, which makes the detection of glomeruli very important. The goal of this work is to evaluate the performance of object detection models for detecting glomeruli in digital histological images. Three models were evaluated: SM1 (SSD MobileNet v1), FRR50 (Faster R-CNN ResNet 50), and FRR101 (Faster R-CNN ResNet 101). FRR50 obtained the best result, with mAP = 0.88.
Citations: 0
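
The comparison above is reported in terms of mAP, which is built on the intersection over union (IoU) between predicted and ground-truth boxes. The snippet below is a plain-Python sketch of that IoU computation (the standard definition, not the authors' evaluation code).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 100, 100), (50, 0, 150, 100)))  # half-overlapping boxes -> 1/3
```
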
Virtual lines for offside situations analysis in football
Authors: Karim Ferreira Lima, R. M. D. Figueiredo, Eduardo Augusto Martins, Jean Schmith
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20037
Published: 2021-10-18
Abstract: Offside is one of the situations analyzed by the Video Assistant Referee (VAR); however, it has caused some controversy due to the delay in analyzing and deciding the infringement. This work proposes a method that assists in the analysis of offside situations and also makes such analysis available for non-professional matches. Image processing algorithms are used to determine offside situations in football matches from TV videos, in accordance with the game regulations. The method includes vanishing point identification, camera calibration, and the drawing of the virtual offside line. It presented good results on 10 videos selected for analysis, five from the right side of the field and five from the left side. One video was chosen as the basis for explaining the development of the method and demonstrating a situation with an automatically drawn virtual line determining an offside situation. The virtual line is shown in red when the manually selected player is offside and in green when he is not.
Citations: 0
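
The pipeline described (vanishing point, calibration, virtual line) can be illustrated with homogeneous coordinates, where the intersection of two image lines and the line through two points are both cross products. The sketch below uses hypothetical pixel coordinates and is not the authors' implementation.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Two pitch lines that are parallel on the field but converge in the image
# (hypothetical pixel coordinates).
l1 = line_through((100, 600), (400, 300))
l2 = line_through((900, 600), (600, 300))
vp = intersect(l1, l2)
print(vp)  # vanishing point, (500, 200) for these sample lines

# The virtual offside line joins the vanishing point to the image position
# of the manually selected player (hypothetical point).
offside_line = line_through(vp, (520, 450))
```
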
Segmentation and graph generation of muzzle images for cattle identification
Authors: Lucas Wojcik, J. Junior, D. Menotti, J. Hill
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20033
Published: 2021-10-18
Abstract: Current methods for organizing the records (i.e., cataloguing) of cattle are known to be archaic and inefficient, and often harmful to the animal; they include metal tags attached to the animal's ears like earrings and branding irons applied to their necks. Previous research on computer vision techniques for livestock identification used a mixture of texture features, such as Gabor filters and Local Binary Patterns, to extract identifying features for each animal. The presented approach proposes a novel technique that uses the muzzle image as an individual identifier, assuming that the muzzle RoI taken as input to the pipeline is already extracted and cropped. The task is performed in three steps. First, the muzzle image is segmented by a convolutional neural network, producing a bitmap from which a graph structure is extracted in the second phase. The final phase matches the resulting graph against the graphs previously extracted and stored in the database to find the optimal match. The segmentation shows a fidelity of around seventy percent, while the extracted graph perfectly represents the extracted bitmap. The matching algorithm is work in progress.
Citations: 0
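
The second phase (bitmap to graph) is not detailed in the abstract. As a rough sketch of one possible route, the code below skeletonizes a binary mask with scikit-image and takes skeleton pixels with three or more skeleton neighbours as candidate graph nodes; this is an assumption for illustration, not the paper's extraction procedure.

```python
import numpy as np
from skimage.morphology import skeletonize

def mask_to_nodes(mask):
    """Skeletonize a binary mask and return junction pixels as graph nodes."""
    skel = skeletonize(mask.astype(bool))
    nodes = []
    h, w = skel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if skel[y, x] and skel[y-1:y+2, x-1:x+2].sum() - 1 >= 3:
                nodes.append((x, y))
    return skel, nodes

# Toy mask: a one-pixel-wide cross, whose only junction is at the centre.
mask = np.zeros((21, 21), dtype=np.uint8)
mask[10, :] = 1
mask[:, 10] = 1
skel, nodes = mask_to_nodes(mask)
print(nodes)  # junction candidates, clustered around the centre of the cross
```
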
A Comparative Study of Text Document Representation Approaches Using Point Placement-based Visualizations
Authors: Hevelyn Sthefany Lima de Carvalho, Vinícius R. P. Borges
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20035
Published: 2021-10-18
Abstract: In natural language processing, text representation plays an important role and can affect the performance of language models and machine learning algorithms. Basic vector space models, such as term frequency-inverse document frequency, became popular approaches for representing text documents. In recent years, approaches based on word embeddings have been proposed to preserve the meaning and semantic relations of words, phrases, and texts. In this paper, we study the influence of different text representations on the quality of the 2D visual spaces (layouts) generated by state-of-the-art point placement-based visualizations. For that purpose, a visualization-assisted approach is proposed to support users when exploring such representations in classification tasks. Experiments using two public labeled corpora were conducted to assess the quality of the layouts and to discuss possible relations to classification performance. The results are promising, indicating that the proposed approach can guide users in understanding the relevant patterns of a corpus under each representation.
Citations: 0
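
As a minimal sketch of the kind of pipeline being compared (a vector space model projected to a 2D point-placement layout), the snippet below builds TF-IDF vectors with scikit-learn and projects them with t-SNE; the corpora, representations, and projection techniques actually studied in the paper are not reproduced here, and the tiny document list is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

docs = [
    "fire and smoke recognition with convolutional networks",
    "coffee leaf disease classification with deep learning",
    "offside detection in football broadcast video",
    "text representation with tf-idf and word embeddings",
]

X = TfidfVectorizer().fit_transform(docs)
# Project the sparse TF-IDF vectors to a 2D layout for visual inspection.
layout = TSNE(n_components=2, perplexity=2, init="random",
              random_state=0).fit_transform(X.toarray())
print(layout.shape)  # (4, 2): one 2D point per document
```
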
A Novel Human-Machine Hybrid Framework for Person Re-Identification from Full Frame Videos
Authors: Felix Olivier Sumari Huayta, E. Clua, Joris Guérin
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20013
Published: 2021-10-18
Abstract: With the widespread adoption of automation for city security, person re-identification (Re-ID) has been extensively studied. In this dissertation, we argue that the current way of studying person re-identification, i.e., trying to re-identify a person within already detected and pre-cropped images of people, is not sufficient for practical security applications, where the inputs to the system are the full frames of the video streams. To support this claim, we introduce the Full Frame Person Re-ID setting (FF-PRID) and define specific metrics to evaluate FF-PRID implementations. To improve robustness, we also formalize the hybrid human-machine collaboration framework, which is inherent to any Re-ID security application. To demonstrate the importance of considering the FF-PRID setting, we build an experiment showing that combining a good people detection network with a good Re-ID model does not necessarily produce good results for the final application. This underlines a failure of the current formulation in assessing the quality of a Re-ID model and justifies the use of different metrics. We hope that this work will motivate the research community to consider the full problem in order to develop algorithms that are better suited to real-world scenarios.
Citations: 0
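
As an illustration of the full-frame setting (detection and re-identification chained on whole frames, with candidate matches left for a human operator to confirm), here is a minimal sketch. The `detect` and `embed` arguments are hypothetical stand-ins for a person detector and a Re-ID embedding network, and the cosine-similarity threshold is an assumption, not the metrics or framework defined in the dissertation.

```python
import numpy as np

def full_frame_reid(frames, query_embedding, detect, embed, threshold=0.7):
    """FF-PRID sketch: detect people in each full frame, embed each crop,
    and keep crops similar to the query as candidates for human review."""
    matches = []
    for t, frame in enumerate(frames):
        for box in detect(frame):              # boxes as (x1, y1, x2, y2)
            x1, y1, x2, y2 = box
            crop = frame[y1:y2, x1:x2]
            emb = embed(crop)
            score = float(np.dot(emb, query_embedding) /
                          (np.linalg.norm(emb) * np.linalg.norm(query_embedding)))
            if score >= threshold:
                matches.append((t, box, score))  # passed on to a human operator
    return matches
```
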
Detecção de Fissuras Utilizando Redes Neurais Convolucionais (Crack Detection Using Convolutional Neural Networks)
Authors: R. P. C. D. Oliveira, C. Mauricio, V. N. D. Santos, F. F. F. Peres
DOI: https://doi.org/10.5753/sibgrapi.est.2021.20041
Published: 2021-10-18
Abstract (translated from Portuguese): Cracks in concrete are pathological manifestations and occur for many reasons, even when good practices are followed during construction. Large structures such as bridges, tunnels, and dams require periodic visual inspections to detect cracks, diagnose their cause, and, when possible, repair them; when repair is not possible, the crack's behavior must be monitored. Many computational techniques for crack detection have been proposed, but their application is limited because crack images vary widely, and extracting information such as the location of a crack in an image requires pixel-level segmentation. In this context, this work presents a proposal using Detectron2, inspired by the Mask R-CNN convolutional neural network, which supports object detection, instance segmentation, panoptic segmentation, and semantic segmentation.
Citations: 0
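
The proposal builds on Detectron2's Mask R-CNN support for pixel-level crack segmentation. As a hedged stand-in, the sketch below instantiates torchvision's Mask R-CNN implementation with two classes (background and crack) and runs it on a random image; it is not the authors' Detectron2 configuration or trained model.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Instance-segmentation model from the same Mask R-CNN family used via
# Detectron2 in the paper; torchvision's implementation is used here.
model = maskrcnn_resnet50_fpn(num_classes=2)  # background + crack
model.eval()

image = torch.rand(3, 480, 640)               # one RGB image in [0, 1]
with torch.no_grad():
    output = model([image])[0]
print(list(output.keys()))  # per-instance fields: boxes, labels, scores, masks
```
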