Classification of Video Capsule Endoscopy Images Using Visual Transformers

Daniel Lopes Soares Lima, A. Pessoa, A. C. D. Paiva, António Cunha, Geraldo Braz Júnior, J. Almeida

2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), published 2022-09-27
DOI: 10.1109/BHI56158.2022.9926791
Citations: 2
Abstract
Cancers of the gastrointestinal tract have a high incidence rate in the population and a high mortality rate. Videos obtained through endoscopic capsules are essential for evaluating anomalies that can progress to cancer. However, because these videos can last up to 10 hours, their analysis demands great attention from the medical specialist. Machine learning techniques have been successfully applied to the development of computer-aided diagnostic systems since the 1990s, with Convolutional Neural Networks (CNNs) proving especially successful for pattern recognition in images. CNNs use convolutions to extract features from the analyzed data, operating within a fixed-size window, and thus have difficulty capturing pixel-level relationships across the spatial and temporal domains. By contrast, transformers use attention mechanisms, in which data is structured in a vector space that can aggregate information from adjacent data to determine meaning in a given context. This work proposes a computational method for analyzing images extracted from videos obtained by endoscopic capsules, using a transformer-based model that assists in the diagnosis of gastrointestinal tract abnormalities. Preliminary results are promising. The 11-class classification task, evaluated on the publicly available Kvasir-Capsule dataset, yielded an average accuracy of 99.70%, precision of 99.64%, sensitivity of 99.86%, and F1-score of 99.54%.
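The abstract contrasts the fixed-size window of convolutions with the attention mechanism of transformers, which weighs every token (image patch embedding) against every other token regardless of spatial distance. The paper itself does not include code; the following is a minimal sketch of single-head scaled dot-product attention over toy patch embeddings, using NumPy. All names and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Each output row is a weighted mix of *all* value rows, so a patch can
    aggregate information from any other patch, not just a local window.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights

# Three toy "patch embeddings" (tokens) with model dimension d_model = 4.
# In a real ViT these would come from linearly projecting flattened patches.
tokens = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0, 0.0]])

out, attn = scaled_dot_product_attention(tokens, tokens, tokens)
# attn has shape (3, 3): one attention distribution per token, each summing to 1.
# out has shape (3, 4): each token is now a context-aware mixture of all tokens.
```

In a full Vision Transformer this operation is repeated with multiple heads and learned query/key/value projections, and a classification head on a special token would produce the 11-class prediction reported in the paper.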