Real-Time, Multi-Task Mobile Application for Automatic Bleeding and Non-Bleeding Frame Analysis in Video Capsule Endoscopy Using an Ensemble of Faster R-CNN and LinkNet
IF 2.5 · CAS Tier 4 (Computer Science) · JCR Q2 (Engineering, Electrical & Electronic)
Authors: Divyansh Nautiyal, Manas Dhir, Tanisha Singh, Anushka Saini, Palak Handa
DOI: 10.1002/ima.70171
Journal: International Journal of Imaging Systems and Technology, Vol. 35, Issue 4
Published: 2025-07-22 (Journal Article)
URL: https://onlinelibrary.wiley.com/doi/10.1002/ima.70171
Citations: 0
Abstract
A real-time, multi-task mobile application for automatic bleeding and non-bleeding frame analysis in video capsule endoscopy (VCE) is critical for early diagnosis but remains underexplored. This study presents a mobile application, built with Flutter, that can automatically classify VCE frames as bleeding or non-bleeding and further identify and segment bleeding areas in real time. The application uses an ensemble deep learning model that integrates a Faster Region-based Convolutional Neural Network (Faster R-CNN) for frame-level classification and LinkNet for pixel-level segmentation. Faster R-CNN first detects and classifies VCE frames as bleeding or non-bleeding; LinkNet then segments the bleeding regions within the frames identified as bleeding. Both models were trained and validated on the publicly available WCEBleedGen dataset. To evaluate the effectiveness of the proposed ensemble, a comparative analysis was conducted against existing studies and state-of-the-art (SOTA) models in the field. For detection, Faster R-CNN was compared with two You Only Look Once (YOLO) variants, YOLOv5 and YOLOv12; for segmentation, LinkNet was compared with SegNet and UNet. Evaluation metrics included mean Average Precision at an IoU threshold of 0.5 (mAP@0.5), the Dice coefficient, and Eigen class activation maps. The mobile application achieved an average inference time of 2.88 s per frame and 23.33 s for a batch of 10 frames. Overall, the ensemble model attained an mAP@0.5 of 0.92 and a Dice coefficient of 0.96, outperforming existing studies. Among the SOTA models, Faster R-CNN outperformed the YOLO variants with a 25% higher mAP@0.5, while LinkNet achieved a Dice coefficient 26% higher than SegNet and 5% higher than UNet on the validation set, along with more focused Eigen maps for the different bleeding areas. This study represents the first attempt to develop a real-time, multi-task mobile application for VCE bleeding analysis.
The application is open-source and freely available at https://github.com/misahub2023/VCE-BleedGen-Application, supporting accessibility, reproducibility, and future research in this field.
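The two-stage ensemble described in the abstract (classify each frame first, segment only the frames flagged as bleeding) and the reported Dice metric can be sketched as follows. This is a minimal illustration, not the authors' implementation: `classify_frame` and `segment_bleeding` are hypothetical stand-ins for the Faster R-CNN and LinkNet stages, using simple red-channel heuristics only so the control flow is runnable end to end.

```python
import numpy as np

def classify_frame(frame):
    """Stand-in for the Faster R-CNN stage: returns True if the frame is
    predicted to contain bleeding. A real pipeline would run a detection
    model; here we flag frames whose red channel dominates on average."""
    red_dominance = frame[..., 0].mean() - frame[..., 1:].mean()
    return red_dominance > 30  # hypothetical threshold

def segment_bleeding(frame):
    """Stand-in for the LinkNet stage: returns a boolean mask of predicted
    bleeding pixels (here, pixels that are strongly red)."""
    return (frame[..., 0].astype(int) - frame[..., 1].astype(int)) > 60

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|): the segmentation metric the paper
    reports (ensemble Dice of 0.96)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def analyze_frame(frame):
    """Two-stage ensemble: segmentation runs only on frames the classifier
    flags as bleeding, mirroring the cascade described in the abstract."""
    if not classify_frame(frame):
        return {"bleeding": False, "mask": None}
    return {"bleeding": True, "mask": segment_bleeding(frame)}
```

The cascade design means the (slower) pixel-level model is skipped entirely for non-bleeding frames, which matters for on-device inference budgets like the per-frame times reported above.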
Journal Introduction:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision, based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.