A hybrid U-Net model with attention and advanced convolutional learning modules for simultaneous gland segmentation and cancer grade prediction in colorectal histopathological images
{"title":"A hybrid U-Net model with attention and advanced convolutional learning modules for simultaneous gland segmentation and cancer grade prediction in colorectal histopathological images","authors":"Manju Dabass , Jyoti Dabass , Sharda Vashisth , Rekha Vig","doi":"10.1016/j.ibmed.2023.100094","DOIUrl":null,"url":null,"abstract":"<div><p>In this proposed research work, a computerized Hybrid U-Net model for supplying colon glandular morphometric and cancer grade information is demonstrated. The solution is put forth by incorporating three distinctive structural elements—Advanced Convolutional Learning Modules, Attention Modules, and Multi-Scalar Transitional Modules—into the conventional U-Net architecture. By combining these modules, complex multi-level convolutional feature learning further encompassed with target-specified attention and increased effective receptive-field-size are produced. Three publicly accessible datasets—CRAG, GlaS challenge, LC-25000 dataset, and an internal, proprietary dataset Hospital Colon (HosC)—are used in experiments. The suggested model also produced competitive results for the gland detection and segmentation task in terms of Object-Dice Index as ((0.950 for CRAG), (GlaS: (0.951 for Test A & 0.902 for Test B)), (0.954 for LC-25000), (0.920 for HosC)), F1-score as ((0.921 for CRAG), (GlaS: (0.945 for Test A & 0.923 for Test B)), (0.913 for LC-25000), (0.955 for HosC)), and Object-Hausdorff Distance ((90.43 for CRAG), (GlaS: (23.11 for Test A & 71.47 for Test B)), (96.24 for LC-25000), (85.41 for HosC)). Pathologists evaluated the generated segmented glandular areas and assigned a mean score as ((9.25 for CRAG), (GlaS: (9.32 for Test A & 9.28 for Test B)), (9.12 for LC-25000) (9.14 for HosC)). The proposed model successfully completed the task of determining the cancer grade with the following results: Precision as ((0.9689 for CRAG), (0.9721 for GlaS), (1.0 for LC-25000), (1.0 for HosC)), Specificity (0.8895 for CRAG), (0.9710 for GlaS), (1.0 for LC-25000), (1.0 for HosC)), and Sensitivity ((0.9677 for CRAG), (0.9722 for GlaS), (0.9995 for LC-25000), (0.9932 for HosC)). Additionally, the Gradient-Weighted class activation mappings are provided to highlight the critical regions that the suggested model believes are essential for accurately predicting cancer. These visualizations are further reviewed by skilled pathologists and assigned with the mean scores as ((9.37 for CRAG), (9.29 for GlaS), (9.09 for LC-25000), and (9.91 for HosC)). By offering a referential opinion during the morphological assessment and diagnosis formulation in histopathology images, these results will help the pathologists and contribute towards reducing inadvertent human mistake and accelerating the cancer detection procedure.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"7 ","pages":"Article 100094"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligence-based medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S266652122300008X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6
Abstract
This work demonstrates a computerized hybrid U-Net model that supplies colon glandular morphometric information together with a cancer grade prediction. The solution incorporates three distinctive structural elements into the conventional U-Net architecture: Advanced Convolutional Learning Modules, Attention Modules, and Multi-Scalar Transitional Modules. Together, these modules provide complex multi-level convolutional feature learning coupled with target-specific attention and an enlarged effective receptive field. Experiments use three publicly accessible datasets (CRAG, the GlaS challenge, and LC-25000) and one internal, proprietary dataset, Hospital Colon (HosC).

For the gland detection and segmentation task, the proposed model produced competitive results:

- Object-Dice Index: 0.950 (CRAG); 0.951 / 0.902 (GlaS Test A / Test B); 0.954 (LC-25000); 0.920 (HosC)
- F1-score: 0.921 (CRAG); 0.945 / 0.923 (GlaS Test A / Test B); 0.913 (LC-25000); 0.955 (HosC)
- Object-Hausdorff Distance: 90.43 (CRAG); 23.11 / 71.47 (GlaS Test A / Test B); 96.24 (LC-25000); 85.41 (HosC)

Pathologists evaluated the generated segmented glandular areas and assigned mean scores of 9.25 (CRAG), 9.32 / 9.28 (GlaS Test A / Test B), 9.12 (LC-25000), and 9.14 (HosC).

For the cancer grade prediction task, the model achieved:

- Precision: 0.9689 (CRAG); 0.9721 (GlaS); 1.0 (LC-25000); 1.0 (HosC)
- Specificity: 0.8895 (CRAG); 0.9710 (GlaS); 1.0 (LC-25000); 1.0 (HosC)
- Sensitivity: 0.9677 (CRAG); 0.9722 (GlaS); 0.9995 (LC-25000); 0.9932 (HosC)

Additionally, Gradient-weighted Class Activation Mappings (Grad-CAM) are provided to highlight the regions the model considers critical for accurately predicting cancer grade. Skilled pathologists reviewed these visualizations and assigned mean scores of 9.37 (CRAG), 9.29 (GlaS), 9.09 (LC-25000), and 9.91 (HosC). By offering a referential opinion during morphological assessment and diagnosis formulation on histopathology images, these results can assist pathologists, help reduce inadvertent human error, and accelerate the cancer detection procedure.
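The paper's source code is not included here. As a rough illustration of how an attention module can gate U-Net skip connections (one of the three structural elements named above), the following is a minimal PyTorch sketch of a standard additive attention gate in the style of Attention U-Net; the class and parameter names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate for U-Net skip connections (illustrative sketch).

    g: gating signal from the decoder (coarser, semantically richer)
    x: skip-connection features from the encoder (finer spatial detail)
    The gate learns a per-pixel weight in [0, 1] that suppresses irrelevant
    encoder features before they are concatenated in the decoder.
    """

    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Assumes g and x share the same spatial size; upsample g first if not.
        att = self.relu(self.w_g(g) + self.w_x(x))   # joint feature map
        att = self.sigmoid(self.psi(att))            # per-pixel attention weights
        return x * att                               # gated skip features

# Usage: gate 64-channel encoder features with a 128-channel decoder signal.
gate = AttentionGate(g_channels=128, x_channels=64, inter_channels=32)
g = torch.randn(1, 128, 56, 56)
x = torch.randn(1, 64, 56, 56)
print(gate(g, x).shape)  # torch.Size([1, 64, 56, 56])
```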
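The Object-Dice Index and Object-Hausdorff Distance reported above are the object-level (per-gland) metric variants defined by the GlaS challenge. As a simplified sketch of the underlying quantities, the snippet below computes the pixel-level Dice coefficient and symmetric Hausdorff distance between two whole binary masks with NumPy and SciPy; the object-level versions additionally match predicted glands to ground-truth glands and average per-object scores.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the two foreground pixel sets."""
    p = np.argwhere(pred)   # (row, col) coordinates of foreground pixels
    t = np.argwhere(target)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Toy example: two overlapping square "glands" in a 64x64 mask.
pred = np.zeros((64, 64), dtype=np.uint8)
target = np.zeros((64, 64), dtype=np.uint8)
pred[10:40, 10:40] = 1
target[15:45, 15:45] = 1
print(dice_coefficient(pred, target))    # ~0.694
print(hausdorff_distance(pred, target))  # ~7.07
```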
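Gradient-weighted Class Activation Mapping (Grad-CAM) weights a convolutional layer's activation maps by the spatially averaged gradients of the target class score, localizing the evidence behind a prediction. The sketch below implements the generic technique in PyTorch; it uses an off-the-shelf resnet18 purely as a stand-in classifier, since the paper's hybrid U-Net is not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

class GradCAM:
    """Minimal Grad-CAM: weight a conv layer's activations by the
    spatially pooled gradient of the chosen class score."""

    def __init__(self, model: torch.nn.Module, target_layer: torch.nn.Module):
        self.model = model.eval()
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inputs, output):
        self.activations = output.detach()

    def _save_gradient(self, module, grad_input, grad_output):
        self.gradients = grad_output[0].detach()

    def __call__(self, image: torch.Tensor, class_idx: int) -> torch.Tensor:
        logits = self.model(image)
        self.model.zero_grad()
        logits[0, class_idx].backward()
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)  # pooled grads
        cam = F.relu((weights * self.activations).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        return (cam / cam.max().clamp(min=1e-8)).squeeze()  # normalize to [0, 1]

# Usage with the stand-in classifier (randomly initialized for demonstration).
model = resnet18(weights=None)
cam = GradCAM(model, model.layer4[-1])
heatmap = cam(torch.randn(1, 3, 224, 224), class_idx=0)
print(heatmap.shape)  # torch.Size([224, 224])
```

The resulting heatmap is typically overlaid on the input image so that reviewers, such as the pathologists scoring the visualizations above, can judge whether the highlighted regions are diagnostically meaningful.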