RunicNet: Leveraging CNNs With Attention Mechanisms for Cervical Cancer Cell Classification
Erin Beate Bjørkeli, Morteza Esmaeili
Biomedical Engineering and Computational Biology, vol. 16, 11795972251351815 (2025-07-17)
DOI: 10.1177/11795972251351815
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12271656/pdf/
Citations: 0
Abstract
Introduction: Early detection through routine screening methods, such as the Papanicolaou (Pap) test, is crucial for reducing cervical cancer mortality. However, the Pap smear method faces challenges including subjective interpretation, significant variability in diagnostic confidence, and high susceptibility to human error, leading to both false negatives (missed abnormalities) and false positives (unnecessary follow-up procedures). An automated first opinion could improve the screening pipeline and strengthen specialists' confidence in reporting. Artificial intelligence (AI)-based approaches have shown promise in automating cell classification, reducing human error, and identifying subtle abnormalities that may be missed by experts.
Methods: In this study, we present RunicNet, a CNN-based architecture with attention mechanisms designed to classify Pap smear cell images. RunicNet integrates several attention components: Residual Blocks enhanced with High-Frequency Attention Blocks for improved feature extraction, Pixel Attention for computational efficiency, and a Gated-Dconv Feed-Forward Network to refine image representation. The model was trained on a dataset of 85 080 cell images, employing data augmentation and class balancing techniques to address dataset imbalances.
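The paper does not give implementation details here, but the pixel attention component it names is commonly realized as a 1x1 convolution that produces a per-pixel gate in (0, 1), which then reweights the feature map elementwise. A minimal NumPy sketch of that general idea (the function name, weights, and shapes are illustrative assumptions, not RunicNet's actual layers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pixel_attention(x, w, b):
    """Hypothetical pixel-attention sketch (not the paper's code).

    A 1x1 convolution projects the C input channels to a single
    per-pixel score, a sigmoid squashes scores into gates in (0, 1),
    and the input feature map is reweighted elementwise.

    x: (C, H, W) feature map
    w: (C,) weights of the 1x1 convolution
    b: scalar bias
    """
    scores = np.tensordot(w, x, axes=([0], [0])) + b  # (H, W) per-pixel scores
    gate = sigmoid(scores)                            # per-pixel attention map
    return x * gate[None, :, :]                       # broadcast gate over channels
```

Because the gate is a single map shared across channels, this kind of attention adds very little computation relative to channel- or spatial-attention variants, which is consistent with the abstract's framing of pixel attention as an efficiency choice.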
Results: Evaluated on a separate testing dataset, RunicNet achieved a weighted F1-score of 0.78, significantly outperforming baseline models such as ResNet-18 (F1-score of 0.53) and a fully connected CNN (F1-score of 0.66).
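The weighted F1-score reported above is the per-class F1 averaged with weights proportional to each class's support (its count of true instances), which makes it appropriate for the imbalanced class distribution the Methods section describes. A small self-contained sketch of the metric itself (illustrative only, using no external libraries):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 averaged, weighting each class
    by its support (number of true instances of that class)."""
    support = Counter(y_true)
    total = len(y_true)
    f1_sum = 0.0
    for c in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        f1_sum += support[c] * f1
    return f1_sum / total
```

In practice this matches scikit-learn's `f1_score(..., average="weighted")`; it is shown here only to make the reported 0.78 figure concrete.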
Discussion: The findings support the potential of attention-based CNN models like RunicNet to significantly improve the accuracy and efficiency of cervical cancer screening. Integrating such AI systems into clinical workflows may enhance early detection and reduce diagnostic variability in Pap smear analysis.