{"title":"Robust assessment of cervical precancerous lesions from pre- and post-acetic acid cervicography by combining deep learning and medical guidelines","authors":"Siti Nurmaini , Patiyus Agustiyansyah , Muhammad Naufal Rachmatullah , Firdaus Firdaus , Annisa Darmawahyuni , Bambang Tutuko , Ade Iriani Sapitri , Anggun Islami , Akhiar Wista Arum , Rizal Sanif , Irawan Sastradinata , Legiran Legiran , Radiyati Umi Partan","doi":"10.1016/j.imu.2024.101609","DOIUrl":null,"url":null,"abstract":"<div><div>Cervical cancer remains a major public health challenge, particularly in low-resource settings where access to regular screening and expert medical evaluation is limited. Traditional visual inspection with acetic acid (VIA) has been widely used for cervical cancer screening but is subjective and highly dependent on the expertise of the healthcare provider. This study presents a comprehensive methodology for decision-making regarding cervical precancerous lesions using cervicograms taken before and after the application of acetic acid. By leveraging the power of the deep learning (DL) model with You Only Look Once (Yolo) version 8, Slicing Aided Hyper Inference (SAHI), and oncology medical guidelines, the system aims to improve the accuracy and consistency of VIA assessments. The method involves training a Yolov8xl model on our cervicogram dataset, annotated by two oncologists using VIA screening results, to distinguish between the cervical area, columnar area, and lesions. The model is designed to process cervicography images taken both before and after the application of acetic acid, capturing the dynamic changes in tissue appearance indicative of precancerous conditions. The automated evaluation system demonstrated high sensitivity and specificity in detecting cervical lesions with 90.78 % accuracy, 91.67 % sensitivity, and 90.96 % specificity, outperforming other existing methods. 
This work represents a significant step towards deploying AI-driven solutions in cervical cancer screening, potentially reducing the global burden of the disease. It can be integrated into existing screening programs, providing a valuable tool for early detection and intervention, especially in regions with limited access to trained medical personnel.</div></div>","PeriodicalId":13953,"journal":{"name":"Informatics in Medicine Unlocked","volume":"52 ","pages":"Article 101609"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Informatics in Medicine Unlocked","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352914824001667","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Medicine","Score":null,"Total":0}
Citations: 0
Abstract
Cervical cancer remains a major public health challenge, particularly in low-resource settings where access to regular screening and expert medical evaluation is limited. Traditional visual inspection with acetic acid (VIA) is widely used for cervical cancer screening but is subjective and highly dependent on the expertise of the healthcare provider. This study presents a comprehensive methodology for decision-making on cervical precancerous lesions using cervicograms taken before and after the application of acetic acid. By combining a deep learning (DL) model based on You Only Look Once (YOLO) version 8 with Slicing Aided Hyper Inference (SAHI) and oncology medical guidelines, the system aims to improve the accuracy and consistency of VIA assessments. The method involves training a Yolov8xl model on our cervicogram dataset, annotated by two oncologists using VIA screening results, to distinguish between the cervical area, the columnar area, and lesions. The model processes cervicography images taken both before and after the application of acetic acid, capturing the dynamic changes in tissue appearance that indicate precancerous conditions. The automated evaluation system detected cervical lesions with 90.78% accuracy, 91.67% sensitivity, and 90.96% specificity, outperforming existing methods. This work represents a significant step towards deploying AI-driven solutions in cervical cancer screening, potentially reducing the global burden of the disease. It can be integrated into existing screening programs as a valuable tool for early detection and intervention, especially in regions with limited access to trained medical personnel.
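The core idea of comparing detections before and after acetic acid can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the class label `lesion`, the confidence threshold, and the simple "appears or strengthens after acetic acid" rule are hypothetical, not the paper's published guideline logic, which combines the detector's outputs with oncology guidelines in a more involved way.

```python
# Hypothetical decision rule over pre/post-acetic-acid detections.
# Each detection is a (label, confidence) pair as a detector like
# YOLOv8 would emit; labels and threshold are illustrative only.

def assess(pre_detections, post_detections, threshold=0.5):
    """Flag a case as suspicious only when a lesion appears (or
    strengthens) after acetic acid is applied, mirroring how an
    acetowhite change is read in VIA screening."""
    def best(dets, label):
        # Highest confidence for the given class, 0.0 if absent.
        return max((c for l, c in dets if l == label), default=0.0)

    pre_lesion = best(pre_detections, "lesion")
    post_lesion = best(post_detections, "lesion")
    if post_lesion >= threshold and post_lesion > pre_lesion:
        return "suspicious"
    return "negative"

# A lesion visible only after acetic acid is flagged:
print(assess([("cervical_area", 0.9)],
             [("cervical_area", 0.9), ("lesion", 0.8)]))  # → suspicious
```

A real deployment would also restrict lesion detections to the cervical area found by the model and apply the guideline criteria the oncologists used for annotation.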
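The reported accuracy, sensitivity, and specificity follow the standard confusion-matrix definitions. A small sketch of how such metrics are computed (the counts below are illustrative, not the study's data):

```python
# Screening metrics from a binary confusion matrix:
# tp/fn relate to lesion-positive cases, tn/fp to lesion-negative cases.

def screening_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # true positive rate (recall)
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return accuracy, sensitivity, specificity

# Illustrative counts only:
acc, sens, spec = screening_metrics(tp=44, fp=9, tn=91, fn=4)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```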
About the journal:
Informatics in Medicine Unlocked (IMU) is an international gold open access journal covering a broad spectrum of topics within medical informatics, including (but not limited to) papers focusing on imaging, pathology, teledermatology, public health, ophthalmological, nursing and translational medicine informatics. The full papers that are published in the journal are accessible to all who visit the website.