Improved unsupervised 3D lung lesion detection and localization by fusing global and local features: Validation in 3D low-dose computed tomography

Ju Hwan Lee, Seong Je Oh, Kyungsu Kim, Chae Yeon Lim, Seung Hong Choi, Myung Jin Chung

Medical Image Analysis, Volume 103, Article 103559 (April 2025)
DOI: 10.1016/j.media.2025.103559
https://www.sciencedirect.com/science/article/pii/S1361841525001069
Citations: 0
Abstract
Unsupervised anomaly detection (UAD) is crucial in low-dose computed tomography (LDCT). Recent AI techniques that leverage global features have enabled effective UAD with minimal training data from normal patients. However, because this approach does not exploit local features, it struggles to detect deep lesions within the lungs: while the conventional use of global features can achieve high specificity, it often comes with limited sensitivity. A UAD model with high sensitivity is essential to prevent false negatives, especially when screening for diseases with high mortality rates. We have pioneered a new LDCT UAD model that leverages local features, achieving a previously unattainable increase in sensitivity over global methods (17.5% improvement). Furthermore, by integrating this approach with conventional global-feature techniques, we consolidate the advantages of each model (high sensitivity from the local model and high specificity from the global model) into a single, unified, trained model (17.6% and 33.5% improvement, respectively). Without additional training, we anticipate significant diagnostic efficacy with our fixed model across LDCT applications where both high sensitivity and specificity are essential. Code is available at https://github.com/kskim-phd/Fusion-UADL.
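The abstract does not spell out the fusion rule; the authors' actual implementation is in the linked repository. As a purely illustrative sketch (all function names, the patch size, and the weighted-sum fusion are assumptions, not the paper's method), one common way to combine a global and a local anomaly score from reconstruction errors is:

```python
import numpy as np

def global_score(volume, reconstruction):
    # Global anomaly score: mean reconstruction error over the whole
    # volume. Tends to be specific but insensitive to small lesions.
    return float(np.mean((volume - reconstruction) ** 2))

def local_score(volume, reconstruction, patch=8):
    # Local anomaly score: worst patch-wise mean reconstruction error.
    # Sensitive to small, deep lesions that a global average dilutes.
    err = (volume - reconstruction) ** 2
    d, h, w = (s - s % patch for s in err.shape)  # crop to patch multiples
    blocks = err[:d, :h, :w].reshape(
        d // patch, patch, h // patch, patch, w // patch, patch
    )
    return float(blocks.mean(axis=(1, 3, 5)).max())

def fused_score(volume, reconstruction, alpha=0.5):
    # Hypothetical fusion: a weighted sum keeping the specificity of the
    # global score and the sensitivity of the local score.
    g = global_score(volume, reconstruction)
    l = local_score(volume, reconstruction)
    return alpha * g + (1 - alpha) * l
```

A small bright region raises the local score far more than the global one, which is the intuition behind fusing the two rather than relying on either alone.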
About the Journal
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.