{"title":"Lung image segmentation with improved U-Net, V-Net and Seg-Net techniques.","authors":"Fuat Turk, Mahmut Kılıçaslan","doi":"10.7717/peerj-cs.2700","DOIUrl":null,"url":null,"abstract":"<p><p>Tuberculosis remains a significant health challenge worldwide, affecting a large population. Therefore, accurate diagnosis of this disease is a critical issue. With advancements in computer systems, imaging devices, and rapid progress in machine learning, tuberculosis diagnosis is being increasingly performed through image analysis. This study proposes three segmentation models based on U-Net, V-Net, and Seg-Net architectures to improve tuberculosis detection using the Shenzhen and Montgomery databases. These deep learning-based methods aim to enhance segmentation accuracy by employing advanced preprocessing techniques, attention mechanisms, and non-local blocks. Experimental results indicate that the proposed models outperform traditional approaches, particularly in terms of the Dice coefficient and accuracy values. The models have demonstrated robust performance on popular datasets. As a result, they contribute to more precise and reliable lung region segmentation, which is crucial for the accurate diagnosis of respiratory diseases like tuberculosis. In evaluations using various performance metrics, the proposed U-Net and V-Net models achieved Dice coefficient scores of 96.43% and 96.42%, respectively, proving their competitiveness and effectiveness in medical image analysis. 
These findings demonstrate that the Dice coefficient values of the proposed U-Net and V-Net models are more effective in tuberculosis segmentation than Seg-Net and other traditional methods.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e2700"},"PeriodicalIF":3.5000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888921/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.2700","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Tuberculosis remains a significant health challenge worldwide, affecting a large population, so accurate diagnosis of this disease is a critical issue. With advancements in computer systems and imaging devices, and rapid progress in machine learning, tuberculosis diagnosis is increasingly performed through image analysis. This study proposes three segmentation models based on the U-Net, V-Net, and Seg-Net architectures to improve tuberculosis detection using the Shenzhen and Montgomery databases. These deep learning-based methods aim to enhance segmentation accuracy by employing advanced preprocessing techniques, attention mechanisms, and non-local blocks. Experimental results indicate that the proposed models outperform traditional approaches, particularly in terms of the Dice coefficient and accuracy values. The models demonstrated robust performance on these popular datasets. As a result, they contribute to more precise and reliable lung region segmentation, which is crucial for the accurate diagnosis of respiratory diseases like tuberculosis. In evaluations using various performance metrics, the proposed U-Net and V-Net models achieved Dice coefficient scores of 96.43% and 96.42%, respectively, proving their competitiveness and effectiveness in medical image analysis. These findings demonstrate that the proposed U-Net and V-Net models are more effective in tuberculosis segmentation, as measured by the Dice coefficient, than Seg-Net and other traditional methods.
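The abstract reports model quality via the Dice coefficient, the standard overlap metric for segmentation masks: Dice(A, B) = 2|A ∩ B| / (|A| + |B|). As a minimal illustration (not the authors' evaluation code), the metric can be computed for binary lung masks like this; the function name and the toy masks are our own:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A|+|B|).

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 4x4 masks, 6 pixels each, 4 pixels shared.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True
target = np.zeros((4, 4), dtype=bool)
target[1:3, 0:3] = True
print(round(dice_coefficient(pred, target), 4))  # 2*4 / (6+6) -> 0.6667
```

A reported Dice of 96.43% thus means the predicted lung regions and the radiologist-drawn ground-truth masks overlap almost completely on average.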
About the journal:
PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.