International Journal of Imaging Systems and Technology: Latest Publications

A Novel Edge-Enhanced Networks for Optic Disc and Optic Cup Segmentation
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-20 DOI: 10.1002/ima.70019
Mingtao Liu, Yunyu Wang, Yuxuan Li, Shunbo Hu, Guodong Wang, Jing Wang
{"title":"A Novel Edge-Enhanced Networks for Optic Disc and Optic Cup Segmentation","authors":"Mingtao Liu,&nbsp;Yunyu Wang,&nbsp;Yuxuan Li,&nbsp;Shunbo Hu,&nbsp;Guodong Wang,&nbsp;Jing Wang","doi":"10.1002/ima.70019","DOIUrl":"https://doi.org/10.1002/ima.70019","url":null,"abstract":"<div>\u0000 \u0000 <p>Optic disc and optic cup segmentation plays a key role in early diagnosis of glaucoma which is a serious eye disease that can cause damage to the optic nerve, retina, and may cause permanent blindness. Deep learning-based models are used to improve the efficiency and accuracy of fundus image segmentation. However, most approaches currently still have limitations in accurately segmenting optic disc and optic cup, which suffer from the lack of feature abstraction representation and blurring of segmentation in edge regions. This paper proposes a novel edge enhancement network called EE-TransUNet to tackle this challenge. It incorporates the Cascaded Convolutional Fusion block before each decoder layer. This enhances the abstract representation of features and preserves the information of the original features, thereby improving the model's nonlinear fitting ability. Additionally, the Channel Shuffling Multiple Expansion Fusion block is incorporated into the skip connections of the model. This block enhances the network's ability to perceive and characterize image features, thereby improving segmentation accuracy at the edges of the optic cup and optic disc. We validate the effectiveness of the method by conducting experiments on three publicly available datasets, RIM-ONE-v3, REFUGUE and DRISHTI-GS. The Dice coefficients on the test set are 0.871, 0.9056, 0.9068 for the optic cup region and 0.9721, 0.967, 0.9774 for the optic disc region, respectively. The proposed method achieves competitive results compared to other state-of-the-art methods. Our code is available at: https://github.com/wangyunyuwyy/EE-TransUNet.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142868846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
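The abstract names two mechanisms worth unpacking: channel shuffling and multi-branch ("multiple expansion") fusion on the skip connections. The sketch below is a minimal, hypothetical PyTorch rendering of that combination (module and parameter names are ours, not the authors'): channels are interleaved across groups, then parallel dilated convolutions are fused back to the original width.

```python
# A minimal sketch, not the authors' code: channel shuffling (as in
# ShuffleNet) followed by parallel dilated branches fused on a skip
# connection. Dilation rates and group count are illustrative assumptions.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

class ShuffleDilatedFusion(nn.Module):
    """Hypothetical skip-connection block: shuffle channels, then fuse
    parallel dilated branches that enlarge the receptive field."""
    def __init__(self, channels: int, dilations=(1, 2, 4), groups: int = 4):
        super().__init__()
        self.groups = groups
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        x = channel_shuffle(x, self.groups)
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# Smoke test on a feature map shaped like a skip connection.
block = ShuffleDilatedFusion(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```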
Interactive Pulmonary Lobe Segmentation in CT Images Based on Oriented Derivative of Stick Filter and Surface Fitting Model
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-19 DOI: 10.1002/ima.70011
Yuanyuan Peng, Jiawei Liao, Xuemei Xu, Zixu Zhang, Siqiang Zhu
{"title":"Interactive Pulmonary Lobe Segmentation in CT Images Based on Oriented Derivative of Stick Filter and Surface Fitting Model","authors":"Yuanyuan Peng,&nbsp;Jiawei Liao,&nbsp;Xuemei Xu,&nbsp;Zixu Zhang,&nbsp;Siqiang Zhu","doi":"10.1002/ima.70011","DOIUrl":"https://doi.org/10.1002/ima.70011","url":null,"abstract":"<div>\u0000 \u0000 <p>Automated approaches for pulmonary lobe segmentation frequently encounter difficulties when applied to clinically significant cases, primarily stemming from factors such as incomplete and blurred pulmonary fissures, unpredictable pathological deformation, indistinguishable pulmonary arteries and veins, and severe damage to the lung trachea. To address these challenges, an interactive and intuitive approach utilizing an oriented derivative of stick (ODoS) filter and a surface fitting model is proposed to effectively extract and repair incomplete pulmonary fissures for accurate lung lobe segmentation in computed tomography (CT) images. First, an ODoS filter was employed in a two-dimensional (2D) space to enhance the visibility of pulmonary fissures using a triple-stick template to match the curvilinear structures across various orientations. Second, a three-dimensional (3D) post-processing pipeline based on a direction partition and integration approach was implemented for the initial detection of pulmonary fissures. Third, a coarse-to-fine segmentation strategy is utilized to eliminate extraneous clutter and rectify missed pulmonary fissures, thereby generating accurate pulmonary fissure segmentation. Finally, considering that pulmonary fissures serve as physical boundaries of the lung lobes, a multi-projection technique and surface fitting model were combined to generate a comprehensive fissure surface for pulmonary lobe segmentation. To assess the effectiveness of our approach, we actively participated in an internationally recognized lung lobe segmentation challenge known as LObe and Lung Analysis 2011 (LOLA11), which encompasses 55 CT scans. The validity of the proposed methodology was confirmed by its successful application to a publicly accessible challenge dataset. Overall, our method achieved an average intersection over union (IoU) of 0.913 for lung lobe segmentation, ranking seventh among all participants so far. Furthermore, experimental outcomes demonstrated excellent performance compared with other methods, as evidenced by both visual examination and quantitative evaluation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
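For readers unfamiliar with stick filtering, the following numpy sketch illustrates the underlying idea: convolve the image with short line ("stick") templates at many orientations and keep the strongest response per pixel, which highlights curvilinear structures such as fissures. It is a generic simplification, not the paper's ODoS filter; the derivative step and the triple-stick template are omitted.

```python
# A generic stick-filter sketch (illustration only, under simplifying
# assumptions): line templates at several orientations, max response.
import numpy as np
from scipy.ndimage import convolve

def stick_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Binary line segment of given length and orientation, L1-normalized."""
    k = np.zeros((length, length), dtype=float)
    c = length // 2
    theta = np.deg2rad(angle_deg)
    for t in range(-c, c + 1):
        r = int(round(c + t * np.sin(theta)))
        s = int(round(c + t * np.cos(theta)))
        k[r, s] = 1.0
    return k / k.sum()

def stick_filter(image: np.ndarray, length: int = 9, n_angles: int = 12):
    """Maximum stick response over orientations (fissure-like enhancement)."""
    responses = [
        convolve(image, stick_kernel(length, a), mode="nearest")
        for a in np.linspace(0.0, 180.0, n_angles, endpoint=False)
    ]
    return np.max(responses, axis=0)

# Toy usage on random data standing in for a CT slice.
slice_2d = np.random.rand(64, 64)
print(stick_filter(slice_2d).shape)  # (64, 64)
```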
Relation Explore Convolutional Block Attention Module for Skin Lesion Classification
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-19 DOI: 10.1002/ima.70002
Qichen Su, Haza Nuzly Abdull Hamed, Dazhuo Zhou
{"title":"Relation Explore Convolutional Block Attention Module for Skin Lesion Classification","authors":"Qichen Su,&nbsp;Haza Nuzly Abdull Hamed,&nbsp;Dazhuo Zhou","doi":"10.1002/ima.70002","DOIUrl":"https://doi.org/10.1002/ima.70002","url":null,"abstract":"<div>\u0000 \u0000 <p>Skin cancer remains a significant global health concern, demanding accurate and efficient diagnostic solutions. Despite advances in convolutional neural networks for computer vision, automated skin lesion diagnosis remains challenging due to the small lesion region in images and limited inter-class variation. Accurate classification depends on precise lesion localization and recognition of fine-grained visual differences. To address these challenges, this paper proposes an enhancement to the Convolutional Block Attention Module, referred to as Relation Explore Convolutional Block Attention Module. This enhancement improves upon the existing module by utilizing multiple combinations of pooling-based attentions, enabling the model to better learn and leverage complex interactions during training. Extensive experiments are conducted to investigate the performance of skin lesion diagnosis when integrating Relation Explore Convolutional Block Attention Module with ResNet50 at different stages. The best-performing model achieves outstanding classification results on the publicly available HAM10000 dataset, with an Accuracy of 97.63%, Precision of 88.98%, Sensitivity of 82.86%, Specificity of 97.65%, and F1-score of 85.46%, using fivefold cross-validation. The high performance of this model, alongside the clear interpretability provided by its attention maps, builds trust in automated systems. This trust empowers clinicians to make well-informed decisions, significantly enhancing the potential for improved patient outcomes.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
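As a hedged illustration of "multiple combinations of pooling-based attentions", here is a CBAM-style channel-attention gate extended with a third pooling statistic. The std-pooling branch and all names are illustrative assumptions, not the paper's exact design.

```python
# A CBAM-style channel-attention sketch that sums several pooling
# statistics through a shared MLP. The std branch is our assumption.
import torch
import torch.nn as nn

class MultiPoolChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(           # shared MLP, as in CBAM
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        flat = x.flatten(2)                  # (b, c, h*w)
        # Three pooling views of each channel: mean, max, std.
        pooled = [flat.mean(-1), flat.amax(-1), flat.std(-1)]
        attn = sum(self.mlp(p) for p in pooled)
        return x * torch.sigmoid(attn).view(b, c, 1, 1)

x = torch.randn(2, 64, 28, 28)
print(MultiPoolChannelAttention(64)(x).shape)  # torch.Size([2, 64, 28, 28])
```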
Microaneurysm Detection With Multiscale Attention and Trident RPN
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-19 DOI: 10.1002/ima.70015
Jiawen Lin, Shilin Liu, Meiyan Mao, Susu Chen
{"title":"Microaneurysm Detection With Multiscale Attention and Trident RPN","authors":"Jiawen Lin,&nbsp;Shilin Liu,&nbsp;Meiyan Mao,&nbsp;Susu Chen","doi":"10.1002/ima.70015","DOIUrl":"https://doi.org/10.1002/ima.70015","url":null,"abstract":"<div>\u0000 \u0000 <p>Diabetic retinopathy (DR) is the most serious and common complication of diabetes. Microaneurysm (MA) detection is of great importance for DR screening by providing the earliest indicator of presence of DR. Extremely small size of MAs, low color contrast in fundus images, and the interference from blood vessels and other lesions with similar characteristics make MA detection still challenging. In this paper, a novel two-stage MA detector with multiscale attention and trident Region proposal network (RPN) is proposed. A scale selection pyramid network based on the attention mechanism is established to improve detection performance on the small objects by reducing the gradient inconsistency between low and high level features. Meanwhile, a trident RPN with three-branch parallel feature enhance head is designed to promote more distinguishing learning, further reducing the misrecognition. The proposed method is validated on IDRiD, e-ophtha, and ROC datasets with the average scores of 0.516, 0.646, and 0.245, respectively, achieving the best or nearly optimal performance compared to the state-of-the-arts. Besides, the proposed MA detector illustrates a more balanced performance on the three datasets, showing strong generalization.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
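The "trident RPN with a three-branch parallel feature-enhancement head" suggests a TridentNet-style construction. As a loose sketch under that assumption (not the paper's actual head), the module below applies one shared 3x3 weight at three dilation rates, so each branch sees a different receptive field and thus a different object scale.

```python
# A trident-style three-branch head sketch: shared convolution weights
# applied at different dilation rates. Illustrative assumption only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TridentHead(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        self.dilations = dilations
        # One shared 3x3 weight reused by all three branches.
        self.weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.weight)

    def forward(self, x):
        return [
            F.relu(F.conv2d(x, self.weight, padding=d, dilation=d))
            for d in self.dilations
        ]

feats = TridentHead(64)(torch.randn(1, 64, 40, 40))
print([f.shape for f in feats])  # three (1, 64, 40, 40) maps
```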
C-TUnet: A CNN-Transformer Architecture-Based Ultrasound Breast Image Classification Network
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-17 DOI: 10.1002/ima.70014
Ying Wu, Faming Li, Bo Xu
{"title":"C-TUnet: A CNN-Transformer Architecture-Based Ultrasound Breast Image Classification Network","authors":"Ying Wu,&nbsp;Faming Li,&nbsp;Bo Xu","doi":"10.1002/ima.70014","DOIUrl":"https://doi.org/10.1002/ima.70014","url":null,"abstract":"<div>\u0000 \u0000 <p>Ultrasound breast image classification plays a crucial role in the early detection of breast cancer, particularly in differentiating benign from malignant lesions. Traditional methods face limitations in feature extraction and global information capture, often resulting in lower accuracy for complex and noisy ultrasound images. This paper introduces a novel ultrasound breast image classification network, C-TUnet, which combines a convolutional neural network (CNN) with a Transformer architecture. In this model, the CNN module initially extracts key features from ultrasound images, followed by the Transformer module, which captures global context information to enhance classification accuracy. Experimental results demonstrate that the proposed model achieves excellent classification performance on public datasets, showing clear advantages over traditional methods. Our analysis confirms the effectiveness of combining CNN and Transformer modules—a strategy that not only boosts the accuracy and robustness of ultrasound breast image classification but also offers a reliable tool for clinical diagnostics, holding substantial potential for real-world application.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
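A minimal sketch of the CNN-then-Transformer pattern the abstract describes: convolutional stages extract local features, a Transformer encoder models global context over the resulting tokens, and a linear head produces the benign/malignant logits. All layer sizes are assumptions, not C-TUnet's actual configuration.

```python
# Hybrid CNN + Transformer classifier sketch (assumed sizes, not the
# paper's configuration).
import torch
import torch.nn as nn

class CnnTransformerClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(            # local feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (b, 1, H, W) ultrasound
        f = self.cnn(x)                        # (b, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (b, N, dim) patch tokens
        tokens = self.encoder(tokens)          # global context
        return self.head(tokens.mean(dim=1))   # mean-pooled logits

logits = CnnTransformerClassifier()(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 2])
```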
YOLOv8 Outperforms Traditional CNN Models in Mammography Classification: Insights From a Multi-Institutional Dataset
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-16 DOI: 10.1002/ima.70008
Erfan AkbarnezhadSany, Hossein EntezariZarch, Mohammad AlipoorKermani, Baharak Shahin, Mohsen Cheki, Aida Karami, Samaneh Zahedi, Zahra AhmadPour, Sadegh Ahmadi-Mazhin, Ali Rahimnezhad, Sahar Sayfollahi, Salar Bijari, Melika Shojaee, Seyed Masoud Rezaeijo
{"title":"YOLOv8 Outperforms Traditional CNN Models in Mammography Classification: Insights From a Multi-Institutional Dataset","authors":"Erfan AkbarnezhadSany,&nbsp;Hossein EntezariZarch,&nbsp;Mohammad AlipoorKermani,&nbsp;Baharak Shahin,&nbsp;Mohsen Cheki,&nbsp;Aida Karami,&nbsp;Samaneh Zahedi,&nbsp;Zahra AhmadPour,&nbsp;Sadegh Ahmadi-Mazhin,&nbsp;Ali Rahimnezhad,&nbsp;Sahar Sayfollahi,&nbsp;Salar Bijari,&nbsp;Melika Shojaee,&nbsp;Seyed Masoud Rezaeijo","doi":"10.1002/ima.70008","DOIUrl":"https://doi.org/10.1002/ima.70008","url":null,"abstract":"<div>\u0000 \u0000 <p>This study evaluates the efficacy of four deep learning methods—YOLOv8, VGG16, ResNet101, and EfficientNet—for classifying mammography images into normal, benign, and malignant categories using a large-scale, multi-institutional dataset. Each dataset was divided into training and testing groups with an 80%/20% split, ensuring that all examinations from the same patient were consistently allocated to the same split. The training set for the malignant class contained 10 220 images, the benign class 6086 images, and the normal class 8526 images. For testing, the malignant class had 1441 images, the benign class 1124 images, and the normal class 1881 images. All models were fine-tuned using transfer learning and standardized to 224 × 224 pixels with data augmentation techniques to improve robustness. Among the models, YOLOv8 demonstrated the highest performance, achieving an AUC of 93.33% for the training dataset and 91% for the testing dataset. It also exhibited superior accuracy (91.82% training, 86.68% testing), F1-score (91.11% training, 84.86% testing), and specificity (95.80% training, 93.32% testing). ResNet101, VGG16, and EfficientNet also performed well, with ResNet101 achieving an AUC of 91.67% (training) and 90.00% (testing). Grad-CAM visualizations were used to identify the regions most influential in model decision-making. This multi-model evaluation highlights YOLOv8's potential for accurately classifying mammograms, while demonstrating that all models contribute valuable insights for improving breast cancer detection. Future clinical trials will focus on refining these models to assist healthcare professionals in delivering accurate and timely diagnoses.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
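For orientation, fine-tuning a YOLOv8 classifier along the lines described above takes only a few calls to the ultralytics package. The dataset path, file name, and hyperparameters below are placeholders, not the study's configuration.

```python
# A minimal fine-tuning sketch with ultralytics YOLOv8 classification
# weights (placeholder paths and hyperparameters, not the study's setup).
from ultralytics import YOLO

# Start from ImageNet-pretrained classification weights.
model = YOLO("yolov8n-cls.pt")

# For classification, `data` points to a folder with train/ and val/
# subdirectories, one subfolder per class (normal/benign/malignant).
model.train(data="mammography_dataset", epochs=50, imgsz=224)

metrics = model.val()           # accuracy metrics on the val split
preds = model("case_0001.png")  # class probabilities for one image
```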
DBE-Net: A Dual-Branch Boundary Enhancement Network for Pathological Image Segmentation
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-16 DOI: 10.1002/ima.70017
Zefeng Liu, Zhenyu Liu
{"title":"DBE-Net: A Dual-Branch Boundary Enhancement Network for Pathological Image Segmentation","authors":"Zefeng Liu,&nbsp;Zhenyu Liu","doi":"10.1002/ima.70017","DOIUrl":"https://doi.org/10.1002/ima.70017","url":null,"abstract":"<div>\u0000 \u0000 <p>Pathological image segmentation provides support for the accurate assessment of lesion area by precisely segmenting various tissues and cellular structures in pathological images. Due to the unclear boundaries between targets and backgrounds, as well as the information loss during upsampling and downsampling operations, it remains a challenging task to identify boundary details, especially in differentiating between adjacent tissues, minor lesions, or clustered cell nuclei. In this paper, a Dual-branch Boundary Enhancement Network (DBE-Net) is proposed to improve the sensitivity of the model to the boundary. Firstly, the proposed method includes a main task and an auxiliary task. The main task focuses on segmenting the target object and the auxiliary task is dedicated to extracting boundary information. Secondly, a feature processing architecture is established which includes three modules: Feature Preservation (FP), Feature Fusion (FF), and Hybrid Attention Fusion (HAF) module. The FP module and the FF module are used to provide original information for the encoder and fuse information from every layer of the decoder. The HAF is introduced to replace the skip connections between the encoder and decoder. Finally, a boundary-dependent loss function is designed to simultaneously optimize both tasks for the dual-branch network. The proposed loss function enhances the dependence of the main task on the boundary information supplied by the auxiliary task. The proposed method has been validated on three datasets, including Glas, CoCaHis, and CoNSep dataset.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
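A hedged sketch of what a "boundary-dependent loss function" for a dual-branch network can look like: the main branch is supervised on the region mask, the auxiliary branch on a boundary map derived from that mask, and a weight couples the two. The morphology-based boundary extraction and the weighting scheme are our assumptions, not DBE-Net's exact formulation.

```python
# Dual-task loss sketch: region supervision + boundary supervision.
# Boundary maps come from max-pool based dilation/erosion of the mask.
import torch
import torch.nn.functional as F

def boundary_map(mask: torch.Tensor) -> torch.Tensor:
    """Soft boundary of a binary mask via max-pool morphology."""
    dilated = F.max_pool2d(mask, 3, stride=1, padding=1)
    eroded = -F.max_pool2d(-mask, 3, stride=1, padding=1)
    return (dilated - eroded).clamp(0, 1)

def dual_branch_loss(region_logits, edge_logits, mask, edge_weight=0.5):
    edges = boundary_map(mask)
    region_loss = F.binary_cross_entropy_with_logits(region_logits, mask)
    edge_loss = F.binary_cross_entropy_with_logits(edge_logits, edges)
    return region_loss + edge_weight * edge_loss   # coupled objectives

mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
loss = dual_branch_loss(torch.randn(1, 1, 64, 64),
                        torch.randn(1, 1, 64, 64), mask)
print(loss.item())
```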
Deep Depthwise Residual Network for Knee Meniscus Segmentation From Magnetic Resonance Imaging
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-16 DOI: 10.1002/ima.70006
Anita Thengade, A. M. Rajurkar, Sanjay N. Talbar
{"title":"Deep Depthwise Residual Network for Knee Meniscus Segmentation From Magnetic Resonance Imaging","authors":"Anita Thengade,&nbsp;A. M. Rajurkar,&nbsp;Sanjay N. Talbar","doi":"10.1002/ima.70006","DOIUrl":"https://doi.org/10.1002/ima.70006","url":null,"abstract":"<div>\u0000 \u0000 <p>The menisci within the knee are essential for various anatomical functions, including load-bearing, joint stability, cartilage protection, shock absorption, and lubrication. Magnetic resonance imaging (MRI) provides highly detailed images of internal organs and soft tissues, which are indispensable for physicians and radiologists assessing the meniscus. Given the multitude of images in each MRI sequence and diverse MRI data, the segmentation of the meniscus presents considerable challenges through image processing methods. The region-specific characteristics of the meniscus can vary from one image to another within the sequence. Consequently, achieving automatic and accurate segmentation of meniscus in knee MRI images is a crucial step in meniscus analysis. This paper introduces the “UNet with depthwise residual network” (DR-UNet), a depthwise convolutional neural network, designed specifically for meniscus segmentation in MRI images. The proposed architecture significantly improves the accuracy of meniscus segmentation compared to different segmentation networks. The training and testing phases utilized fat suppression turbo-spin-echo (FS TSE) MRI sequences collected from 100 distinct knee joints using a Siemens 3 Tesla MRI machine. Additionally, we employed data augmentation techniques to expand the dataset strategically, addressing the challenge of a substantial training dataset requirement. The DR-UNet model demonstrated impressive meniscus segmentation performance, achieving a Dice similarity coefficient range of 0.743–0.9646 and a Jaccard index range of 0.653–0.869, thereby showcasing its advanced segmentation capabilities.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
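The name suggests depthwise-separable convolutions inside residual blocks. A minimal PyTorch sketch of that pattern, under that assumption (the paper's exact block layout may differ), is a per-channel 3x3 depthwise convolution, a 1x1 pointwise mix, and an identity shortcut.

```python
# Depthwise residual block sketch: depthwise 3x3 (groups=channels),
# pointwise 1x1, identity shortcut. Illustrative, not the paper's block.
import torch
import torch.nn as nn

class DepthwiseResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1,
                                   groups=channels)      # per-channel filter
        self.pointwise = nn.Conv2d(channels, channels, 1)  # mix channels
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return x + out   # residual connection

x = torch.randn(1, 64, 96, 96)
print(DepthwiseResidualBlock(64)(x).shape)  # torch.Size([1, 64, 96, 96])
```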
Synthesizing Images With Annotations for Medical Image Segmentation Using Diffusion Probabilistic Model
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-14 DOI: 10.1002/ima.70007
Zengan Huang, Qinzhu Yang, Mu Tian, Yi Gao
{"title":"Synthesizing Images With Annotations for Medical Image Segmentation Using Diffusion Probabilistic Model","authors":"Zengan Huang,&nbsp;Qinzhu Yang,&nbsp;Mu Tian,&nbsp;Yi Gao","doi":"10.1002/ima.70007","DOIUrl":"https://doi.org/10.1002/ima.70007","url":null,"abstract":"<p>To alleviate the burden of manual annotation, there are numerous excellent segmentation models for images segmentation being developed. However, the performance of these data-driven segmentation models is frequently constrained by the availability of samples sizes of pair medical images and segmentation annotations. Therefore, to address this challenge, this study introduces the medical image segmentation augmentation diffusion model (MEDSAD). MEDSAD solves the problem of annotation scarcity by utilizing a given simple annotation to generate paired medical images. To improve stability, we used the traditional diffusion model for this study. To exert better control over the texture synthesis in the medical images generated by MEDSAD, the texture style injection (TSI) mechanism is introduced. Additionally, we propose the feature frequency domain attention (FFDA) module to mitigate the adverse effects of high-frequency noise during generation. The efficacy of MEDSAD is substantiated through the validation of three distinct medical segmentation tasks encompassing magnetic resonance (MR) and ultrasound (US) imaging modalities, focusing on the segmentation of breast tumors, brain tumors, and nerve structures. The findings demonstrate the MEDSAD model's proficiency in synthesizing medical image pairs based on provided annotations, thereby facilitating a notable augmentation in performance for subsequent segmentation tasks. Moreover, the improvement in performance becomes greater as the quantity of synthetic available data samples increases. This underscores the robust generalization capability and efficacy intrinsic to the MEDSAD model, potentially offering avenues for future explorations in data-driven model training and research.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
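MEDSAD builds on the traditional (DDPM-style) diffusion model. The sketch below shows the standard training objective such a model rests on: noise an image to a random timestep via the closed-form forward process and train a denoiser, here conditioned on the annotation by channel concatenation, to predict that noise. The conditioning scheme and the stub denoiser are illustrative assumptions; TSI and FFDA are not modeled.

```python
# Standard DDPM training-step sketch with mask conditioning by
# concatenation (assumed conditioning, not MEDSAD's exact design).
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative alpha_t

def training_step(denoiser, x0, mask):
    """One loss evaluation for mask-conditioned image synthesis."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # q(x_t | x_0) sample
    pred = denoiser(torch.cat([x_t, mask], dim=1), t)
    return F.mse_loss(pred, eps)                  # predict the noise

# Stand-in denoiser: a real model would be a time-conditioned U-Net.
denoiser = lambda x, t: torch.zeros(x.size(0), 1, *x.shape[2:])
loss = training_step(denoiser, torch.randn(2, 1, 64, 64),
                     torch.rand(2, 1, 64, 64))
print(loss.item())
```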
A Three-Step Automated Segmentation Method for Early Cervical Cancer MRI Images Based on Deep Learning
IF 3.0, Region 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date: 2024-12-13 DOI: 10.1002/ima.23207
Liu Xiong, Chunxia Chen, Yongping Lin, Zhiyu Song, Jialin Su
{"title":"A Three-Step Automated Segmentation Method for Early Cervical Cancer MRI Images Based on Deep Learning","authors":"Liu Xiong,&nbsp;Chunxia Chen,&nbsp;Yongping Lin,&nbsp;Zhiyu Song,&nbsp;Jialin Su","doi":"10.1002/ima.23207","DOIUrl":"https://doi.org/10.1002/ima.23207","url":null,"abstract":"<div>\u0000 \u0000 <p>Tumor detection and segmentation are essential for cervical cancer (CC) treatment and diagnosis. This study presents a model that segmented the tumor, uterus, and vagina based on deep learning automatically on magnetic resonance imaging (MRI) images of patients with CC. The tumor detection dataset consists of 68 CC patients' diffusion-weighted magnetic resonance imaging (DWI) images. The segmented dataset consists of 73 CC patients' T2-weighted imaging (T2WI) images. First, the three clear images of the patient's DWI images are detected using a single-shot multibox detector (SSD). Second, the serial number of the clearest image is obtained by scores, while the corresponding T2WI image with the same serial number is selected. Third, the selected images are segmented by employing the semantic segmentation (U-Net) model with the squeeze-and-excitation (SE) block and attention gate (SE-ATT-Unet). Three segmentation models are implemented to automatically segment the tumor, uterus, and vagina separately by adding different attention mechanisms at different locations. The target detection accuracy of the model is 92.32%, and the selection accuracy is 90.9%. The dice similarity coefficient (DSC) on the tumor is 92.20%, pixel accuracy (PA) is 93.08%, and the mean Hausdorff distance (HD) is 3.41 mm. The DSC on the uterus is 93.63%, PA is 91.75%, and the mean HD is 9.79 mm. The DSC on the vagina is 75.70%, PA is 85.46%, and the mean HD is 10.52 mm. The results show that the proposed method accurately selects images for segmentation, and the SE-ATT-Unet is effective in segmenting different regions on MRI images.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
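The squeeze-and-excitation (SE) block that SE-ATT-Unet incorporates has a standard formulation, sketched below: global-average-pool each channel, pass the vector through a bottleneck MLP, and rescale the channels with the resulting gates. Where the block sits inside the U-Net is the paper's contribution and is not reproduced here.

```python
# Standard SE block sketch: squeeze (global average pool), excite
# (bottleneck MLP + sigmoid), rescale channels.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        squeeze = x.mean(dim=(2, 3))              # global average pool
        scale = self.fc(squeeze).view(b, c, 1, 1)
        return x * scale                          # channel-wise excitation

x = torch.randn(1, 32, 64, 64)
print(SEBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```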