Journal of imaging informatics in medicine: Latest Articles

A Thyroid Nodule Ultrasound Image Grading Model Integrating Medical Prior Knowledge.
Journal of imaging informatics in medicine Pub Date : 2025-03-10 DOI: 10.1007/s10278-024-01120-y
Hua Chen, Chong Liu, Xiaoshi Cheng, Chenjun Jiang, Ying Wang
In recent years, there has been increasing research on computer-aided diagnosis (CAD) using deep learning and image processing techniques. Still, most studies have focused on the benign-malignant classification of nodules. In this study, we propose an integrated architecture for grading thyroid nodules based on the Chinese Thyroid Imaging Reporting and Data System (C-TIRADS). The method combines traditional handcrafted features with deep features in the extraction process. In the preprocessing stage, a pseudo-artifact removal algorithm based on the fast marching method (FMM) is employed, followed by hybrid median filtering for noise reduction. Contrast-limited adaptive histogram equalization is used for contrast enhancement to restore and enhance the information in ultrasound images. In the feature extraction stage, an improved ShuffleNetV2 network with a multi-head self-attention mechanism is selected, and its extracted features are fused with medical prior knowledge features. Finally, a multi-class classification task is performed using the eXtreme Gradient Boosting (XGBoost) classifier. The dataset used in this study consists of 922 original images: 149 examples of class 2, 140 of class 3, 156 of class 4A, 114 of class 4B, 123 of class 4C, and 240 of class 5. The model is trained for 2000 epochs. The accuracy, precision, recall, F1 score, and AUC of the proposed method are 97.17%, 97.65%, 97.17%, 0.9834, and 0.9855, respectively. The results demonstrate that the fusion of medical prior knowledge based on C-TIRADS and deep features from convolutional neural networks can effectively improve the overall performance of thyroid nodule diagnosis, providing a new feasible solution for developing clinical CAD systems for thyroid nodule ultrasound diagnosis.
Citations: 0
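The fusion step described above, concatenating deep CNN features with handcrafted prior-knowledge features before the XGBoost classifier, can be sketched as follows. The array shapes and the feature-dimension values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fuse_features(deep_feats: np.ndarray, prior_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-image deep features with handcrafted prior-knowledge
    features along the feature axis, yielding one fused vector per image."""
    assert deep_feats.shape[0] == prior_feats.shape[0], "one row per image"
    return np.concatenate([deep_feats, prior_feats], axis=1)

# Illustrative shapes: 922 images, a hypothetical 1024-dim deep feature
# vector from the CNN, and 8 handcrafted C-TIRADS-style features.
deep = np.random.rand(922, 1024)
prior = np.random.rand(922, 8)
fused = fuse_features(deep, prior)  # shape (922, 1032), fed to the classifier
```

The fused matrix would then be passed to any tabular classifier such as XGBoost.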
I-BrainNet: Deep Learning and Internet of Things (DL/IoT)-Based Framework for the Classification of Brain Tumor.
Journal of imaging informatics in medicine Pub Date : 2025-03-10 DOI: 10.1007/s10278-025-01470-1
Abdullahi Umar Ibrahim, Glodie Mpia Engo, Ibrahim Ame, Chidi Wilson Nwekwo, Fadi Al-Turjman
Brain tumors are categorized among the most fatal forms of cancer due to their location and diagnostic difficulty. Medical experts rely on two key approaches: biopsy and MRI. However, these techniques have several setbacks, including the need for medical experts, inaccuracy, and misdiagnosis as a result of anxiety or workload, which may lead to patient morbidity and mortality. This opens a gap for precise diagnosis and staging to guide appropriate clinical decisions. In this study, we propose the application of deep learning (DL)-based techniques for the classification of MRI vs. non-MRI and tumor vs. no tumor. To accurately discriminate between classes, we acquired brain tumor multimodal image (CT and MRI) datasets comprising 9616 MRI and CT scans, of which 8000 are selected for discrimination between MRI and non-MRI and 4000 for discrimination between tumor and no-tumor cases. The acquired images undergo image pre-processing, data splitting, data augmentation, and model training. The images are trained using four DL networks: MobileNetV2, ResNet, InceptionV3, and VGG16. Performance evaluation of the DL architectures and comparative analysis show that pre-trained MobileNetV2 achieved the best result across all metrics, with 99.94% accuracy for discrimination between MRI and non-MRI and 99.00% for discrimination between tumor and no tumor. Moreover, I-BrainNet, a DL/IoT-based framework, is developed for the real-time classification of brain tumors.
Citations: 0
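The two binary discrimination tasks above naturally chain into a cascade: stage 1 rejects non-MRI inputs, and stage 2 runs only on confirmed MRI scans. A minimal sketch, where the lambda stubs are placeholders standing in for the trained networks:

```python
def classify_scan(image, is_mri, has_tumor):
    """Two-stage cascade: stage 1 filters out non-MRI inputs, stage 2 is
    applied only to confirmed MRI scans. `is_mri` and `has_tumor` are
    callables standing in for the trained binary classifiers."""
    if not is_mri(image):
        return "non-MRI"
    return "tumor" if has_tumor(image) else "no tumor"

# Toy stand-ins: treat dicts with 'modality' and 'lesion' keys as scans.
result = classify_scan(
    {"modality": "MRI", "lesion": True},
    is_mri=lambda im: im["modality"] == "MRI",
    has_tumor=lambda im: im["lesion"],
)
# result == "tumor"
```

A cascade like this keeps each model's task narrow, which is one plausible reason the per-stage accuracies reported above are so high.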
SADiff: A Sinogram-Aware Diffusion Model for Low-Dose CT Image Denoising.
Journal of imaging informatics in medicine Pub Date : 2025-03-10 DOI: 10.1007/s10278-025-01469-8
Farzan Niknejad Mazandarani, Paul Babyn, Javad Alirezaie
CT image denoising is a crucial task in medical imaging systems, aimed at enhancing the quality of acquired visual signals. The emergence of diffusion models in machine learning has revolutionized the generation of high-quality CT images. However, diffusion-based CT image denoising methods suffer from two key shortcomings. First, they do not incorporate image formation priors from CT imaging, which limits their adaptability to the CT image denoising task. Second, they are trained on CT images with varying structures and textures at the signal phase, which hinders the model's generalization capability. To address the first limitation, we propose a novel conditioning module for our diffusion model that leverages image formation priors from the sinogram domain to generate rich features. To tackle the second issue, we introduce a two-phase training mechanism in which the network gradually learns different anatomical textures and structures. Extensive experimental results demonstrate the effectiveness of both approaches in enhancing CT image quality, with improvements of up to 17% in PSNR and 38% in SSIM, highlighting their superiority over state-of-the-art methods.
Citations: 0
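PSNR, one of the two quality metrics reported above, is simple to compute from a reference and a denoised estimate. A minimal version, assuming images scaled to [0, 1]:

```python
import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better,
    infinite for identical images."""
    mse = np.mean((reference - denoised) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((64, 64))
noisy = ref + 0.1          # uniform error of 0.1 -> MSE = 0.01
print(psnr(ref, noisy))    # 10 * log10(1 / 0.01) = 20.0 dB
```

A "17% improvement in PSNR" in the abstract refers to a relative gain in this dB value over the compared baselines.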
Robust Automatic Grading of Blunt Liver Trauma in Contrast-Enhanced Ultrasound Using Label-Noise-Resistant Models.
Journal of imaging informatics in medicine Pub Date : 2025-03-10 DOI: 10.1007/s10278-025-01466-x
Tianci Zhang, Rui Li, Zhaoming Zhong, Xuan Zhang, Tuo Liu, Guang-Quan Zhou, Faqin Lv
Recently, contrast-enhanced ultrasound (CEUS) has shown potential value in the diagnosis of liver trauma, the leading cause of death in blunt abdominal trauma. However, the inherent speckle noise and the complicated visual characteristics of blunt liver trauma in CEUS images make the diagnosis highly dependent on the expertise of radiologists, which is subjective and time-consuming. Moreover, intra- and inter-observer variance inevitably influences the accuracy of diagnosis using CEUS. In this study, we propose a Label-Noise-Resistant CNN-Transformer Hybrid Architecture (LNRHA) for CEUS liver trauma classification. First, a CNN-Transformer-based Self-Contextual Dual Transformer (SCDT) module, a shared feature encoder followed by dual-perspective Transformer-based modules, is developed to perceive the semantics of trauma lesions from neighbor-contextual and self-attention perspectives. Moreover, to mitigate the annotation noise due to intra- and inter-observer variance, we design a Confidence-Based Label Filter (CLF) module to distinguish potential label-noise data based on the ensemble of the SCDT. The uncertainty of the detected noisy data is gradually penalized using a newly designed loss function, making full use of all the data while avoiding overfitting to misleading information, thus improving the classification performance. Extensive experimental results on an in-house liver trauma CEUS dataset show that our network architecture achieves promising performance. Notably, the experimental results of our LNRHA method on label-noise data also outperform most state-of-the-art classification methods, suggesting its effectiveness in diagnosing liver trauma.
Citations: 0
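The confidence-based filtering idea above can be illustrated with a small sketch: flag samples whose ensemble-averaged probability for their annotated class is low. The threshold value and array shapes are illustrative assumptions, not details from the paper:

```python
import numpy as np

def flag_noisy_labels(ensemble_probs: np.ndarray, labels: np.ndarray,
                      threshold: float = 0.5) -> np.ndarray:
    """Flag samples whose ensemble-averaged probability for their annotated
    class falls below `threshold` as potential label noise.
    ensemble_probs: (n_samples, n_classes) averaged softmax outputs.
    The threshold is a hypothetical choice for illustration."""
    conf_in_label = ensemble_probs[np.arange(len(labels)), labels]
    return conf_in_label < threshold

probs = np.array([[0.9, 0.1],    # confident, agrees with label 0
                  [0.2, 0.8],    # confident in class 1, but annotated 0
                  [0.6, 0.4]])   # moderately confident in label 0
labels = np.array([0, 0, 0])
noisy = flag_noisy_labels(probs, labels)   # [False, True, False]
```

In the paper the flagged samples are then down-weighted by a dedicated loss rather than discarded, so all data still contributes to training.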
Subtraction of Temporally Sequential Digital Mammograms: Prediction and Localization of Near-Term Breast Cancer Occurrence.
Journal of imaging informatics in medicine Pub Date : 2025-03-07 DOI: 10.1007/s10278-025-01456-z
Kosmia Loizidou, Galateia Skouroumouni, Gabriella Savvidou, Anastasia Constantinidou, Eleni Orphanidou Vlachou, Anneza Yiallourou, Costas Pitris, Christos Nikolaou
The objective is to predict a possible near-term occurrence of a breast mass after two consecutive screening rounds with normal mammograms. For the purposes of this study, conducted between 2020 and 2024, three consecutive rounds of mammograms were collected from 75 women, 46 to 79 years old. Successive screenings had an average interval of ~2 years. In each case, two mammographic views of each breast were collected, resulting in a dataset with a total of 450 images (3 × 2 × 75). The most recent mammogram was considered the "future" screening round and provided the location of a biopsy-confirmed malignant mass, serving as the ground truth for the training. The two normal previous mammograms ("prior" and "current") were processed and a new subtracted image was created for the prediction. Region segmentation and post-processing were then applied, along with image feature extraction and selection. The selected features were incorporated into several classifiers, and by applying leave-one-patient-out and k-fold cross-validation per patient, the regions of interest were characterized as benign or possible future malignancy. Study participants included 75 women (mean age, 62.5 ± 7.2 years; median age, 62 years). Feature selection from benign and possible-future-malignancy areas revealed that 14 features provided the best classification. The most accurate classification performance was achieved using ensemble voting, with 98.8% accuracy, 93.6% sensitivity, 98.8% specificity, and 0.96 AUC. Given the success of this algorithm, its clinical application could enable earlier diagnosis and improve prognosis for patients identified as at risk.
Citations: 0
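The central subtraction step, a difference image between the two normal rounds, can be sketched as below. This assumes the mammograms are already spatially registered and intensity-normalized, and the 0.5 candidate threshold is a hypothetical value for illustration:

```python
import numpy as np

def subtract_rounds(prior: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Subtract the registered 'prior' mammogram from the 'current' one,
    keeping only positive change (newly appearing density), clipped to [0, 1].
    Assumes co-registered, normalized images in [0, 1]."""
    diff = current.astype(float) - prior.astype(float)
    return np.clip(diff, 0.0, 1.0)

prior = np.zeros((4, 4))
current = np.zeros((4, 4))
current[1:3, 1:3] = 0.7          # a newly appearing dense region
diff = subtract_rounds(prior, current)
candidate = diff > 0.5           # region proposed for feature extraction
```

Regions surviving the threshold would then feed the segmentation, feature-extraction, and classification stages described above.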
Application of TransUnet Deep Learning Model for Automatic Segmentation of Cervical Cancer in Small-Field T2WI Images.
Journal of imaging informatics in medicine Pub Date : 2025-03-04 DOI: 10.1007/s10278-025-01464-z
Zengqiang Shi, Feifei Zhang, Xiong Zhang, Ru Pan, Yabao Cheng, Huang Song, Qiwei Kang, Jianbo Guo, Xin Peng, Yulin Li
Effective segmentation of cervical cancer tissue from magnetic resonance (MR) images is crucial for automatic detection, staging, and treatment planning of cervical cancer. This study develops an innovative deep learning model to enhance the automatic segmentation of cervical cancer lesions. We obtained 4063 small-field sagittal, coronal, and oblique-axial T2WI images from 222 patients with pathologically confirmed cervical cancer. Using this dataset, we employed a convolutional neural network (CNN) along with TransUnet models for segmentation training and evaluation of cervical cancer tissues. In this approach, CNNs are leveraged to extract local information from MR images, whereas Transformers capture long-range dependencies related to shape and structural information, which are critical for precise segmentation. Furthermore, we developed three distinct segmentation models based on coronal, axial, and sagittal T2WI within a small field of view using multidirectional MRI techniques. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were used to assess the segmentation accuracy of the models. The average DSC and AHD values obtained using the TransUnet model were 0.7628 and 0.8687, respectively, surpassing the U-Net model by margins of 0.0033 and 0.3479, respectively. The proposed TransUnet segmentation model significantly enhances the accuracy of cervical cancer tissue delineation compared to alternative models, demonstrating superior overall segmentation efficacy. This methodology can improve clinical diagnostic efficiency as an automated image analysis tool tailored for cervical cancer diagnosis.
Citations: 0
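The Dice similarity coefficient used to score the models above is computed from binary masks as 2|A∩B| / (|A| + |B|). A minimal version:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A n B| / (|A| + |B|); 1.0 for perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 px
b = np.zeros((8, 8), dtype=bool); b[4:8, 4:8] = True   # 16 px, 4 px overlap
print(dice(a, b))   # 2*4 / (16+16) = 0.25
```

Unlike the Hausdorff distance, which measures worst-case boundary error, Dice rewards volumetric overlap, which is why the two metrics are usually reported together.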
A Novel Pipeline for Adrenal Gland Segmentation: Integration of a Hybrid Post-Processing Technique with Deep Learning.
Journal of imaging informatics in medicine Pub Date : 2025-03-04 DOI: 10.1007/s10278-025-01449-y
Michael Fayemiwo, Bryan Gardiner, Jim Harkin, Liam McDaid, Punit Prakash, Michael Dennedy
Accurate segmentation of adrenal glands from CT images is essential for enhancing computer-aided diagnosis and surgical planning. However, the small size, irregular shape, and proximity to surrounding tissues make this task highly challenging. This study introduces a novel pipeline that significantly improves the segmentation of left and right adrenal glands by integrating advanced pre-processing techniques and a robust post-processing framework. Utilising a 2D UNet architecture with various backbones (VGG16, ResNet34, InceptionV3), the pipeline leverages test-time augmentation (TTA) and targeted removal of unconnected regions to enhance accuracy and robustness. Our results demonstrate a substantial improvement, with a 38% increase in the Dice similarity coefficient for the left adrenal gland and an 11% increase for the right adrenal gland on the AMOS dataset, achieved by the InceptionV3 model. Additionally, the pipeline significantly reduces false positives, underscoring its potential for clinical applications and its superiority over existing methods. These advancements make our approach a crucial contribution to the field of medical image segmentation.
Citations: 0
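Test-time augmentation of the kind mentioned above typically runs the model on flipped copies of the input, undoes each flip on the prediction, and averages. A sketch, where the identity lambda stands in for the trained segmentation network:

```python
import numpy as np

def tta_predict(model, image: np.ndarray) -> np.ndarray:
    """Average segmentation predictions over horizontal and vertical flips,
    mapping each prediction back to the original orientation before averaging.
    `model` maps an HxW image to an HxW probability map."""
    preds = [
        model(image),
        np.fliplr(model(np.fliplr(image))),   # flip in, un-flip out
        np.flipud(model(np.flipud(image))),
    ]
    return np.mean(preds, axis=0)

img = np.random.rand(16, 16)
avg = tta_predict(lambda x: x, img)   # identity model: TTA returns the input
```

With a real network the three predictions differ slightly, and averaging them smooths out orientation-dependent errors; the paper pairs this with removal of unconnected regions as post-processing.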
Landscape of 2D Deep Learning Segmentation Networks Applied to CT Scan from Lung Cancer Patients: A Systematic Review.
Journal of imaging informatics in medicine Pub Date : 2025-03-04 DOI: 10.1007/s10278-025-01458-x
Somayeh Sadat Mehrnia, Zhino Safahi, Amin Mousavi, Fatemeh Panahandeh, Arezoo Farmani, Ren Yuan, Arman Rahmim, Mohammad R Salmanpour
Background: The increasing rates of lung cancer emphasize the need for early detection through computed tomography (CT) scans, enhanced by deep learning (DL) to improve diagnosis, treatment, and patient survival. This review examines current and prospective applications of 2D DL networks in lung cancer CT segmentation, summarizing research and highlighting essential concepts and gaps.
Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic search of peer-reviewed studies from 01/2020 to 12/2024 on data-driven population segmentation using structured data was conducted across databases including Google Scholar, PubMed, ScienceDirect, the IEEE (Institute of Electrical and Electronics Engineers) library, and the ACM (Association for Computing Machinery) library. 124 studies met the inclusion criteria and were analyzed.
Results: The LIDC-IDRI dataset was the most frequently used, and the findings rely primarily on supervised learning with labeled data. The UNet model and its variants were the most frequently used models in medical image segmentation, achieving Dice similarity coefficients (DSC) of up to 0.9999. The reviewed studies exhibit significant gaps in addressing class imbalance (67%), underuse of cross-validation (21%), and poor model stability evaluation (3%). Additionally, 88% failed to address missing data, and generalizability concerns were discussed in only 34% of cases.
Conclusions: The review emphasizes the importance of convolutional neural networks, particularly UNet, in lung CT analysis and advocates for a combined 2D/3D modeling approach. It also highlights the need for larger, diverse datasets and the exploration of semi-supervised and unsupervised learning to enhance automated lung cancer diagnosis and early detection.
Citations: 0
Spatial-Temporal Information Fusion for Thyroid Nodule Segmentation in Dynamic Contrast-Enhanced MRI: A Novel Approach.
Journal of imaging informatics in medicine Pub Date : 2025-03-04 DOI: 10.1007/s10278-025-01463-0
Binze Han, Qian Yang, Xuetong Tao, Meini Wu, Long Yang, Wenming Deng, Wei Cui, Dehong Luo, Qian Wan, Zhou Liu, Na Zhang
This study aims to develop a novel segmentation method that utilizes spatio-temporal information for segmenting two-dimensional thyroid nodules on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Leveraging medical morphology knowledge of the thyroid gland, we designed a semi-supervised segmentation model that first segments the thyroid gland, guiding the model to focus exclusively on the thyroid region. This approach reduces the complexity of nodule segmentation by filtering out irrelevant regions and artifacts. We then introduced a method to explicitly extract temporal information from DCE-MRI data and integrated it with spatial information. The fusion of spatial and temporal features enhances the model's robustness and accuracy, particularly in complex imaging scenarios. Experimental results demonstrate that the proposed method significantly improves segmentation performance across multiple state-of-the-art models. The Dice similarity coefficient (DSC) increased by 8.41%, 7.05%, 9.39%, 11.53%, 20.94%, 17.94%, and 15.65% for U-Net, U-Net++, SegNet, TransUnet, Swin-Unet, SSTrans-Net, and VM-Unet, respectively, and the segmentation accuracy of nodules of different sizes also improved significantly. These results highlight the effectiveness of our spatial-temporal approach in achieving accurate and reliable thyroid nodule segmentation, offering a promising framework for clinical applications and future research in medical image analysis.
Citations: 0
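One common, simple way to expose DCE-MRI temporal information to a 2D segmentation network, in the spirit of the fusion described above, is to stack the registered time frames of a slice as input channels. This is a generic sketch, not the paper's specific mechanism, and the frame count is an illustrative assumption:

```python
import numpy as np

def stack_temporal(frames: list) -> np.ndarray:
    """Stack T registered DCE-MRI frames of one slice into a (T, H, W)
    channel-first array, so a 2D network sees the enhancement dynamics
    as extra input channels. Assumes frames are co-registered."""
    return np.stack([np.asarray(f, dtype=float) for f in frames], axis=0)

frames = [np.random.rand(32, 32) for _ in range(6)]  # e.g. 6 contrast phases
x = stack_temporal(frames)   # shape (6, 32, 32)
```

The first convolution of the network then mixes the temporal channels, letting it learn enhancement-curve cues alongside spatial shape.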
Automated Tumor Segmentation in Breast-Conserving Surgery Using Deep Learning on Breast Tomosynthesis.
Journal of imaging informatics in medicine Pub Date : 2025-03-03 DOI: 10.1007/s10278-025-01457-y
Wen-Pei Wu, Yu-Wen Chen, Hwa-Koon Wu, Dar-Ren Chen, Yu-Len Huang
Breast cancer is one of the leading causes of cancer-related deaths among women worldwide, with approximately 2.3 million diagnoses and 685,000 deaths in 2020. Early-stage breast cancer is often managed through breast-conserving surgery (BCS) combined with radiation therapy, which aims to preserve the breast's appearance while reducing recurrence risks. This study aimed to enhance intraoperative tumor segmentation using digital breast tomosynthesis (DBT) during BCS. A deep learning model, specifically an improved U-Net architecture incorporating a convolutional block attention module (CBAM), was utilized to delineate tumor margins with high precision. The system was evaluated on 51 patient cases by comparing automated segmentation with manually delineated contours and pathological assessments. Results showed that the proposed method achieved promising accuracy, with Intersection over Union (IoU) and Dice coefficients of 0.866 and 0.928, respectively, demonstrating its potential to improve intraoperative margin assessment and surgical outcomes.
Citations: 0
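The IoU and Dice coefficients reported above are related one-to-one (Dice = 2·IoU / (1 + IoU)), so either can be derived from the other. A minimal check on binary masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union of two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def dice_from_iou(j):
    """Convert IoU to the equivalent Dice coefficient: D = 2J / (1 + J)."""
    return 2 * j / (1 + j)

a = np.zeros((8, 8), bool); a[0:4, :] = True   # 32 px
b = np.zeros((8, 8), bool); b[2:6, :] = True   # 32 px, 16 px overlap
j = iou(a, b)          # 16 / 48 = 1/3
d = dice_from_iou(j)   # (2/3) / (4/3) = 0.5
```

Note the paper's reported pair (IoU 0.866, Dice 0.928) is consistent with this identity, since 2(0.866)/1.866 ≈ 0.928.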