Journal of imaging informatics in medicine: Latest Articles

A Robust Deep Learning Method with Uncertainty Estimation for the Pathological Classification of Renal Cell Carcinoma Based on CT Images.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-09-23 DOI: 10.1007/s10278-024-01276-7
Ni Yao, Hang Hu, Kaicong Chen, Huan Huang, Chen Zhao, Yuan Guo, Boya Li, Jiaofen Nan, Yanting Li, Chuang Han, Fubao Zhu, Weihua Zhou, Li Tian
Abstract: This study developed and validated a deep learning-based diagnostic model with uncertainty estimation to aid radiologists in the preoperative differentiation of pathological subtypes of renal cell carcinoma (RCC) based on computed tomography (CT) images. Data from 668 consecutive patients with pathologically confirmed RCC were retrospectively collected from Center 1, and the model was trained using fivefold cross-validation to classify RCC subtypes into clear cell RCC (ccRCC), papillary RCC (pRCC), and chromophobe RCC (chRCC). An external validation with 78 patients from Center 2 was conducted to evaluate the performance of the model. In the fivefold cross-validation, the area under the receiver operating characteristic curve (AUC) for the classification of ccRCC, pRCC, and chRCC was 0.868 (95% CI, 0.826-0.923), 0.846 (95% CI, 0.812-0.886), and 0.839 (95% CI, 0.802-0.88), respectively. In the external validation set, the AUCs were 0.856 (95% CI, 0.838-0.882), 0.787 (95% CI, 0.757-0.818), and 0.793 (95% CI, 0.758-0.831) for ccRCC, pRCC, and chRCC, respectively. The model demonstrated robust performance in predicting the pathological subtypes of RCC, while the incorporated uncertainty emphasized the importance of understanding model confidence. The proposed approach, integrated with uncertainty estimation, offers clinicians a dual advantage: accurate RCC subtype predictions complemented by diagnostic confidence metrics, thereby promoting informed decision-making for patients with RCC.
Pages: 1323-1333. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092889/pdf/
Citations: 0
MR Image Fusion-Based Parotid Gland Tumor Detection.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-09-26 DOI: 10.1007/s10278-024-01137-3
Kubilay Muhammed Sunnetci, Esat Kaba, Fatma Beyazal Celiker, Ahmet Alkan
Abstract: The differentiation of benign and malignant parotid gland tumors is of major significance as it directly affects the treatment process. It is also a vital task for the early and accurate diagnosis of parotid gland tumors and the corresponding treatment planning. As with other diseases, the differentiation of tumor types involves several challenging, time-consuming, and laborious processes. In this study, Magnetic Resonance (MR) images of 114 patients with parotid gland tumors are used for training and testing via Image Fusion (IF). After the Apparent Diffusion Coefficient (ADC), Contrast-enhanced T1-w (T1C-w), and T2-w sequences are cropped, IF (ADC, T1C-w), IF (ADC, T2-w), IF (T1C-w, T2-w), and IF (ADC, T1C-w, T2-w) datasets are obtained for different combinations of these sequences using a two-dimensional Discrete Wavelet Transform (DWT)-based fusion technique. For each of these four datasets, ResNet18, GoogLeNet, and DenseNet-201 architectures are trained separately, yielding 12 models. A Graphical User Interface (GUI) application containing the most successful trained architecture for each dataset is also designed to support users. The GUI application not only fuses different sequence images but also predicts whether the fused image is benign or malignant. The results show that the DenseNet-201 models for IF (ADC, T1C-w), IF (ADC, T2-w), and IF (ADC, T1C-w, T2-w) outperform the others, with accuracies of 95.45%, 95.96%, and 92.93%, respectively. The most successful model for IF (T1C-w, T2-w) is ResNet18, with an accuracy of 94.95%.
Pages: 1846-1859. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092857/pdf/
Citations: 0
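The entry above fuses co-registered MR sequences with a two-dimensional DWT before classification. As an illustration only (the paper does not specify its wavelet, decomposition depth, or fusion rule), here is a minimal NumPy sketch of single-level Haar DWT fusion that averages the approximation subband and keeps the larger-magnitude detail coefficients; `haar_dwt2`, `haar_idwt2`, and `fuse_dwt` are hypothetical helper names, and the inputs are assumed to be co-registered grayscale arrays with even dimensions.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row lowpass
    d = (img[0::2] - img[1::2]) / 2.0   # row highpass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse_dwt(img1, img2):
    """Fuse two co-registered images: average the approximation (LL)
    subband, take the max-magnitude detail coefficient elementwise."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return haar_idwt2(*fused)
```

The averaging/max-magnitude rule is a common default in DWT fusion, not necessarily the authors' choice; fusing an image with itself reconstructs the image exactly, which is a convenient sanity check.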
Web-Based DICOM Viewers: A Survey and a Performance Classification.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-09-30 DOI: 10.1007/s10278-024-01216-5
Hugo Pereira, Luis Romero, Pedro Miguel Faria
Abstract: The standard for managing image data in healthcare is the DICOM (Digital Imaging and Communications in Medicine) protocol. DICOM web viewers provide flexible and accessible platforms for users to view and analyze DICOM images remotely. This article presents a comprehensive evaluation of various web-based DICOM viewers, emphasizing their performance in different rendering scenarios, browsers, and operating systems. The study includes a total of 16 web-based viewers, of which 12 were surveyed and 7 were compared performance-wise based on the availability of an online demo. The criteria for examination include accessibility features, such as available information or requirements for usage; interface features, such as loading capabilities or cloud storage; two-dimensional (2D) viewing features, such as the ability to perform measurements or alter the viewing window; and three-dimensional (3D) viewing features, such as volume rendering or secondary reconstruction. Only 4 of the viewers allow the viewing of local DICOM files in 3D beyond MPR (multiplanar reconstruction). Premium software offers a large number of features with overall good performance. One of the free alternatives demonstrated the best efficiency in both 2D and 3D rendering but lacks some 3D rendering features in its interface, which is still in development. Other free options exhibited slower performance, especially in 2D rendering, but offer more ready-to-use features in their web apps. The evaluation also underscores the importance of browser choice, with some browsers performing much better than the competition, and highlights the significance of hardware in rendering tasks.
Pages: 1304-1322. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092310/pdf/
Citations: 0
Automated Neural Architecture Search for Cardiac Amyloidosis Classification from [18F]-Florbetaben PET Images.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-10-02 DOI: 10.1007/s10278-024-01275-8
Filippo Bargagna, Donato Zigrino, Lisa Anita De Santi, Dario Genovesi, Michele Scipioni, Brunella Favilli, Giuseppe Vergaro, Michele Emdin, Assuero Giorgetti, Vincenzo Positano, Maria Filomena Santarelli
Abstract: Medical image classification using convolutional neural networks (CNNs) is promising but often requires extensive manual tuning for optimal model definition. Neural architecture search (NAS) automates this process, significantly reducing human intervention. This study applies NAS to [18F]-Florbetaben PET cardiac images for classifying cardiac amyloidosis (CA) subtypes (amyloid light chain (AL) and transthyretin amyloid (ATTR)) and controls. Following data preprocessing and augmentation, an evolutionary cell-based NAS approach with a fixed network macro-structure is employed, automatically deriving the cells' micro-structure. The algorithm is executed five times, evaluating 100 mutating architectures per run on an augmented dataset of 4048 images (originally 597), for a total of 5000 architectures evaluated. The best network (NAS-Net) achieves 76.95% overall accuracy. K-fold analysis yields mean ± SD percentages of sensitivity, specificity, and accuracy on the test dataset: AL subjects (98.7 ± 2.9, 99.3 ± 1.1, 99.7 ± 0.7), ATTR-CA subjects (93.3 ± 7.8, 78.0 ± 2.9, 70.9 ± 3.7), and controls (35.8 ± 14.6, 77.1 ± 2.0, 96.7 ± 4.4). The performance of the NAS-derived network rivals manually designed networks in the literature while using fewer parameters, validating the efficacy of the automated approach.
Pages: 1452-1466. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092313/pdf/
Citations: 0
Dual Energy CT for Deep Learning-Based Segmentation and Volumetric Estimation of Early Ischemic Infarcts.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-10-09 DOI: 10.1007/s10278-024-01294-5
Peter Kamel, Mazhar Khalid, Rachel Steger, Adway Kanhere, Pranav Kulkarni, Vishwa Parekh, Paul H Yi, Dheeraj Gandhi, Uttam Bodanapally
Abstract: Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in the detection of subtle imaging findings. This study aims to assess whether dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired up to 48 h prior to confirmation of a DWI-positive infarct on MRI between 2016 and 2022. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed with Dice scores and paired t-tests on a test set. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for standard 120 kV images, 190 keV images, and combined-channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined-channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last known well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementation of standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last known well.
Pages: 1484-1495. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092874/pdf/
Citations: 0
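The segmentation study above scores overlap with the Dice coefficient. A minimal sketch of how a Dice score can be computed from binary masks (a generic illustration, not the authors' evaluation code; `dice_score` is a hypothetical helper name):

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks:
    2*|P ∩ T| / (|P| + |T|), with eps guarding the empty-mask case."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```

With per-case Dice scores from two model variants on the same test cases, a paired comparison such as the study's paired t-test can then be run on the two score vectors.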
Using Machine Learning on MRI Radiomics to Diagnose Parotid Tumours Before Comparing Performance with Radiologists: A Pilot Study.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-10-10 DOI: 10.1007/s10278-024-01255-y
Samy Ammari, Arnaud Quillent, Víctor Elvira, François Bidault, Gabriel C T E Garcia, Dana M Hartl, Corinne Balleyguier, Nathalie Lassau, Émilie Chouzenoux
Abstract: The parotid glands are the largest of the major salivary glands and can harbour both benign and malignant tumours. Preoperative work-up relies on MR images and fine-needle aspiration biopsy, but these diagnostic tools have low sensitivity and specificity, often leading to surgery for diagnostic purposes. The aims of this paper are (1) to develop a machine learning algorithm based on MR image characteristics to automatically classify parotid gland tumours and (2) to compare its results with the diagnoses of junior and senior radiologists in order to evaluate its utility in routine practice. While automatic algorithms for parotid tumour classification have been developed in the past, we believe this study is one of the first to leverage four different MRI sequences and propose a comparison with clinicians. We use data from a cohort of 134 patients treated for benign or malignant parotid tumours. Using radiomics extracted from MR images of the gland, we train a random forest and a logistic regression to predict the corresponding histopathological subtypes. On the test set, the best results are given by the random forest: 0.720 accuracy, 0.860 specificity, and 0.720 sensitivity over all histopathological subtypes, with an average AUC of 0.838. When considering the discrimination between benign and malignant tumours, the algorithm achieves 0.760 accuracy and a 0.769 AUC, both on the test set. Moreover, the clinical experiment shows that our model helps improve the diagnostic abilities of junior radiologists, whose sensitivity and accuracy rose by 6% when using the proposed method. This algorithm may be useful for the training of physicians. Radiomics with a machine learning algorithm may help improve discrimination between benign and malignant parotid tumours, decreasing the need for diagnostic surgery. Further studies are warranted to validate our algorithm for routine use.
Pages: 1496-1508. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092869/pdf/
Citations: 0
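The workflow in the entry above (radiomic features fed to a random forest and a logistic regression, evaluated by accuracy and AUC) can be sketched with scikit-learn. This is an illustration on fabricated synthetic data, not the authors' model or features; the feature matrix, labels, and hyperparameters are stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Fabricated stand-in for a radiomic feature matrix:
# 134 "patients" x 50 features, binary benign (0) / malignant (1) labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(134, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=134) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("logistic regression", LogisticRegression(max_iter=1000)),
]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]  # probability of the malignant class
    print(name,
          "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3),
          "AUC:", round(roc_auc_score(y_te, proba), 3))
```

A stratified split keeps the benign/malignant ratio comparable across train and test sets, which matters with small cohorts like the 134 patients reported here.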
Applying Deep-Learning Algorithm Interpreting Kidney, Ureter, and Bladder (KUB) X-Rays to Detect Colon Cancer.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-10-31 DOI: 10.1007/s10278-024-01309-1
Ling Lee, Chin Lin, Chia-Jung Hsu, Heng-Hsiu Lin, Tzu-Chiao Lin, Yu-Hong Liu, Je-Ming Hu
Abstract: Early screening is crucial in reducing the mortality of colorectal cancer (CRC). Current screening methods, including fecal occult blood tests (FOBT) and colonoscopy, are primarily limited by low patient compliance and the invasive nature of the procedures. Several advanced imaging techniques, such as computed tomography (CT) and histological imaging, have been integrated with artificial intelligence (AI) to enhance the detection of CRC, but limitations remain because of the challenges associated with image acquisition and cost. The kidney, ureter, and bladder (KUB) radiograph, which is inexpensive and widely used for abdominal assessments in emergency settings, shows potential for detecting CRC when enhanced with advanced techniques. This study aimed to develop a deep learning model (DLM) to detect CRC using KUB radiographs. This retrospective study was conducted using data from the Tri-Service General Hospital (TSGH) between January 2011 and December 2020, including patients with at least one KUB radiograph. Patients were divided into development (n = 28,055), tuning (n = 11,234), and internal validation (n = 16,875) sets. An additional 15,876 patients were collected from a community hospital as the external validation set. A 121-layer DenseNet convolutional network was trained to classify KUB images for CRC detection. Model performance was evaluated using receiver operating characteristic curves, with sensitivity, specificity, and area under the curve (AUC) as metrics. The AUC, sensitivity, and specificity of the DLM were 0.738, 61.3%, and 74.4% in the internal validation set, and 0.656, 47.7%, and 72.9% in the external validation set, respectively. The model performed better for high-grade CRC, with AUCs of 0.744 and 0.674 in the internal and external sets, respectively. Stratified analysis showed superior performance in females aged 55-64 with high-grade cancers. AI-positive predictions were associated with a higher long-term risk of all-cause mortality in both validation cohorts. AI-enhanced KUB X-ray analysis can enhance CRC screening coverage and effectiveness, providing a cost-effective alternative to traditional methods. Further prospective studies are necessary to validate these findings and fully integrate this technology into clinical practice.
Pages: 1606-1616. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092315/pdf/
Citations: 0
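The screening model above is reported with AUC together with sensitivity and specificity at an operating threshold. A small sketch of computing sensitivity and specificity from thresholded probability outputs (a generic illustration; `sens_spec` is a hypothetical helper, not the study's code, and both classes are assumed present in `y_true`):

```python
import numpy as np

def sens_spec(y_true, y_prob, threshold=0.5):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) after thresholding probability outputs at `threshold`."""
    y_true = np.asarray(y_true).astype(int)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `threshold` over the model's output range traces the ROC curve whose area is the reported AUC; the single sensitivity/specificity pair corresponds to one chosen operating point on that curve.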
A Machine Learning Model Based on Global Mammographic Radiomic Features Can Predict Which Normal Mammographic Cases Radiology Trainees Find Most Difficult.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-10-15 DOI: 10.1007/s10278-024-01291-8
Somphone Siviengphanom, Patrick C Brennan, Sarah J Lewis, Phuong Dung Trieu, Ziba Gandomkar
Abstract: This study investigates whether global mammographic radiomic features (GMRFs) can distinguish the hardest- from the easiest-to-interpret normal cases for radiology trainees (RTs). Data from 137 RTs were analysed, with each interpreting seven educational self-assessment test sets comprising 60 cases (40 normal and 20 cancer). The study examined only the normal cases. Difficulty scores were computed from the percentage of readers who incorrectly classified each case; cases were classified as hardest- or easiest-to-interpret if their difficulty scores fell at or above the 75th percentile or at or below the 25th percentile, respectively (140 cases in total). Fifty-nine low-density and 81 high-density cases were identified. Thirty-four GMRFs were extracted for each case. A random forest machine learning model was trained to differentiate between hardest- and easiest-to-interpret normal cases and validated using a leave-one-out cross-validation approach. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), and significant features were identified through feature importance analysis. Differences between hardest- and easiest-to-interpret cases across the 34 GMRFs, and in difficulty level between low- and high-density cases, were tested with the Kruskal-Wallis test. The model achieved AUC = 0.75, with cluster prominence and range emerging as the most useful features. Fifteen GMRFs differed significantly (p < 0.05) between hardest- and easiest-to-interpret cases. Difficulty level did not differ significantly between low- and high-density cases (p = 0.12). GMRFs can predict the hardest-to-interpret normal cases for RTs, underscoring their value in identifying the most difficult normal cases and facilitating customised training programmes tailored to trainees' learning needs.
Pages: 1904-1913. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092920/pdf/
Citations: 0
BSNEU-net: Block Feature Map Distortion and Switchable Normalization-Based Enhanced Union-net for Acute Leukemia Detection on Heterogeneous Dataset.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-09-25 DOI: 10.1007/s10278-024-01252-1
Rabul Saikia, Roopam Deka, Anupam Sarma, Salam Shuleenda Devi
Abstract: Acute leukemia is characterized by the swift proliferation of immature white blood cells (WBC) in the blood and bone marrow. It is categorized into acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), depending on whether the cell line of origin is lymphoid or myeloid, respectively. Deep learning (DL) and artificial intelligence (AI) are revolutionizing medical sciences by assisting clinicians with rapid illness identification, reducing workload, and enhancing diagnostic accuracy. This paper proposes a novel DL-based BSNEU-net framework to detect acute leukemia. It comprises 4 Union Blocks (UB) and incorporates block feature map distortion (BFMD) with switchable normalization (SN) in each UB. The UB employs union convolution to extract more discriminant features. BFMD is adopted to acquire more generalized patterns and minimize overfitting, whereas SN layers are appended to improve the model's convergence and generalization capabilities. The uniform use of batch normalization across convolution layers is sensitive to changes in mini-batch size, which is effectively remedied by incorporating an SN layer. Here, a new dataset comprising 2400 blood smear images of ALL, AML, and healthy cases is proposed, as DL methodologies require a sizeable, well-annotated dataset to combat overfitting. Further, a heterogeneous dataset comprising 2700 smear images is created by combining four publicly accessible benchmark datasets of ALL, AML, and healthy cases. The BSNEU-net model achieved excellent performance, with 99.37% accuracy on the novel dataset and 99.44% accuracy on the heterogeneous dataset. A comparative analysis demonstrates the superiority of the proposed methodology over competing schemes.
Pages: 1334-1361. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092858/pdf/
Citations: 0
Deep Learning Approaches for Brain Tumor Detection and Classification Using MRI Images (2020 to 2024): A Systematic Review.
Journal of imaging informatics in medicine Pub Date : 2025-06-01 Epub Date: 2024-09-30 DOI: 10.1007/s10278-024-01283-8
Sara Bouhafra, Hassan El Bahi
Abstract: A brain tumor is a disease caused by uncontrolled cell proliferation in the brain, leading to serious health issues such as memory loss and motor impairment. Early diagnosis of brain tumors therefore plays a crucial role in extending patient survival. However, given the busy workload of radiologists and the aim of reducing the likelihood of false diagnoses, advancing technologies, including computer-aided diagnosis and artificial intelligence, have shown an important role in assisting radiologists. In recent years, a number of deep learning-based methods have been applied to brain tumor detection and classification using MRI images, achieving promising results. The main objective of this paper is to present a detailed review of previous research in this field; in addition, this work summarizes the existing limitations and significant highlights. The study systematically reviews 60 research articles published between 2020 and January 2024, extensively covering methods such as transfer learning, autoencoders, transformers, and attention mechanisms. The key findings formulated in this paper provide an analytic comparison and future directions. The review aims to provide a comprehensive understanding of automatic techniques that may be useful for professionals and academic communities working on brain tumor classification and detection.
Pages: 1403-1433. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092918/pdf/
Citations: 0