Journal of imaging informatics in medicine: Latest Articles

Foundational Segmentation Models and Clinical Data Mining Enable Accurate Computer Vision for Lung Cancer.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-10-22 DOI: 10.1007/s10278-024-01304-6
Nathaniel C Swinburne, Christopher B Jackson, Andrew M Pagano, Joseph N Stember, Javin Schefflein, Brett Marinelli, Prashanth Kumar Panyam, Arthur Autz, Mohapar S Chopra, Andrei I Holodny, Michelle S Ginsberg
{"title":"Foundational Segmentation Models and Clinical Data Mining Enable Accurate Computer Vision for Lung Cancer.","authors":"Nathaniel C Swinburne, Christopher B Jackson, Andrew M Pagano, Joseph N Stember, Javin Schefflein, Brett Marinelli, Prashanth Kumar Panyam, Arthur Autz, Mohapar S Chopra, Andrei I Holodny, Michelle S Ginsberg","doi":"10.1007/s10278-024-01304-6","DOIUrl":"10.1007/s10278-024-01304-6","url":null,"abstract":"<p><p>This study aims to assess the effectiveness of integrating Segment Anything Model (SAM) and its variant MedSAM into the automated mining, object detection, and segmentation (MODS) methodology for developing robust lung cancer detection and segmentation models without post hoc labeling of training images. In a retrospective analysis, 10,000 chest computed tomography scans from patients with lung cancer were mined. Line measurement annotations were converted to bounding boxes, excluding boxes < 1 cm or > 7 cm. The You Only Look Once object detection architecture was used for teacher-student learning to label unannotated lesions on the training images. Subsequently, a final tumor detection model was trained and employed with SAM and MedSAM for tumor segmentation. Model performance was assessed on a manually annotated test dataset, with additional evaluations conducted on an external lung cancer dataset before and after detection model fine-tuning. Bootstrap resampling was used to calculate 95% confidence intervals. Data mining yielded 10,789 line annotations, resulting in 5403 training boxes. The baseline detection model achieved an internal F1 score of 0.847, improving to 0.860 after self-labeling. Tumor segmentation using the final detection model attained internal Dice similarity coefficients (DSCs) of 0.842 (SAM) and 0.822 (MedSAM). After fine-tuning, external validation showed an F1 of 0.832 and DSCs of 0.802 (SAM) and 0.804 (MedSAM). Integrating foundational segmentation models into the MODS framework results in high-performing lung cancer detection and segmentation models using only mined clinical data. Both SAM and MedSAM hold promise as foundational segmentation models for radiology images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1552-1562"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142515945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
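To make the mining step concrete, below is a minimal illustrative sketch of how a line (diameter) measurement could be converted into a square bounding box and filtered by the 1-7 cm criterion described in the abstract. The function names, the square-box assumption, and the pixel-spacing handling are ours, not the authors'.

```python
# Hypothetical sketch: convert a line measurement (pixel endpoints) into a
# square bounding box centered on the line's midpoint, then apply the
# study's size exclusion (drop boxes < 1 cm or > 7 cm).
import math

def line_to_box(x1, y1, x2, y2, pixel_spacing_mm):
    """Return a square box (pixels) around the line, plus its length in cm."""
    length_px = math.hypot(x2 - x1, y2 - y1)
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = length_px / 2.0
    box = (cx - half, cy - half, cx + half, cy + half)
    length_cm = length_px * pixel_spacing_mm / 10.0
    return box, length_cm

def keep_box(length_cm, min_cm=1.0, max_cm=7.0):
    """Size exclusion criterion from the abstract."""
    return min_cm <= length_cm <= max_cm

box, length_cm = line_to_box(100, 120, 140, 150, pixel_spacing_mm=0.7)
print(box, length_cm, keep_box(length_cm))
```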
Ultra-High-Resolution Photon-Counting Detector CT Benefits Visualization of Abdominal Arteries: A Comparison to Standard-Reconstruction.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-10-25 DOI: 10.1007/s10278-024-01232-5
Huan Zhang, Yue Xing, Lingyun Wang, Yangfan Hu, Zhihan Xu, Haoda Chen, Junjie Lu, Jiarui Yang, Bei Ding, Weiguo Hu, Jingyu Zhong
{"title":"Ultra-High-Resolution Photon-Counting Detector CT Benefits Visualization of Abdominal Arteries: A Comparison to Standard-Reconstruction.","authors":"Huan Zhang, Yue Xing, Lingyun Wang, Yangfan Hu, Zhihan Xu, Haoda Chen, Junjie Lu, Jiarui Yang, Bei Ding, Weiguo Hu, Jingyu Zhong","doi":"10.1007/s10278-024-01232-5","DOIUrl":"10.1007/s10278-024-01232-5","url":null,"abstract":"<p><p>This study aimed to investigate the potential benefit of ultra-high-resolution (UHR) photon-counting detector CT (PCD-CT) angiography in visualization of abdominal arteries in comparison to standard-reconstruction (SR) images of virtual monoenergetic images (VMI) at low kiloelectron volt (keV). We prospectively included 47 and 47 participants to undergo contrast-enhanced abdominal CT scans within UHR mode on a PCD-CT system using full-dose (FD) and low-dose (LD) protocols, respectively. The data were reconstructed into six series of images: FD_UHR_Br48, FD_UHR_Bv56, FD_UHR_Bv60, FD_SR_Bv40, LD_UHR_Bv48, and LD_SR_Bv40. The UHR reconstructions were performed with three kernels (Bv48, Bv56, and Bv60) within 0.2 mm. The SR were virtual monoenergetic imaging reconstruction with Bv40 kernel at 40-keV within 1 mm. Each series of axial images were reconstructed into coronal and volume-rendered images. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of seven arteries were measured. Three radiologists assessed the image quality, and visibility of nine arteries on all the images. SNR and CNR values of SR images were significantly higher than those of UHR images (P < 0.001). The SR images have higher ratings in image noise (P < 0.001), but the FD_UHR_Bv56 and FD_UHR_Bv60 images has higher rating in vessel sharpness (P < 0.001). The overall quality was not significantly different among FD_VMI_40keV, LD_VMI_40keV, FD_UHR_Bv48, and LD_UHR_Bv48 images (P > 0.05) but higher than those of FD_UHR_Bv56 and FD_UHR_Bv60 images (P < 0.001). There is no significant difference of nine abdominal arteries among six series of images of axial, coronal and volume-rendered images (P > 0.05). To conclude, 1-mm SR image of VMI at 40-keV is superior to 0.2-mm UHR regardless of which kernel is used to visualize abdominal arteries, while 0.2-mm UHR image using a relatively smooth kernel may allow similar image quality and artery visibility when thinner slice image is warranted.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1891-1903"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092866/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142515948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
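For reference, one common convention for the SNR and CNR measurements named in the abstract is sketched below; the exact ROI definitions and formulas used in the paper are not given in the abstract, so treat this as a hedged illustration rather than the authors' method.

```python
# Sketch of standard ROI-based SNR/CNR definitions for CT angiography:
# SNR = mean attenuation of the vessel ROI over its own noise (SD);
# CNR = vessel-background contrast over background noise.
import numpy as np

def snr(roi_vessel: np.ndarray) -> float:
    return float(roi_vessel.mean() / roi_vessel.std())

def cnr(roi_vessel: np.ndarray, roi_background: np.ndarray) -> float:
    return float((roi_vessel.mean() - roi_background.mean()) / roi_background.std())

vessel = np.random.normal(350, 20, size=200)      # synthetic HU samples
background = np.random.normal(50, 15, size=200)
print(f"SNR = {snr(vessel):.1f}, CNR = {cnr(vessel, background):.1f}")
```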
SDS-Net: A Synchronized Dual-Stage Network for Predicting Patients Within 4.5-h Thrombolytic Treatment Window Using MRI.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-10-28 DOI: 10.1007/s10278-024-01308-2
Xiaoyu Zhang, Ying Luan, Ying Cui, Yi Zhang, Chunqiang Lu, Yujie Zhou, Ying Zhang, Huiming Li, Shenghong Ju, Tianyu Tang
{"title":"SDS-Net: A Synchronized Dual-Stage Network for Predicting Patients Within 4.5-h Thrombolytic Treatment Window Using MRI.","authors":"Xiaoyu Zhang, Ying Luan, Ying Cui, Yi Zhang, Chunqiang Lu, Yujie Zhou, Ying Zhang, Huiming Li, Shenghong Ju, Tianyu Tang","doi":"10.1007/s10278-024-01308-2","DOIUrl":"10.1007/s10278-024-01308-2","url":null,"abstract":"<p><p>Timely and precise identification of acute ischemic stroke (AIS) within 4.5 h is imperative for effective treatment decision-making. This study aims to construct a novel network that utilizes limited datasets to recognize AIS patients within this critical window. We conducted a retrospective analysis of 265 AIS patients who underwent both fluid attenuation inversion recovery (FLAIR) and diffusion-weighted imaging (DWI) within 24 h of symptom onset. Patients were categorized based on the time since stroke onset (TSS) into two groups: TSS ≤ 4.5 h and TSS > 4.5 h. The TSS was calculated as the time from stroke onset to MRI completion. We proposed a synchronized dual-stage network (SDS-Net) and a sequential dual-stage network (Dual-stage Net), which were comprised of infarct voxel identification and TSS classification stages. The models were trained on 181 patients and validated on an independent external cohort of 84 patients using metrics of area under the curve (AUC), sensitivity, specificity, and accuracy. A DeLong test was used to statistically compare the performance of the two models. SDS-Net achieved an accuracy of 0.844 with an AUC of 0.914 in the validation dataset, outperforming the Dual-stage Net, which had an accuracy of 0.822 and an AUC of 0.846. In the external test dataset, SDS-Net further demonstrated superior performance with an accuracy of 0.800 and an AUC of 0.879, compared to the accuracy of 0.694 and AUC of 0.744 of Dual-stage Net (P = 0.049). SDS-Net is a robust and reliable tool for identifying AIS patients within a 4.5-h treatment window using MRI. This model can assist clinicians in making timely treatment decisions, potentially improving patient outcomes.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1681-1689"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092910/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
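The evaluation metrics reported above (AUC, sensitivity, specificity, accuracy) can be computed from binary labels and model scores as in the following illustrative sketch; the data, the 0.5 threshold, and the label coding (1 = TSS ≤ 4.5 h) are our assumptions.

```python
# Illustrative computation of AUC, sensitivity, specificity, and accuracy
# for a binary TSS classifier; all numbers here are made up.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # 1 = TSS <= 4.5 h (assumed)
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", roc_auc_score(y_true, y_score))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy   :", (tp + tn) / (tp + tn + fp + fn))
```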
A Self-Supervised Equivariant Refinement Classification Network for Diabetic Retinopathy Classification.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-09-19 DOI: 10.1007/s10278-024-01270-z
Jiacheng Fan, Tiejun Yang, Heng Wang, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao
{"title":"A Self-Supervised Equivariant Refinement Classification Network for Diabetic Retinopathy Classification.","authors":"Jiacheng Fan, Tiejun Yang, Heng Wang, Huiyao Zhang, Wenjie Zhang, Mingzhu Ji, Jianyu Miao","doi":"10.1007/s10278-024-01270-z","DOIUrl":"10.1007/s10278-024-01270-z","url":null,"abstract":"<p><p>Diabetic retinopathy (DR) is a retinal disease caused by diabetes. If there is no intervention, it may even lead to blindness. Therefore, the detection of diabetic retinopathy is of great significance for preventing blindness in patients. Most of the existing DR detection methods use supervised methods, which usually require a large number of accurate pixel-level annotations. To solve this problem, we propose a self-supervised Equivariant Refinement Classification Network (ERCN) for DR classification. First, we use an unsupervised contrast pre-training network to learn a more generalized representation. Secondly, the class activation map (CAM) is refined by self-supervision learning. It first uses a spatial masking method to suppress low-confidence predictions, and then uses the feature similarity between pixels to encourage fine-grained activation to achieve more accurate positioning of the lesion. We propose a hybrid equivariant regularization loss to alleviate the degradation caused by the local minimum in the CAM refinement process. To further improve the classification accuracy, we propose an attention-based multi-instance learning (MIL), which weights each element of the feature map as an instance, which is more effective than the traditional patch-based instance extraction method. We evaluate our method on the EyePACS and DAVIS datasets and achieved 87.4% test accuracy in the EyePACS dataset and 88.7% test accuracy in the DAVIS dataset. It shows that the proposed method achieves better performance in DR detection compared with other state-of-the-art methods in self-supervised DR detection.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1796-1811"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092899/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142305696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
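As a rough illustration of the attention-based MIL idea the abstract describes (weighting each feature-map element as an instance), here is a minimal PyTorch pooling layer; the dimensions and the scoring network are assumptions, not the ERCN architecture.

```python
# Minimal attention-based MIL pooling: each spatial position of a CNN
# feature map is treated as an instance, scored, softmax-weighted, and
# aggregated into a single bag embedding.
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> instances: (B, H*W, C)
        b, c, h, w = feats.shape
        inst = feats.flatten(2).transpose(1, 2)
        attn = torch.softmax(self.score(inst), dim=1)   # (B, H*W, 1)
        return (attn * inst).sum(dim=1)                 # (B, C) bag embedding

pooled = AttentionMILPool(dim=256)(torch.randn(2, 256, 7, 7))
print(pooled.shape)  # torch.Size([2, 256])
```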
Empowering Women in Imaging Informatics: Confronting Imposter Syndrome, Addressing Microaggressions, and Striving for Work-Life Harmony.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-10-09 DOI: 10.1007/s10278-024-01285-6
Mana Moassefi, Nikki Fennell, Mindy Yang, Jennifer B Gunter, Teri M Sippel Schmit, Tessa S Cook
{"title":"Empowering Women in Imaging Informatics: Confronting Imposter Syndrome, Addressing Microaggressions, and Striving for Work-Life Harmony.","authors":"Mana Moassefi, Nikki Fennell, Mindy Yang, Jennifer B Gunter, Teri M Sippel Schmit, Tessa S Cook","doi":"10.1007/s10278-024-01285-6","DOIUrl":"10.1007/s10278-024-01285-6","url":null,"abstract":"<p><p>For the past 6 years, the Society for Imaging Informatics in Medicine (SIIM) annual meeting has provided a forum for women in imaging informatics to discuss the unique challenges they face. These sessions have evolved into a platform for understanding, sharing experiences, and developing practical strategies. The 2023 session was organized into three focus groups devoted to discussing imposter syndrome, workplace microaggressions, and work-life balance. This paper summarizes these discussions and highlights the significant themes and narratives that emerged. We aim to contribute to the larger conversation on gender equity in the informatics field, emphasizing the importance of understanding and addressing the challenges faced by women in informatics. By documenting these sessions, we seek to inspire actionable change towards a more inclusive and equitable future for everyone in imaging informatics.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1291-1296"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092307/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142396610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-09-19 DOI: 10.1007/s10278-024-01273-w
Devina Chatterjee, Adway Kanhere, Florence X Doo, Jerry Zhao, Andrew Chan, Alexander Welsh, Pranav Kulkarni, Annie Trang, Vishwa S Parekh, Paul H Yi
{"title":"Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.","authors":"Devina Chatterjee, Adway Kanhere, Florence X Doo, Jerry Zhao, Andrew Chan, Alexander Welsh, Pranav Kulkarni, Annie Trang, Vishwa S Parekh, Paul H Yi","doi":"10.1007/s10278-024-01273-w","DOIUrl":"10.1007/s10278-024-01273-w","url":null,"abstract":"<p><p>Deep learning (DL) tools developed on adult data sets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric data set (n = 359). Generalizability was quantified by comparing Dice scores between adult and pediatric external data sets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) 3D nnU-Net model trained on only pediatric data, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001) demonstrating limited generalizability to pediatric CT scans. Stratified by organ, there was lower mean pediatric Dice score for four organs (P < .001, all): right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]); duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]); and pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved by developing pediatric-specific models and fine-tuning an adult-trained model on pediatric images where both methods significantly improved segmentation accuracy over TotalSegmentator for all organs, especially for smaller anatomical structures (e.g., > 0.2 higher mean Dice for adrenal glands; P < .001).</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1628-1641"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142305610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
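The study's core quantities, per-scan Dice scores and a Mann-Whitney U comparison between cohorts, can be sketched as follows; the masks and cohort score distributions here are synthetic stand-ins.

```python
# Dice overlap between binary masks, plus a Mann-Whitney U test comparing
# Dice distributions across two cohorts (as in the generalizability analysis).
import numpy as np
from scipy.stats import mannwhitneyu

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print("Dice:", dice(a, b))                       # 0.5625

adult_dice = np.random.beta(8, 2, size=300)      # synthetic per-scan scores
pediatric_dice = np.random.beta(6, 3, size=359)
stat, p = mannwhitneyu(adult_dice, pediatric_dice)
print(f"U = {stat:.1f}, p = {p:.3g}")
```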
A Robust [18F]-PSMA-1007 Radiomics Ensemble Model for Prostate Cancer Risk Stratification.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-09-30 DOI: 10.1007/s10278-024-01281-w
Giovanni Pasini, Alessandro Stefano, Cristina Mantarro, Selene Richiusa, Albert Comelli, Giorgio Ivan Russo, Maria Gabriella Sabini, Sebastiano Cosentino, Massimo Ippolito, Giorgio Russo
{"title":"A Robust [<sup>18</sup>F]-PSMA-1007 Radiomics Ensemble Model for Prostate Cancer Risk Stratification.","authors":"Giovanni Pasini, Alessandro Stefano, Cristina Mantarro, Selene Richiusa, Albert Comelli, Giorgio Ivan Russo, Maria Gabriella Sabini, Sebastiano Cosentino, Massimo Ippolito, Giorgio Russo","doi":"10.1007/s10278-024-01281-w","DOIUrl":"10.1007/s10278-024-01281-w","url":null,"abstract":"<p><p>The aim of this study is to investigate the role of [<sup>18</sup>F]-PSMA-1007 PET in differentiating high- and low-risk prostate cancer (PCa) through a robust radiomics ensemble model. This retrospective study included 143 PCa patients who underwent [<sup>18</sup>F]-PSMA-1007 PET/CT imaging. PCa areas were manually contoured on PET images and 1781 image biomarker standardization initiative (IBSI)-compliant radiomics features were extracted. A 30 times iterated preliminary analysis pipeline, comprising of the least absolute shrinkage and selection operator (LASSO) for feature selection and fivefold cross-validation for model optimization, was adopted to identify the most robust features to dataset variations, select candidate models for ensemble modelling, and optimize hyperparameters. Thirteen subsets of selected features, 11 generated from the preliminary analysis plus two additional subsets, the first based on the combination of robust and fine-tuning features, and the second only on fine-tuning features were used to train the model ensemble. Accuracy, area under curve (AUC), sensitivity, specificity, precision, and f-score values were calculated to provide models' performance. Friedman test, followed by post hoc tests corrected with Dunn-Sidak correction for multiple comparisons, was used to verify if statistically significant differences were found in the different ensemble models over the 30 iterations. The model ensemble trained with the combination of robust and fine-tuning features obtained the highest average accuracy (79.52%), AUC (85.75%), specificity (84.29%), precision (82.85%), and f-score (78.26%). Statistically significant differences (p < 0.05) were found for some performance metrics. These findings support the role of [<sup>18</sup>F]-PSMA-1007 PET radiomics in improving risk stratification for PCa, by reducing dependence on biopsies.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1388-1402"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092872/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142336026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
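A hedged sketch of the LASSO-based selection step named in the abstract, run on a synthetic matrix with the stated dimensions (143 patients, 1781 IBSI features). The real pipeline iterates this 30 times with fivefold cross-validation and feeds the surviving features into an ensemble; only the single selection pass is shown here.

```python
# LASSO feature selection on a (synthetic) radiomics matrix: features with
# non-zero coefficients under cross-validated LASSO are retained.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(143, 1781))          # 143 patients x 1781 IBSI features
y = rng.integers(0, 2, size=143)          # high- vs low-risk label (synthetic)

Xz = StandardScaler().fit_transform(X)    # standardize before penalization
lasso = LassoCV(cv=5, random_state=0).fit(Xz, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features with non-zero LASSO coefficients")
```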
Identification of Bipolar Disorder and Schizophrenia Based on Brain CT and Deep Learning Methods.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-09-26 DOI: 10.1007/s10278-024-01279-4
Meilin Li, Xingyu Hou, Wanying Yan, Dawei Wang, Ruize Yu, Xixiang Li, Fuyan Li, Jinming Chen, Lingzhen Wei, Jiahao Liu, Huaizhen Wang, Qingshi Zeng
{"title":"Identification of Bipolar Disorder and Schizophrenia Based on Brain CT and Deep Learning Methods.","authors":"Meilin Li, Xingyu Hou, Wanying Yan, Dawei Wang, Ruize Yu, Xixiang Li, Fuyan Li, Jinming Chen, Lingzhen Wei, Jiahao Liu, Huaizhen Wang, Qingshi Zeng","doi":"10.1007/s10278-024-01279-4","DOIUrl":"10.1007/s10278-024-01279-4","url":null,"abstract":"<p><p>With the increasing prevalence of mental illness, accurate clinical diagnosis of mental illness is crucial. Compared with MRI, CT has the advantages of wide application, low price, short scanning time, and high patient cooperation. This study aims to construct a deep learning (DL) model based on CT images to make identification of bipolar disorder (BD) and schizophrenia (SZ). A total of 506 patients (BD = 227, SZ = 279) and 179 healthy controls (HC) was collected from January 2022 to May 2023 at two hospitals, and divided into an internal training set and an internal validation set according to a ratio of 4:1. An additional 65 patients (BD = 35, SZ = 30) and 40 HC were recruited from different hospitals, and served as an external test set. All subjects accepted the conventional brain CT examination. The DenseMD model for identify BD and SZ using multiple instance learning was developed and compared with other classical DL models. The results showed that DenseMD performed excellently with an accuracy of 0.745 in the internal validation set, whereas the accuracy of the ResNet-18, ResNeXt-50, and DenseNet-121model was 0.672, 0.664, and 0.679, respectively. For the external test set, DenseMD again outperformed other models with an accuracy of 0.724; however, the accuracy of the ResNet-18, ResNeXt-50, and DenseNet-121model was 0.657, 0.638, and 0.676, respectively. Therefore, the potential of DL models for identification of BD and SZ based on brain CT images was established, and identification ability of the DenseMD model was better than other classical DL models.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1653-1660"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092902/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142336032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
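For context, a minimal sketch of fine-tuning one of the classical baselines named above (DenseNet-121 via torchvision). The three-class head (BD/SZ/HC) and the hyperparameters are our assumptions; this is not the DenseMD model, whose multiple-instance design is not detailed in the abstract.

```python
# Fine-tuning a pretrained DenseNet-121 as a baseline CT classifier:
# swap the ImageNet head for a 3-class head and run one training step.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 3)  # BD / SZ / HC (assumed)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)            # dummy batch of CT slices
loss = criterion(model(x), torch.tensor([0, 1, 2, 0]))
loss.backward()
optimizer.step()
print(float(loss))
```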
Addressing Challenges in Skin Cancer Diagnosis: A Convolutional Swin Transformer Approach.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-10-22 DOI: 10.1007/s10278-024-01290-9
Sudha Paraddy, Virupakshappa
{"title":"Addressing Challenges in Skin Cancer Diagnosis: A Convolutional Swin Transformer Approach.","authors":"Sudha Paraddy, Virupakshappa","doi":"10.1007/s10278-024-01290-9","DOIUrl":"10.1007/s10278-024-01290-9","url":null,"abstract":"<p><p>Skin cancer is one of the top three hazardous cancer types, and it is caused by the abnormal proliferation of tumor cells. Diagnosing skin cancer accurately and early is crucial for saving patients' lives. However, it is a challenging task due to various significant issues, including lesion variations in texture, shape, color, and size; artifacts (hairs); uneven lesion boundaries; and poor contrast. To solve these issues, this research proposes a novel Convolutional Swin Transformer (CSwinformer) method for segmenting and classifying skin lesions accurately. The framework involves phases such as data preprocessing, segmentation, and classification. In the first phase, Gaussian filtering, Z-score normalization, and augmentation processes are executed to remove unnecessary noise, re-organize the data, and increase data diversity. In the phase of segmentation, we design a new model \"Swinformer-Net\" integrating Swin Transformer and U-Net frameworks, to accurately define a region of interest. At the final phase of classification, the segmented outcome is input into the newly proposed module \"Multi-Scale Dilated Convolutional Neural Network meets Transformer (MD-CNNFormer),\" where the data samples are classified into respective classes. We use four benchmark datasets-HAM10000, ISBI 2016, PH2, and Skin Cancer ISIC for evaluation. The results demonstrated the designed framework's better efficiency against the traditional approaches. The proposed method provided classification accuracy of 98.72%, pixel accuracy of 98.06%, and dice coefficient of 97.67%, respectively. The proposed method offered a promising solution in skin lesion segmentation and classification, supporting clinicians to accurately diagnose skin cancer.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1755-1775"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092923/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142485254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
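The preprocessing phase described above (Gaussian filtering, Z-score normalization, augmentation) might look like the following minimal sketch; the filter sigma and the flip/rotation augmentations are assumed choices, as the abstract does not specify them.

```python
# Preprocessing sketch: Gaussian smoothing, per-image z-score
# normalization, and simple flip/rotation augmentation.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    smoothed = gaussian_filter(img.astype(np.float32), sigma=sigma)
    return (smoothed - smoothed.mean()) / (smoothed.std() + 1e-8)

def augment(img: np.ndarray) -> list:
    # flips and a 90-degree rotation to increase data diversity
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

lesion = np.random.rand(224, 224)          # dummy dermoscopy image
variants = [preprocess(v) for v in augment(lesion)]
print(len(variants), variants[0].shape)
```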
Comparative Bladder Cancer Tissues Prediction Using Vision Transformer.
Journal of imaging informatics in medicine Pub Date: 2025-06-01 Epub Date: 2024-10-25 DOI: 10.1007/s10278-024-01228-1
Kubilay Muhammed Sunnetci, Faruk Enes Oguz, Mahmut Nedim Ekersular, Nadide Gulsah Gulenc, Mahmut Ozturk, Ahmet Alkan
{"title":"Comparative Bladder Cancer Tissues Prediction Using Vision Transformer.","authors":"Kubilay Muhammed Sunnetci, Faruk Enes Oguz, Mahmut Nedim Ekersular, Nadide Gulsah Gulenc, Mahmut Ozturk, Ahmet Alkan","doi":"10.1007/s10278-024-01228-1","DOIUrl":"10.1007/s10278-024-01228-1","url":null,"abstract":"<p><p>Bladder cancer, often asymptomatic in the early stages, is a type of cancer where early detection is crucial. Herein, endoscopic images are meticulously evaluated by experts, and sometimes even by different disciplines, to identify tissue types. It is believed that the time spent by experts can be utilized for patient treatment with the creation of a computer-aided decision support system. For this purpose, in this study, it is evaluated that the performances of three models proposed using the bladder tissue dataset. The first model is a convolutional neural network (CNN)-based deep learning (DL) network, and the second is a model named hybrid cnn-machine learning (ML) or DL + ML, which involves classifying deep features obtained from a CNN-based network with ML. The last one, and the one that achieved the best performance metrics, is a vision transformer (ViT) architecture. Furthermore, a graphical user interface (GUI) is provided for an accessible decision support system. As a result, accuracy and F1 score values for DL, DL + ML, and ViT models are 0.9086-0.8971-0.9257 and 0.8884-0.8496-0.8931, respectively.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1722-1733"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092318/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142515942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
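A minimal sketch of the hybrid DL + ML idea described above: deep features from a pretrained CNN backbone are classified with a classical ML method. The ResNet-18 backbone and SVM classifier are assumptions, since the abstract does not name the specific components.

```python
# Hybrid DL + ML sketch: extract penultimate-layer CNN features, then
# train a classical classifier on them.
import torch
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose 512-d penultimate features
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(8, 3, 224, 224)).numpy()  # dummy endoscopic batch
labels = [0, 1, 0, 1, 1, 0, 0, 1]          # synthetic tissue-type labels

clf = SVC().fit(feats, labels)             # ML classifier on deep features
print(clf.predict(feats[:2]))
```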