Journal of imaging informatics in medicine: Latest Articles

Artificial Intelligence for Otosclerosis Detection: A Pilot Study.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-26 DOI: 10.1007/s10278-024-01079-w
Antoine Emin, Sophie Daubié, Loïc Gaillandre, Arthur Aouad, Jean Baptiste Pialat, Valentin Favier, Florent Carsuzaa, Stéphane Tringali, Maxime Fieux
{"title":"Artificial Intelligence for Otosclerosis Detection: A Pilot Study.","authors":"Antoine Emin, Sophie Daubié, Loïc Gaillandre, Arthur Aouad, Jean Baptiste Pialat, Valentin Favier, Florent Carsuzaa, Stéphane Tringali, Maxime Fieux","doi":"10.1007/s10278-024-01079-w","DOIUrl":"10.1007/s10278-024-01079-w","url":null,"abstract":"<p><p>The gold standard for otosclerosis diagnosis, aside from surgery, is high-resolution temporal bone computed tomography (TBCT), but it can be compromised by the small size of the lesions. Many artificial intelligence (AI) algorithms exist, but they are not yet used in daily practice for otosclerosis diagnosis. The aim was to evaluate the diagnostic performance of AI in the detection of otosclerosis. This case-control study included patients with otosclerosis surgically confirmed (2010-2020) and control patients who underwent TBCT and for whom radiological data were available. The AI algorithm interpreted the TBCT to assign a positive or negative diagnosis of otosclerosis. A double-blind reading was then performed by two trained radiologists, and the diagnostic performances were compared according to the best combination of sensitivity and specificity (Youden index). A total of 274 TBCT were included (174 TBCT cases and 100 TBCT controls). For the AI algorithm, the best combination of sensitivity and specificity was 79% and 98%, with an ideal diagnostic probability value estimated by the Youden index at 59%. For radiological analysis, sensitivity was 84% and specificity 98%. The diagnostic performance of the AI algorithm was comparable to that of a trained radiologist, although the sensitivity at the estimated ideal threshold was lower.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2931-2939"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612047/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141461531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
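The study picks its operating point by maximizing the Youden index, J = sensitivity + specificity - 1, over candidate probability thresholds. A minimal sketch of that selection, assuming scikit-learn and hypothetical labels and probabilities (not the study's data or code):

```python
# Youden-index threshold selection, illustrative only.
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical labels (1 = otosclerosis) and model probabilities.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
y_prob = np.array([0.91, 0.62, 0.15, 0.80, 0.55, 0.05, 0.70, 0.30, 0.88, 0.20])

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
j = tpr - fpr  # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"threshold={thresholds[best]:.2f} "
      f"sensitivity={tpr[best]:.2f} specificity={1 - fpr[best]:.2f}")
```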
Cross-Modality Reference and Feature Mutual-Projection for 3D Brain MRI Image Super-Resolution.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-03 DOI: 10.1007/s10278-024-01139-1
Lulu Wang, Wanqi Zhang, Wei Chen, Zhongshi He, Yuanyuan Jia, Jinglong Du
{"title":"Cross-Modality Reference and Feature Mutual-Projection for 3D Brain MRI Image Super-Resolution.","authors":"Lulu Wang, Wanqi Zhang, Wei Chen, Zhongshi He, Yuanyuan Jia, Jinglong Du","doi":"10.1007/s10278-024-01139-1","DOIUrl":"10.1007/s10278-024-01139-1","url":null,"abstract":"<p><p>High-resolution (HR) magnetic resonance imaging (MRI) can reveal rich anatomical structures for clinical diagnoses. However, due to hardware and signal-to-noise ratio limitations, MRI images are often collected with low resolution (LR) which is not conducive to diagnosing and analyzing clinical diseases. Recently, deep learning super-resolution (SR) methods have demonstrated great potential in enhancing the resolution of MRI images; however, most of them did not take the cross-modality and internal priors of MR seriously, which hinders the SR performance. In this paper, we propose a cross-modality reference and feature mutual-projection (CRFM) method to enhance the spatial resolution of brain MRI images. Specifically, we feed the gradients of HR MRI images from referenced imaging modality into the SR network to transform true clear textures to LR feature maps. Meanwhile, we design a plug-in feature mutual-projection (FMP) method to capture the cross-scale dependency and cross-modality similarity details of MRI images. Finally, we fuse all feature maps with parallel attentions to produce and refine the HR features adaptively. Extensive experiments on MRI images in the image domain and k-space show that our CRFM method outperforms existing state-of-the-art MRI SR methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2838-2851"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612118/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
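The central prior in CRFM is that sharp gradients from an HR reference modality can guide super-resolution of an LR target. A hedged sketch of that idea, using 2-D slices, Sobel filters, and plain channel concatenation as stand-ins for the paper's 3-D network and fusion modules (all shapes hypothetical):

```python
# Cross-modality gradient prior, simplified to 2-D slices.
import torch
import torch.nn.functional as F

def gradient_maps(img: torch.Tensor) -> torch.Tensor:
    """img: (B, 1, H, W); returns (B, 2, H, W) horizontal/vertical Sobel responses."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # vertical Sobel is the transpose of the horizontal one
    return torch.cat([F.conv2d(img, kx, padding=1), F.conv2d(img, ky, padding=1)], dim=1)

hr_reference = torch.rand(1, 1, 240, 240)  # HR slice from the reference modality
lr_target = torch.rand(1, 1, 120, 120)     # LR slice of the modality to super-resolve

grads = gradient_maps(hr_reference)        # "true clear textures" used as a prior
lr_up = F.interpolate(lr_target, size=(240, 240), mode="bicubic", align_corners=False)
sr_input = torch.cat([lr_up, grads], dim=1)  # hypothetical input to the SR network
print(sr_input.shape)                        # torch.Size([1, 3, 240, 240])
```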
MedYOLO: A Medical Image Object Detection Framework.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-06 DOI: 10.1007/s10278-024-01138-2
Joseph Sobek, Jose R Medina Inojosa, Betsy J Medina Inojosa, S M Rassoulinejad-Mousavi, Gian Marco Conte, Francisco Lopez-Jimenez, Bradley J Erickson
{"title":"MedYOLO: A Medical Image Object Detection Framework.","authors":"Joseph Sobek, Jose R Medina Inojosa, Betsy J Medina Inojosa, S M Rassoulinejad-Mousavi, Gian Marco Conte, Francisco Lopez-Jimenez, Bradley J Erickson","doi":"10.1007/s10278-024-01138-2","DOIUrl":"10.1007/s10278-024-01138-2","url":null,"abstract":"<p><p>Artificial intelligence-enhanced identification of organs, lesions, and other structures in medical imaging is typically done using convolutional neural networks (CNNs) designed to make voxel-accurate segmentations of the region of interest. However, the labels required to train these CNNs are time-consuming to generate and require attention from subject matter experts to ensure quality. For tasks where voxel-level precision is not required, object detection models offer a viable alternative that can reduce annotation effort. Despite this potential application, there are few options for general-purpose object detection frameworks available for 3-D medical imaging. We report on MedYOLO, a 3-D object detection framework using the one-shot detection method of the YOLO family of models and designed for use with medical imaging. We tested this model on four different datasets: BRaTS, LIDC, an abdominal organ Computed tomography (CT) dataset, and an ECG-gated heart CT dataset. We found our models achieve high performance on a diverse range of structures even without hyperparameter tuning, reaching mean average precision (mAP) at intersection over union (IoU) 0.5 of 0.861 on BRaTS, 0.715 on the abdominal CT dataset, and 0.995 on the heart CT dataset. However, the models struggle with some structures, failing to converge on LIDC resulting in a mAP@0.5 of 0.0.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3208-3216"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612059/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
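The mAP figures above count a predicted box as correct when its 3-D intersection over union with a ground-truth box reaches 0.5. A minimal sketch of that matching criterion for axis-aligned boxes (illustrative only; MedYOLO's own utilities are not reproduced here):

```python
# 3-D IoU for axis-aligned boxes given as (x1, y1, z1, x2, y2, z2).
import numpy as np

def iou_3d(a: np.ndarray, b: np.ndarray) -> float:
    lo = np.maximum(a[:3], b[:3])                 # intersection min corner
    hi = np.minimum(a[3:], b[3:])                 # intersection max corner
    inter = np.prod(np.clip(hi - lo, 0, None))    # zero if boxes do not overlap
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return float(inter / (vol_a + vol_b - inter))

print(iou_3d(np.array([0, 0, 0, 10, 10, 10]),
             np.array([5, 5, 5, 15, 15, 15])))   # 125 / 1875 ≈ 0.067
```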
Advanced Gastric Cancer: CT Radiomics Prediction of Lymph Node Metastasis After Neoadjuvant Chemotherapy.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-17 DOI: 10.1007/s10278-024-01148-0
Jia Sun, Zhilong Wang, Haitao Zhu, Qi Yang, Yingshi Sun
{"title":"Advanced Gastric Cancer: CT Radiomics Prediction of Lymph Modes Metastasis After Neoadjuvant Chemotherapy.","authors":"Jia Sun, Zhilong Wang, Haitao Zhu, Qi Yang, Yingshi Sun","doi":"10.1007/s10278-024-01148-0","DOIUrl":"10.1007/s10278-024-01148-0","url":null,"abstract":"<p><p>This study aims to create and assess machine learning models for predicting lymph node metastases following neoadjuvant treatment in advanced gastric cancer (AGC) using baseline and restaging computed tomography (CT). We evaluated CT images and pathological data from 158 patients with resected stomach cancer from two institutions in this retrospective analysis. Patients were eligible for inclusion if they had histologically proven gastric cancer. They had received neoadjuvant chemotherapy, with at least 15 lymph nodes removed. All patients received baseline and preoperative abdominal CT and had complete clinicopathological reports. They were divided into two cohorts: (a) the primary cohort (n = 125) for model creation and (b) the testing cohort (n = 33) for evaluating models' capacity to predict the existence of lymph node metastases. The diagnostic ability of the radiomics-model for lymph node metastasis was compared to traditional CT morphological diagnosis by radiologist. The radiomics model based on the baseline and preoperative CT images produced encouraging results in the training group (AUC 0.846) and testing cohort (AUC 0.843). In the training cohort, the sensitivity and specificity were 81.3% and 77.8%, respectively, whereas in the testing cohort, they were 84% and 75%. The diagnostic sensitivity and specificity of the radiologist were 70% and 42.2% (using baseline CT) and 46.3% and 62.2% (using preoperative CT). In particular, the specificity of radiomics model was higher than that of conventional CT in diagnosing N0 cases (no lymph node metastasis). The CT-based radiomics model could assess lymph node metastasis more accurately than traditional CT imaging in AGC patients following neoadjuvant chemotherapy.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2910-2919"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612076/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141422414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
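A radiomics pipeline of this kind computes handcrafted intensity and texture features inside a delineated region of the CT volume and feeds them to a classifier. A hedged sketch using pyradiomics; the paper does not name its software, so the tooling, feature classes, and file paths here are assumptions:

```python
# Radiomic feature extraction from a CT volume plus a segmentation mask.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("glcm")        # gray-level co-occurrence texture

# Hypothetical baseline CT volume and lesion/node mask in NIfTI format.
features = extractor.execute("baseline_ct.nii.gz", "lesion_mask.nii.gz")
vector = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(vector), "features per lesion, ready for a classifier")
```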
Multi-scale Lesion Feature Fusion and Location-Aware for Chest Multi-disease Detection.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-05-17 DOI: 10.1007/s10278-024-01133-7
Yubo Yuan, Lijun Liu, Xiaobing Yang, Li Liu, Qingsong Huang
{"title":"Multi-scale Lesion Feature Fusion and Location-Aware for Chest Multi-disease Detection.","authors":"Yubo Yuan, Lijun Liu, Xiaobing Yang, Li Liu, Qingsong Huang","doi":"10.1007/s10278-024-01133-7","DOIUrl":"10.1007/s10278-024-01133-7","url":null,"abstract":"<p><p>Accurately identifying and locating lesions in chest X-rays has the potential to significantly enhance diagnostic efficiency, quality, and interpretability. However, current methods primarily focus on detecting of specific diseases in chest X-rays, disregarding the presence of multiple diseases in a single chest X-ray scan. Moreover, the diversity in lesion locations and attributes introduces complexity in accurately discerning specific traits for each lesion, leading to diminished accuracy when detecting multiple diseases. To address these issues, we propose a novel detection framework that enhances multi-scale lesion feature extraction and fusion, improving lesion position perception and subsequently boosting chest multi-disease detection performance. Initially, we construct a multi-scale lesion feature extraction network to tackle the uniqueness of various lesion features and locations, strengthening the global semantic correlation between lesion features and their positions. Following this, we introduce an instance-aware semantic enhancement network that dynamically amalgamates instance-specific features with high-level semantic representations across various scales. This adaptive integration effectively mitigates the loss of detailed information within lesion regions. Additionally, we perform lesion region feature mapping using candidate boxes to preserve crucial positional information, enhancing the accuracy of chest disease detection across multiple scales. Experimental results on the VinDr-CXR dataset reveal a 6% increment in mean average precision (mAP) and an 8.4% improvement in mean recall (mR) when compared to state-of-the-art baselines. This demonstrates the effectiveness of the model in accurately detecting multiple chest diseases by capturing specific features and location information.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2752-2767"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140961415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
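Multi-scale fusion of the kind described typically follows the feature-pyramid pattern: project each backbone scale to a common width, then merge coarse semantics into finer maps top-down. A toy analogue of that pattern (the paper's instance-aware network is more elaborate; shapes and widths here are hypothetical):

```python
# FPN-style top-down fusion of a three-level feature pyramid.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), width=256):
        super().__init__()
        self.lateral = nn.ModuleList([nn.Conv2d(c, width, 1) for c in in_channels])
        self.smooth = nn.ModuleList([nn.Conv2d(width, width, 3, padding=1)
                                     for _ in in_channels])

    def forward(self, feats):  # feats ordered fine -> coarse
        x = [lat(f) for lat, f in zip(self.lateral, feats)]
        for i in range(len(x) - 2, -1, -1):  # merge coarse semantics downward
            x[i] = x[i] + F.interpolate(x[i + 1], size=x[i].shape[-2:], mode="nearest")
        return [sm(v) for sm, v in zip(self.smooth, x)]

feats = [torch.rand(1, 256, 64, 64),    # fine scale from a chest X-ray backbone
         torch.rand(1, 512, 32, 32),
         torch.rand(1, 1024, 16, 16)]   # coarse scale
print([t.shape for t in TopDownFusion()(feats)])
```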
Image Analysis Using the Fluorescence Imaging of Nuclear Staining (FINS) Algorithm.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-17 DOI: 10.1007/s10278-024-01097-8
Laura R Bramwell, Jack Spencer, Ryan Frankum, Emad Manni, Lorna W Harries
{"title":"Image Analysis Using the Fluorescence Imaging of Nuclear Staining (FINS) Algorithm.","authors":"Laura R Bramwell, Jack Spencer, Ryan Frankum, Emad Manni, Lorna W Harries","doi":"10.1007/s10278-024-01097-8","DOIUrl":"10.1007/s10278-024-01097-8","url":null,"abstract":"<p><p>Finding appropriate image analysis techniques for a particular purpose can be difficult. In the context of the analysis of immunocytochemistry images, where the key information lies in the number of nuclei containing co-localised fluorescent signals from a marker of interest, researchers often opt to use manual counting techniques because of the paucity of available tools. Here, we present the development and validation of the Fluorescence Imaging of Nuclear Staining (FINS) algorithm for the quantification of fluorescent signals from immunocytochemically stained cells. The FINS algorithm is based on a variational segmentation of the nuclear stain channel and an iterative thresholding procedure to count co-localised fluorescent signals from nuclear proteins in other channels. We present experimental results comparing the FINS algorithm to the manual counts of seven researchers across a dataset of three human primary cell types which are immunocytochemically stained for a nuclear marker (DAPI), a biomarker of cellular proliferation (Ki67), and a biomarker of DNA damage (γH2AX). The quantitative performance of the algorithm is analysed in terms of consistency with the manual count data and acquisition time. The FINS algorithm produces data consistent with that achieved by manual counting but improves the process by reducing subjectivity and time. The algorithm is simple to use, based on software that is omnipresent in academia, and allows data review with its simple, intuitive user interface. We hope that, as the FINS tool is open-source and is custom-built for this specific application, it will streamline the analysis of immunocytochemical images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3077-3089"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11641597/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141422416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
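The counting logic of FINS can be paraphrased as: segment and label nuclei in the nuclear-stain channel, then call a nucleus positive when the marker signal within it clears a threshold. A minimal sketch with scikit-image, in which Otsu thresholds stand in for the paper's variational segmentation and iterative thresholding:

```python
# Count marker-positive nuclei from a DAPI channel and a marker channel.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_positive_nuclei(dapi: np.ndarray, marker: np.ndarray):
    nuclei = label(dapi > threshold_otsu(dapi))       # segmented, labeled nuclei
    cutoff = threshold_otsu(marker)
    positive = sum(1 for r in regionprops(nuclei, intensity_image=marker)
                   if r.mean_intensity > cutoff)      # co-localised signal?
    return positive, int(nuclei.max())

rng = np.random.default_rng(0)
dapi = rng.random((256, 256))   # hypothetical DAPI channel
ki67 = rng.random((256, 256))   # hypothetical Ki67 channel
pos, total = count_positive_nuclei(dapi, ki67)
print(f"{pos}/{total} nuclei positive for the marker")
```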
Detecting Alzheimer's Disease Stages and Frontotemporal Dementia in Time Courses of Resting-State fMRI Data Using a Machine Learning Approach.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-05-23 DOI: 10.1007/s10278-024-01101-1
Mohammad Amin Sadeghi, Daniel Stevens, Shinjini Kundu, Rohan Sanghera, Richard Dagher, Vivek Yedavalli, Craig Jones, Haris Sair, Licia P Luna
{"title":"Detecting Alzheimer's Disease Stages and Frontotemporal Dementia in Time Courses of Resting-State fMRI Data Using a Machine Learning Approach.","authors":"Mohammad Amin Sadeghi, Daniel Stevens, Shinjini Kundu, Rohan Sanghera, Richard Dagher, Vivek Yedavalli, Craig Jones, Haris Sair, Licia P Luna","doi":"10.1007/s10278-024-01101-1","DOIUrl":"10.1007/s10278-024-01101-1","url":null,"abstract":"<p><p>Early, accurate diagnosis of neurodegenerative dementia subtypes such as Alzheimer's disease (AD) and frontotemporal dementia (FTD) is crucial for the effectiveness of their treatments. However, distinguishing these conditions becomes challenging when symptoms overlap or the conditions present atypically. Resting-state fMRI (rs-fMRI) studies have demonstrated condition-specific alterations in AD, FTD, and mild cognitive impairment (MCI) compared to healthy controls (HC). Here, we used machine learning to build a diagnostic classification model based on these alterations. We curated all rs-fMRIs and their corresponding clinical information from the ADNI and FTLDNI databases. Imaging data underwent preprocessing, time course extraction, and feature extraction in preparation for the analyses. The imaging features data and clinical variables were fed into gradient-boosted decision trees with fivefold nested cross-validation to build models that classified four groups: AD, FTD, HC, and MCI. The mean and 95% confidence intervals for model performance metrics were calculated using the unseen test sets in the cross-validation rounds. The model built using only imaging features achieved 74.4% mean balanced accuracy, 0.94 mean macro-averaged AUC, and 0.73 mean macro-averaged F1 score. It accurately classified FTD (F1 = 0.99), HC (F1 = 0.99), and MCI (F1 = 0.86) fMRIs but mostly misclassified AD scans as MCI (F1 = 0.08). Adding clinical variables to model inputs raised balanced accuracy to 91.1%, macro-averaged AUC to 0.99, macro-averaged F1 score to 0.92, and improved AD classification accuracy (F1 = 0.74). In conclusion, a multimodal model based on rs-fMRI and clinical data accurately differentiates AD-MCI vs. FTD vs. HC.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2768-2783"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612109/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
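The evaluation protocol pairs gradient-boosted trees with fivefold nested cross-validation: hyperparameters are tuned on inner folds and scored on outer folds the search never sees. A hedged sketch of that protocol; scikit-learn and the synthetic four-class data are assumptions standing in for the paper's toolchain and fMRI features:

```python
# Nested fivefold cross-validation around a gradient-boosted classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for imaging features over four classes (AD, FTD, HC, MCI).
X, y = make_classification(n_samples=200, n_features=50, n_classes=4,
                           n_informative=20, random_state=0)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # tuning folds
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # scoring folds

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [2, 3]},
    cv=inner, scoring="balanced_accuracy")

scores = cross_val_score(search, X, y, cv=outer, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```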
Classification of Caries Based on CBCT: A Deep Learning Network Interpretability Study.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-05-28 DOI: 10.1007/s10278-024-01143-5
Surong Chen, Yan Yang, Weiwei Wu, Ruonan Wei, Zezhou Wang, Franklin R Tay, Jingyu Hu, Jingzhi Ma
{"title":"Classification of Caries Based on CBCT: A Deep Learning Network Interpretability Study.","authors":"Surong Chen, Yan Yang, Weiwei Wu, Ruonan Wei, Zezhou Wang, Franklin R Tay, Jingyu Hu, Jingzhi Ma","doi":"10.1007/s10278-024-01143-5","DOIUrl":"10.1007/s10278-024-01143-5","url":null,"abstract":"<p><p>This study aimed to create a caries classification scheme based on cone-beam computed tomography (CBCT) and develop two deep learning models to improve caries classification accuracy. A total of 2713 axial slices were obtained from CBCT images of 204 carious teeth. Both classification models were trained and tested using the same pretrained classification networks on the dataset, including ResNet50_vd, MobileNetV3_large_ssld, and ResNet50_vd_ssld. The first model was used directly to classify the original images (direct classification model). The second model incorporated a presegmentation step for interpretation (interpretable classification model). Performance evaluation metrics including accuracy, precision, recall, and F1 score were calculated. The Local Interpretable Model-agnostic Explanations (LIME) method was employed to elucidate the decision-making process of the two models. In addition, a minimum distance between caries and pulp was introduced for determining the treatment strategies for type II carious teeth. The direct model that utilized the ResNet50_vd_ssld network achieved top accuracy, precision, recall, and F1 score of 0.700, 0.786, 0.606, and 0.616, respectively. Conversely, the interpretable model consistently yielded metrics surpassing 0.917, irrespective of the network employed. The LIME algorithm confirmed the interpretability of the classification models by identifying key image features for caries classification. Evaluation of treatment strategies for type II carious teeth revealed a significant negative correlation (p < 0.01) with the minimum distance. These results demonstrated that the CBCT-based caries classification scheme and the two classification models appeared to be acceptable tools for the diagnosis and categorization of dental caries.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3160-3173"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612060/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141163122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
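LIME explains one prediction at a time by perturbing superpixels of the input and fitting a local surrogate model to the classifier's responses. A minimal sketch with the lime package; the classifier below is a hypothetical placeholder, not the paper's trained network:

```python
# LIME explanation for a single image classification.
import numpy as np
from lime import lime_image

def classifier_fn(images: np.ndarray) -> np.ndarray:
    """Placeholder model: maps a batch of (H, W, 3) images to 3-class probabilities."""
    scores = images.mean(axis=(1, 2))              # (N, 3) pseudo-logits, not a real net
    return scores / scores.sum(axis=1, keepdims=True)

cbct_slice = np.random.rand(128, 128, 3)           # hypothetical axial slice as RGB

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(cbct_slice, classifier_fn,
                                         top_labels=3, num_samples=100)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
print("superpixels supporting the top class:", np.unique(mask))
```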
Enhancing Skin Cancer Diagnosis Using Swin Transformer with Hybrid Shifted Window-Based Multi-head Self-attention and SwiGLU-Based MLP.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-05 DOI: 10.1007/s10278-024-01140-8
Ishak Pacal, Melek Alaftekin, Ferhat Devrim Zengul
{"title":"Enhancing Skin Cancer Diagnosis Using Swin Transformer with Hybrid Shifted Window-Based Multi-head Self-attention and SwiGLU-Based MLP.","authors":"Ishak Pacal, Melek Alaftekin, Ferhat Devrim Zengul","doi":"10.1007/s10278-024-01140-8","DOIUrl":"10.1007/s10278-024-01140-8","url":null,"abstract":"<p><p>Skin cancer is one of the most frequently occurring cancers worldwide, and early detection is crucial for effective treatment. Dermatologists often face challenges such as heavy data demands, potential human errors, and strict time limits, which can negatively affect diagnostic outcomes. Deep learning-based diagnostic systems offer quick, accurate testing and enhanced research capabilities, providing significant support to dermatologists. In this study, we enhanced the Swin Transformer architecture by implementing the hybrid shifted window-based multi-head self-attention (HSW-MSA) in place of the conventional shifted window-based multi-head self-attention (SW-MSA). This adjustment enables the model to more efficiently process areas of skin cancer overlap, capture finer details, and manage long-range dependencies, while maintaining memory usage and computational efficiency during training. Additionally, the study replaces the standard multi-layer perceptron (MLP) in the Swin Transformer with a SwiGLU-based MLP, an upgraded version of the gated linear unit (GLU) module, to achieve higher accuracy, faster training speeds, and better parameter efficiency. The modified Swin model-base was evaluated using the publicly accessible ISIC 2019 skin dataset with eight classes and was compared against popular convolutional neural networks (CNNs) and cutting-edge vision transformer (ViT) models. In an exhaustive assessment on the unseen test dataset, the proposed Swin-Base model demonstrated exceptional performance, achieving an accuracy of 89.36%, a recall of 85.13%, a precision of 88.22%, and an F1-score of 86.65%, surpassing all previously reported research and deep learning models documented in the literature.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"3174-3192"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612041/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141263528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
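A SwiGLU-based MLP gates one linear projection with the SiLU (swish) of another before projecting back to the model width, replacing the Swin block's two-layer GELU MLP. A hedged sketch of such a block (dimensions illustrative, not the paper's configuration):

```python
# SwiGLU-style MLP block of the kind swapped into a Swin Transformer stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden)   # gate branch
        self.w_up = nn.Linear(dim, hidden)     # value branch
        self.w_down = nn.Linear(hidden, dim)   # projection back to model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

tokens = torch.rand(2, 49, 96)                 # (batch, window tokens, embed dim)
print(SwiGLUMLP(96, 256)(tokens).shape)        # torch.Size([2, 49, 96])
```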
Celebrating 10 Years of the HIMSS-SIIM Enterprise Imaging Community and Enterprise Imaging Informatics.
Journal of imaging informatics in medicine Pub Date: 2024-12-01 Epub Date: 2024-06-10 DOI: 10.1007/s10278-024-01141-7
Christopher J Roth, Cheryl A Petersilge, Dawn Cram, Kim Garriott, Lou Lannum, Cheryl K Carey, Nikki Medina, Tammy Kwiatkoski, James T Whitfill, Alexander J Towbin
{"title":"Celebrating 10 Years of the HIMSS-SIIM Enterprise Imaging Community and Enterprise Imaging Informatics.","authors":"Christopher J Roth, Cheryl A Petersilge, Dawn Cram, Kim Garriott, Lou Lannum, Cheryl K Carey, Nikki Medina, Tammy Kwiatkoski, James T Whitfill, Alexander J Towbin","doi":"10.1007/s10278-024-01141-7","DOIUrl":"10.1007/s10278-024-01141-7","url":null,"abstract":"<p><p>In response to the growing recognition of enterprise imaging as a critical component of healthcare's digital transformation, in 2014, the Healthcare Information and Management Systems Society (HIMSS) and the Society for Imaging Informatics in Medicine (SIIM) signed a Memorandum of Understanding to form the HIMSS-SIIM Enterprise Imaging Community (HSEIC). At the time of the agreement, the two organizations decided to collaborate to lead enterprise imaging development, advancement, and adoption. This paper celebrates the past 10 years of the HSEIC's thought leadership, industry partnerships, and impact while also looking ahead to identify enterprise imaging challenges to solve in the next decade.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"2722-2728"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612066/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141302289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0