Journal of imaging informatics in medicine: Latest Articles

DECODE-3DViz: Efficient WebGL-Based High-Fidelity Visualization of Large-Scale Images using Level of Detail and Data Chunk Streaming.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01430-9
Mohammed A AboArab, Vassiliki T Potsika, Andrzej Skalski, Maciej Stanuch, George Gkois, Igor Koncar, David Matejevic, Alexis Theodorou, Sylvia Vagena, Fragiska Sigala, Dimitrios I Fotiadis
The DECODE-3DViz pipeline represents a major advancement in the web-based visualization of large-scale medical imaging data, particularly for peripheral artery computed tomography images. This research addresses the critical challenges of rendering high-resolution volumetric datasets via WebGL technology. By integrating progressive chunk streaming and level of detail (LOD) algorithms, DECODE-3DViz optimizes the rendering process for real-time interaction and high-fidelity visualization. The system efficiently manages WebGL texture size constraints and browser memory limitations, ensuring smooth performance even with extensive datasets. A comparative evaluation against state-of-the-art visualization tools demonstrates DECODE-3DViz's superior performance, achieving up to a 98% reduction in rendering time compared with competitors and maintaining a high frame rate of up to 144 FPS. Furthermore, the system exhibits exceptional GPU memory efficiency, utilizing as little as 2.6 MB on desktops, significantly less than the over 100 MB required by other tools. User feedback, collected through a comprehensive questionnaire, revealed high satisfaction with the tool's performance, particularly in areas such as structure definition and diagnostic capability, with an average score of 4.3 out of 5. These enhancements enable detailed and accurate visualizations of the peripheral vasculature, improving diagnostic accuracy and supporting better clinical outcomes. The DECODE-3DViz tool is open source and can be accessed at https://github.com/mohammed-abo-arab/3D_WebGL_VolumeRendering.git.
Citations: 0
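As a rough illustration of the level-of-detail and chunk-streaming logic the abstract describes, here is a minimal Python sketch. The texture and memory budgets, function names, and chunk size are assumptions for illustration, not details from the paper.

```python
import math

# Hypothetical illustration of LOD selection under a WebGL texture budget:
# pick the smallest power-of-two downsampling factor whose volume fits the
# limits, then yield fixed-size chunks for progressive streaming.

MAX_TEXTURE_DIM = 2048          # typical WebGL2 3D-texture edge limit
MAX_GPU_BYTES = 256 * 2**20     # assumed browser GPU memory budget

def choose_lod(shape, bytes_per_voxel=1):
    """Return the smallest downsampling factor that fits the budgets."""
    factor = 1
    while True:
        dims = [math.ceil(s / factor) for s in shape]
        n_bytes = dims[0] * dims[1] * dims[2] * bytes_per_voxel
        if max(dims) <= MAX_TEXTURE_DIM and n_bytes <= MAX_GPU_BYTES:
            return factor, dims
        factor *= 2

def iter_chunks(dims, chunk=64):
    """Yield (z, y, x) chunk origins covering the downsampled volume."""
    for z in range(0, dims[0], chunk):
        for y in range(0, dims[1], chunk):
            for x in range(0, dims[2], chunk):
                yield z, y, x

if __name__ == "__main__":
    factor, dims = choose_lod((4096, 1024, 1024))
    print(f"LOD factor {factor} -> dims {dims}")
    print("first chunks:", list(iter_chunks(dims))[:3])
```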
Estimating the Amount of Air Inside the Stomach for Detecting Cancers on Gastric Radiographs Using Artificial Intelligence: an Observational, Cross-sectional Study.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01441-6
Chiharu Kai, Takahiro Irie, Yuuki Kobayashi, Hideaki Tamori, Satoshi Kondo, Akifumi Yoshida, Yuta Hirono, Ikumi Sato, Kunihiko Oochi, Satoshi Kasai
Gastric radiography is an important tool for early detection of cancer. During gastric radiography, the stomach is monitored using barium and effervescent granules. However, stomach compression and physiological phenomena during the examination can cause air to escape the stomach. When the stomach contracts, physicians cannot accurately observe its condition, which may result in missed lesions. Notably, no research using artificial intelligence (AI) has explored the use of gastric radiography to estimate the amount of air in the stomach. Therefore, this study aimed to develop an AI system to estimate the amount of air inside the stomach using gastric radiographs. In this observational, cross-sectional study, we collected data from 300 cases who underwent medical screening and identified images with poor stomach air volume. We used pre-trained vision transformer (ViT) and convolutional neural network (CNN) models. Instead of retraining, dimensionality reduction was performed on the output features using principal component analysis, and LightGBM performed the discriminative processing. The combination of ViT and CNN resulted in the highest accuracy (F-value 0.792, accuracy 0.943, sensitivity 0.738, specificity 0.978). High accuracy was maintained in the prone position, where air inside the stomach is easily released. Combining ViT and CNN features from gastric radiographs accurately identified cases of poor stomach air volume. The system was highly accurate in the prone position and proved clinically useful. The developed AI can be used to provide high-quality images to physicians and to prevent missed lesions.
Citations: 0
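A minimal sketch of the pipeline shape the abstract describes: frozen backbone features, PCA dimensionality reduction, then a LightGBM discriminator. The feature dimensions and random stand-in arrays are assumptions; in the paper the features would come from the pre-trained ViT and CNN.

```python
import numpy as np
from sklearn.decomposition import PCA
from lightgbm import LGBMClassifier

# Stand-in features: in the study these would be extracted by frozen
# pre-trained ViT and CNN backbones applied to gastric radiographs.
rng = np.random.default_rng(0)
vit_feats = rng.normal(size=(300, 768))   # e.g., ViT CLS embeddings
cnn_feats = rng.normal(size=(300, 2048))  # e.g., CNN global-pool features
y = rng.integers(0, 2, size=300)          # 1 = poor stomach air volume

X = np.hstack([vit_feats, cnn_feats])          # combine the two backbones
X_red = PCA(n_components=50).fit_transform(X)  # dimensionality reduction

clf = LGBMClassifier(n_estimators=200)
clf.fit(X_red[:240], y[:240])                  # simple holdout split
print("accuracy:", (clf.predict(X_red[240:]) == y[240:]).mean())
```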
3D Wasserstein Generative Adversarial Network with Dense U-Net-Based Discriminator for Preclinical fMRI Denoising.
Journal of imaging informatics in medicine Pub Date : 2025-02-12 DOI: 10.1007/s10278-025-01434-5
Sima Soltanpour, Arnold Chang, Dan Madularu, Praveen Kulkarni, Craig Ferris, Chris Joslin
Functional magnetic resonance imaging (fMRI) is extensively used in clinical and preclinical settings to study brain function; however, fMRI data is inherently noisy due to physiological processes, hardware, and external noise. Denoising is one of the main preprocessing steps in any fMRI analysis pipeline. This process is more challenging for preclinical data than for clinical data due to variations in brain geometry, image resolution, and low signal-to-noise ratios. In this paper, we propose a structure-preserving algorithm based on a 3D Wasserstein generative adversarial network with a 3D dense U-Net-based discriminator, called 3D U-WGAN. We apply a 4D data configuration to effectively denoise temporal and spatial information in preclinical fMRI data. GAN-based denoising methods often utilize a discriminator to identify significant differences between denoised and noise-free images, focusing on global or local features. To refine the fMRI denoising model, our method employs a 3D dense U-Net discriminator to learn both global and local distinctions. To tackle potential oversmoothing, we introduce an adversarial loss and enhance perceptual similarity by measuring feature-space distances. Experiments illustrate that 3D U-WGAN significantly improves image quality in resting-state and task preclinical fMRI data, enhancing the signal-to-noise ratio without the excessive structural changes introduced by existing methods. The proposed method outperforms state-of-the-art methods when applied to simulated and real data in an fMRI analysis pipeline.
Citations: 0
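A toy sketch of the kind of generator objective the abstract outlines: a Wasserstein adversarial term combined with a feature-space (perceptual) distance taken from an intermediate critic layer. The tiny 3D critic and the loss weighting below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TinyCritic(nn.Module):
    """Stand-in 3D critic; the paper uses a 3D dense U-Net discriminator."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv3d(16, 1, 3, padding=1)

    def forward(self, x):
        f = self.features(x)   # intermediate features reused for perceptual loss
        return self.head(f).mean(dim=[1, 2, 3, 4]), f

critic = TinyCritic()
denoised = torch.randn(2, 1, 16, 16, 16, requires_grad=True)  # generator output
clean = torch.randn(2, 1, 16, 16, 16)                          # reference volume

score_fake, feat_fake = critic(denoised)
_, feat_real = critic(clean)

adv_loss = -score_fake.mean()                              # Wasserstein generator term
percep_loss = nn.functional.l1_loss(feat_fake, feat_real)  # feature-space distance
gen_loss = adv_loss + 10.0 * percep_loss                   # weighting is illustrative
gen_loss.backward()
print(float(gen_loss))
```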
A Deep-Learning Approach for Vocal Fold Pose Estimation in Videoendoscopy.
Journal of imaging informatics in medicine Pub Date : 2025-02-12 DOI: 10.1007/s10278-025-01431-8
Francesca Pia Villani, Maria Chiara Fiorentino, Lorenzo Federici, Cesare Piazza, Emanuele Frontoni, Alberto Paderno, Sara Moccia
Accurate vocal fold (VF) pose estimation is crucial for diagnosing larynx diseases that can eventually lead to VF paralysis. The videoendoscopic examination is used to assess VF motility, usually estimating the change in the anterior glottic angle (AGA). This is a subjective and time-consuming procedure requiring extensive expertise. This research proposes a deep learning framework to estimate VF pose from laryngoscopy frames acquired in actual clinical practice. The framework performs heatmap regression relying on three anatomically relevant keypoints as a prior for AGA computation, which is estimated from the coordinates of the predicted points. The assessment of the proposed framework is performed using a newly collected dataset of 471 laryngoscopy frames from 124 patients, 28 of whom had cancer. The framework was tested in various configurations and compared with other state-of-the-art approaches (direct keypoint regression and glottal segmentation) for both pose estimation and AGA evaluation. The proposed framework obtained the lowest root mean square error (RMSE) computed on all the keypoints (5.09, 6.56, and 6.40 pixels, respectively) among all the models tested for VF pose estimation. Also for the AGA evaluation, heatmap regression reached the lowest mean average error (MAE) of 5.87°. Results show that relying on keypoint heatmap regression allows VF pose estimation with a small error, overcoming drawbacks of state-of-the-art algorithms, especially in challenging images such as those from pathologic subjects or with noise and occlusion.
Citations: 0
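For intuition, a small Python sketch of the post-processing step implied by the abstract: decoding keypoint peaks from predicted heatmaps and computing the AGA at the anterior commissure. The keypoint roles and toy heatmaps are assumptions.

```python
import numpy as np

# Assumed keypoint roles: 0 = anterior commissure, 1/2 = posterior fold points.

def decode_keypoints(heatmaps):
    """heatmaps: (K, H, W) -> (K, 2) array of (row, col) peak locations."""
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)
    return np.stack([flat // w, flat % w], axis=1).astype(float)

def anterior_glottic_angle(pts):
    """Angle (degrees) at the anterior commissure between the two folds."""
    v1 = pts[1] - pts[0]
    v2 = pts[2] - pts[0]
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy heatmaps with peaks at chosen locations.
hm = np.zeros((3, 64, 64))
for i, (r, c) in enumerate([(10, 32), (50, 20), (50, 44)]):
    hm[i, r, c] = 1.0

pts = decode_keypoints(hm)
print(f"AGA = {anterior_glottic_angle(pts):.1f} deg")
```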
Impact of MRI Texture Analysis on Complication Rate in MRI-Guided Liver Biopsies.
Journal of imaging informatics in medicine Pub Date : 2025-02-11 DOI: 10.1007/s10278-025-01439-0
Jakob Leonhardi, Maike Niebur, Anne-Kathrin Höhn, Sebastian Ebel, Manuel Florian Struck, Hans-Michael Tautenhahn, Daniel Seehofer, Silke Zimmermann, Timm Denecke, Hans-Jonas Meyer
Magnetic resonance imaging (MRI)-derived texture features are quantitative imaging parameters that may have valuable associations with clinical aspects. Their prognostic ability in patients undergoing percutaneous MRI-guided liver biopsy, in terms of associations with post-interventional bleeding complications and biopsy success rate, has not been sufficiently investigated. The patient sample consisted of 79 patients (32 females, 40.5%) with a mean age of 58.7 ± 12.4 years. Clinical parameters evaluated included comorbidities, pre-existing liver disease, known cancer diagnosis, and hemostaseological parameters. Several puncture-related parameters, such as biopsy angle and distance of needle entry to capsule and to lesion, were analyzed. MRI texture features of the target lesion were extracted from the planning sequence of the MRI-guided liver biopsy. The Mann-Whitney U test and Fisher's exact test were used for group comparisons; a multivariate regression model was used for outcome prediction. Overall, the diagnostic outcome of biopsy was malignant in 38 cases (48.1%) and benign in 32 cases (40.5%). A total of 11 patients (13.9%) had post-interventional bleeding, while nine patients (11.4%) had a negative biopsy result. Several texture features were statistically significantly different between patients with and without hemorrhage. The texture feature GrVariance (1.37 ± 0.78 vs. 0.80 ± 0.35, p = 0.007) reached the highest statistical significance. Regarding unsuccessful biopsy results, S(1,1)DifEntrp (0.80 ± 0.10 vs. 0.89 ± 0.12, p = 0.022) and S(0,4)DifEntrp (1.14 ± 0.10 vs. 1.22 ± 0.11, p = 0.021) reached statistical significance between groups. Several MRI texture features of the target lesion were associated with bleeding complications or negative biopsy after MRI-guided percutaneous liver biopsy. This could be used to identify at-risk patients at the beginning of the procedure and should be analyzed further.
Citations: 0
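A brief sketch of the reported group comparison: a Mann-Whitney U test on a texture feature between patients with and without post-biopsy bleeding. The data are simulated around the summary statistics quoted in the abstract (GrVariance 1.37 ± 0.78 vs. 0.80 ± 0.35, n = 11 vs. 68); they are not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated values around the abstract's reported means and SDs.
rng = np.random.default_rng(42)
bleeding = rng.normal(1.37, 0.78, size=11)      # n = 11 with hemorrhage
no_bleeding = rng.normal(0.80, 0.35, size=68)   # remaining patients

stat, p = mannwhitneyu(bleeding, no_bleeding, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```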
Machine Learning Prediction of Pituitary Macroadenoma Consistency: Utilizing Demographic Data and Brain MRI Parameters.
Journal of imaging informatics in medicine Pub Date : 2025-02-07 DOI: 10.1007/s10278-025-01417-6
Fernanda Veloso Pereira, Davi Ferreira, Heraldo Garmes, Denise Engelbrecht Zantut-Wittmann, Fabio Rogério, Mateus Dal Fabbro, Cleiton Formentin, Carlos Henrique Quartucci Forster, Fabiano Reis
Consistency of pituitary macroadenomas is a key determinant in surgical outcomes, with non-soft consistency linked to more complications and incomplete resections. This study aimed to develop a machine learning model to predict the consistency of pituitary macroadenomas to improve surgical planning and outcomes. A retrospective study of patients with pituitary macroadenomas was conducted. Data included brain magnetic resonance imaging findings (diameter and apparent diffusion coefficient), patient demographics (age and sex), and tumor consistency. Seventy patients were evaluated, 59 with soft consistency and 11 with non-soft consistency. The support vector machine (SVM) was the best model, with a ROC AUC score of 83.3% [95% CI 65.8, 97.6], AP AUC of 69.8% [95% CI 41.3, 91.1], sensitivity of 73.1% [95% CI 44.4, 100], specificity of 89.8% [95% CI 82, 96.7], F1 score of 0.63 [95% CI 0.36, 0.83], and Matthews correlation coefficient score of 0.57 [95% CI 0.29, 0.79]. These findings indicate a significant improvement over random classification, as confirmed by a permutation test (p < 0.05). Additionally, the model had a 67.4% probability of outperforming the second-best model in cross-validation, as determined through Bayesian analysis, and demonstrated statistical significance (p < 0.05) compared to non-ensemble models. Using explainability heuristics, both 2D and 3D probability maps highlighted areas with a higher probability of non-soft consistency. The attributes most influential in the correct classification by our best model were male sex and age ≤ 42.25 years. Despite some limitations, the SVM model showed promise in predicting tumor consistency, which could aid in surgical planning. To address concerns about generalizability, we have created an open-access repository to promote future external validation studies and collaboration with other research centers, with the goal of enhancing model prediction through transfer learning.
Citations: 0
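A sketch of the modeling setup on synthetic placeholders: a class-weighted SVM over the four reported inputs (age, sex, diameter, ADC), checked against chance with a permutation test as the abstract reports. The scoring choice and pipeline details are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import permutation_test_score

# Synthetic placeholders for the four study inputs; not the study data.
rng = np.random.default_rng(1)
X = rng.normal(size=(70, 4))            # age, sex, diameter, ADC
y = np.array([0] * 59 + [1] * 11)       # 0 = soft, 1 = non-soft consistency

model = make_pipeline(StandardScaler(), SVC(class_weight="balanced"))
score, perm_scores, p = permutation_test_score(
    model, X, y, scoring="roc_auc", n_permutations=100, cv=5, random_state=1
)
print(f"ROC AUC = {score:.3f}, permutation p = {p:.3f}")
```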
Automating Prostate Cancer Grading: A Novel Deep Learning Framework for Automatic Prostate Cancer Grade Assessment using Classification and Segmentation.
Journal of imaging informatics in medicine Pub Date : 2025-02-06 DOI: 10.1007/s10278-025-01429-2
Saidul Kabir, Rusab Sarmun, Rafif Mahmood Al Saady, Semir Vranic, M Murugappan, Muhammad E H Chowdhury
Prostate cancer (PCa) is the second most common cancer in men and affects more than a million people each year. Grading prostate cancer is based on the Gleason grading system, a subjective and labor-intensive method for evaluating prostate tissue samples. The variability in diagnostic approaches underscores the urgent need for more reliable methods. By integrating deep learning technologies and developing automated systems, diagnostic precision can be improved and human error minimized. The present work introduces an innovative three-stage deep-learning framework for assessing PCa severity using the PANDA challenge dataset. After extensive data cleaning and a meticulous selection process, 2699 usable cases were retained from the initial 5160. The proposed framework has three stages: classification of PCa grades using deep neural networks (DNNs), segmentation of PCa grades, and computation of International Society for Urological Pathology (ISUP) grades using machine learning classifiers. Four classes of patches were classified and segmented (benign, Gleason 3, Gleason 4, and Gleason 5). Patch sampling at different sizes (500 × 500 and 1000 × 1000 pixels) was used to optimize the classification and segmentation processes. The segmentation performance of the proposed network is enhanced by a self-organized operational neural network (Self-ONN)-based DeepLabV3 architecture. Based on these predictions, the distribution percentages of each cancer grade within the whole-slide images (WSIs) were calculated. These features were then fed into machine learning classifiers to predict the final ISUP PCa grade. EfficientNet_b0 achieved the highest F1-score of 83.83% for classification, while a DeepLabV3+ architecture based on Self-ONN and an EfficientNet encoder achieved the highest Dice similarity coefficient (DSC) score of 84.9% for segmentation. Using the Random Forest (RF) classifier, the proposed framework achieved a quadratic weighted kappa (QWK) score of 0.9215. Deep learning frameworks for automatic PCa grading have shown promising results. In addition, this work provides a prospective approach to a prognostic tool that can produce clinically significant results efficiently and reliably. Further investigations are needed to evaluate the framework's adaptability and effectiveness across various clinical scenarios.
Citations: 0
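A compact sketch of the framework's third stage as described: per-slide class-distribution features feeding a Random Forest that predicts the ISUP grade, scored with quadratic weighted kappa. All data here are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)

def slide_features(patch_classes, n_classes=4):
    """Fraction of patches per class (benign, G3, G4, G5) within one WSI."""
    return np.bincount(patch_classes, minlength=n_classes) / len(patch_classes)

# Synthetic per-slide patch predictions and ISUP labels (0-5).
X = np.stack([slide_features(rng.integers(0, 4, size=200)) for _ in range(300)])
y = rng.integers(0, 6, size=300)

rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X[:240], y[:240])
qwk = cohen_kappa_score(y[240:], rf.predict(X[240:]), weights="quadratic")
print(f"QWK = {qwk:.3f}")
```

Quadratic weighting is the natural choice here because ISUP grades are ordinal: predictions two grades off are penalized more than predictions one grade off.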
The Effect of Ultrasound Image Pre-Processing on Radiomics Feature Quality: A Study on Shoulder Ultrasound.
Journal of imaging informatics in medicine Pub Date : 2025-02-06 DOI: 10.1007/s10278-025-01421-w
Matthaios Triantafyllou, Evangelia E Vassalou, Alexia Maria Goulianou, Theodoros H Tosounidis, Kostas Marias, Apostolos H Karantanas, Michail E Klontzas
Radiomics, the extraction of quantitative features from medical images, has shown great promise in enhancing diagnostic and prognostic models, particularly in CT and MRI. However, its application in ultrasound (US) imaging, especially in musculoskeletal (MSK) imaging, remains underexplored. The inherent variability of ultrasound, influenced by operator dependency and various imaging settings, presents significant challenges to the reproducibility of radiomic features. This study aims to identify whether commonly used image pre-processing methods can increase the reproducibility of radiomic features, thereby improving the quality of analysis, with shoulder calcific tendinopathy as a case study. Ultrasound images from 84 patients with rotator cuff calcifications were retrospectively analysed. Three pre-processing techniques (Histogram Equalization, Standard CLAHE, and Advanced CLAHE) were applied to adjust image quality. Manual segmentation of the calcifications was performed, followed by the extraction of 849 radiomic features. The reproducibility of these features was assessed using the intraclass correlation coefficient (ICC), comparing results across pre-processing methods within the dataset. The Advanced CLAHE pre-processing method consistently yielded the highest ICC values, indicating superior reproducibility of radiomic features compared to the other methods. Wavelet-transformed features, particularly in the GLCM and GLRLM subgroups, demonstrated robust reproducibility across all pre-processing techniques. Shape features, however, continued to show lower reproducibility. Advanced CLAHE pre-processing significantly enhances the reproducibility of radiomic features in ultrasound imaging of calcifications. This study underscores the importance of pre-processing in achieving reliable radiomic analyses, particularly in operator-dependent imaging modalities like ultrasound.
Citations: 0
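To make the compared pre-processing steps concrete, a short OpenCV sketch of histogram equalization versus CLAHE on a grayscale frame. The "Advanced CLAHE" parameters below are placeholders, since the paper's exact settings are not given in the abstract.

```python
import numpy as np
import cv2

# Stand-in grayscale "ultrasound" frame.
img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)

hist_eq = cv2.equalizeHist(img)  # global histogram equalization

clahe_std = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_adv = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(16, 16))  # assumed params

out_std = clahe_std.apply(img)   # contrast-limited, tile-local equalization
out_adv = clahe_adv.apply(img)
print(img.std(), hist_eq.std(), out_std.std(), out_adv.std())
```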
Efficacy of Fine-Tuned Large Language Model in CT Protocol Assignment as Clinical Decision-Supporting System.
Journal of imaging informatics in medicine Pub Date : 2025-02-05 DOI: 10.1007/s10278-025-01433-6
Noriko Kanemaru, Koichiro Yasaka, Naomasa Okimoto, Mai Sato, Takuto Nomura, Yuichi Morita, Akira Katayama, Shigeru Kiryu, Osamu Abe
Accurate CT protocol assignment is crucial for optimizing medical imaging procedures. The integration of large language models (LLMs) may be helpful, but their efficacy as a clinical decision support system for protocoling tasks remains unknown. This study aimed to develop and evaluate a fine-tuned LLM specifically designed for CT protocoling and to assess its performance, both standalone and in concurrent use, in terms of effectiveness and efficiency within radiological workflows. This retrospective study included radiology requests for contrast-enhanced chest and abdominal CT examinations (2829/498/941 for training/validation/testing). Inputs involve the clinical indication section, age, and anatomic coverage. The LLM was fine-tuned for 15 epochs, and the best model was selected by macro sensitivity on the validation set. Performance was then evaluated on 800 randomly selected cases from the test dataset. Two radiology residents and two radiologists assigned CT protocols with and without referencing the output of the LLM to evaluate its efficacy as a clinical decision support system. The LLM exhibited high accuracy metrics, with top-1 and top-2 accuracies of 0.923 and 0.963, respectively, and a macro sensitivity of 0.907. It processed each case in an average of 0.39 s. As a clinical decision support tool, the LLM improved accuracy both for residents (0.913 vs. 0.936) and radiologists (0.920 vs. 0.926, without and with the LLM, respectively), with the improvement for residents being statistically significant (p = 0.02). Additionally, it reduced reading times by 14% for residents and 12% for radiologists. These results indicate the potential of LLMs to improve CT protocoling efficiency and diagnostic accuracy in radiological practice.
Citations: 0
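A small sketch of the evaluation metrics quoted (top-1/top-2 accuracy and macro sensitivity), computed with scikit-learn from stand-in class probabilities; the number of protocol classes is an assumption.

```python
import numpy as np
from sklearn.metrics import top_k_accuracy_score, recall_score

# Random placeholders for the fine-tuned LLM's per-class scores.
rng = np.random.default_rng(3)
n_classes = 6
y_true = rng.integers(0, n_classes, size=800)
probs = rng.dirichlet(np.ones(n_classes), size=800)

y_pred = probs.argmax(axis=1)
labels = np.arange(n_classes)
print("top-1:", top_k_accuracy_score(y_true, probs, k=1, labels=labels))
print("top-2:", top_k_accuracy_score(y_true, probs, k=2, labels=labels))
print("macro sensitivity:", recall_score(y_true, y_pred, average="macro"))
```

Top-2 accuracy is the relevant metric for a decision-support workflow in which the human picks from the model's highest-ranked protocol suggestions.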
Automatic Identification of Fetal Abdominal Planes from Ultrasound Images Based on Deep Learning.
Journal of imaging informatics in medicine Pub Date : 2025-02-05 DOI: 10.1007/s10278-025-01409-6
Ștefan Gabriel Ciobanu, Iuliana-Alina Enache, Cătălina Iovoaica-Rămescu, Elena Iuliana Anamaria Berbecaru, Andreea Vochin, Ionuț Daniel Băluță, Anca Maria Istrate-Ofițeru, Cristina Maria Comănescu, Rodica Daniela Nagy, Mircea-Sebastian Şerbănescu, Dominic Gabriel Iliescu, Eugen-Nicolae Țieranu
Fetal biometric assessments through ultrasound diagnostics are integral to obstetrics and gynecology but require considerable time investment. This study aimed to explore the potential of artificial intelligence (AI) architectures in automatically identifying fetal abdominal standard scanning planes and structures, particularly focusing on the abdominal circumference. Ultrasound images from a prospective cohort study were preprocessed using CV2 and Keras-OCR to eliminate textual elements and artifacts. Optical character recognition detected the textual components, which were removed and then inpainted using adjacent pixels. Six deep learning neural networks, including Xception and MobileNetV3Large, were employed to categorize the fetal abdominal view planes. The dataset included nine classes, and the models were evaluated through a tenfold cross-validation cycle. MobileNetV3Large and EfficientNetV2S achieved accuracy rates of 79.89% and 79.19%, respectively. Data screening confirmed a non-normal distribution, but the central limit theorem was applied for statistical analysis. An ANOVA test revealed statistically significant differences between the models, while Tukey's post hoc tests showed no difference between MobileNetV3Large and EfficientNetV2S, both of which outperformed the other networks. AI, specifically MobileNetV3Large and EfficientNetV2S, demonstrated promise in identifying fetal abdominal view planes, showcasing potential benefits for prenatal ultrasound diagnostics. Further studies should compare these AI models with established methods for automatic abdominal circumference measurement to assess overall performance.
Citations: 0
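A minimal sketch of the described text-removal preprocessing: mask the detected annotation regions and fill them from adjacent pixels with OpenCV inpainting. The paper used Keras-OCR for detection; the hard-coded text boxes here are placeholders.

```python
import numpy as np
import cv2

# Stand-in grayscale ultrasound frame with assumed OCR-detected text boxes.
img = (np.random.default_rng(5).random((256, 256)) * 255).astype(np.uint8)
text_boxes = [(10, 10, 80, 24), (180, 220, 70, 20)]  # (x, y, w, h), placeholders

mask = np.zeros(img.shape, dtype=np.uint8)
for x, y, w, h in text_boxes:
    mask[y:y + h, x:x + w] = 255                     # mark annotation pixels

# Fill the masked regions from surrounding pixels (Telea inpainting).
clean = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
print(clean.shape)
```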