Journal of Imaging Informatics in Medicine — Latest Articles
NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors.
Journal of imaging informatics in medicine Pub Date : 2025-02-19 DOI: 10.1007/s10278-025-01440-7
Xuelian Yang, Yuanjun Wang, Li Sui
{"title":"NMTNet: A Multi-task Deep Learning Network for Joint Segmentation and Classification of Breast Tumors.","authors":"Xuelian Yang, Yuanjun Wang, Li Sui","doi":"10.1007/s10278-025-01440-7","DOIUrl":"https://doi.org/10.1007/s10278-025-01440-7","url":null,"abstract":"<p><p>Segmentation and classification of breast tumors are two critical tasks since they provide significant information for computer-aided breast cancer diagnosis. Combining these tasks leverages their intrinsic relevance to enhance performance, but the variability and complexity of tumor characteristics remain challenging. We propose a novel multi-task deep learning network (NMTNet) for the joint segmentation and classification of breast tumors, which is based on a convolutional neural network (CNN) and U-shaped architecture. It mainly comprises a shared encoder, a multi-scale fusion channel refinement (MFCR) module, a segmentation branch, and a classification branch. First, ResNet18 is used as the backbone network in the encoding part to enhance the feature representation capability. Then, the MFCR module is introduced to enrich the feature depth and diversity. Besides, the segmentation branch combines a lesion region enhancement (LRE) module between the encoder and decoder parts, aiming to capture more detailed texture and edge information of irregular tumors to improve segmentation accuracy. The classification branch incorporates a fine-grained classifier that reuses valuable segmentation information to discriminate between benign and malignant tumors. The proposed NMTNet is evaluated on both ultrasound and magnetic resonance imaging datasets. It achieves segmentation dice scores of 90.30% and 91.50%, and Jaccard indices of 84.70% and 88.10% for each dataset, respectively. And the classification accuracy scores are 87.50% and 99.64% for the corresponding datasets, respectively. 
Experimental results demonstrate the superiority of NMTNet over state-of-the-art methods on breast tumor segmentation and classification tasks.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143461367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
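The Dice and Jaccard scores reported in this abstract are standard overlap metrics between a predicted and a reference binary mask. A minimal sketch of how they are computed (independent of the NMTNet implementation, which is not public in this listing):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard index (IoU); related to Dice by J = D / (2 - D)."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0
```

Note the reported pairs are consistent with J = D / (2 - D), e.g. 0.9030 / (2 - 0.9030) ≈ 0.823, close to the 84.70% Jaccard given the two metrics were likely averaged per-image.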
Mainecoon: Implementing an Open-Source Web Viewer for DICOM Whole Slide Images with AI-Integrated PACS for Digital Pathology.
Journal of imaging informatics in medicine Pub Date : 2025-02-18 DOI: 10.1007/s10278-025-01425-6
Chao-Wei Hsu, Si-Wei Yang, Yu-Ting Lee, Kai-Hsuan Yao, Tzu-Hsuan Hsu, Pau-Choo Chung, Yuan-Chia Chu, Chen-Tsung Kuo, Chung-Yueh Lien
{"title":"Mainecoon: Implementing an Open-Source Web Viewer for DICOM Whole Slide Images with AI-Integrated PACS for Digital Pathology.","authors":"Chao-Wei Hsu, Si-Wei Yang, Yu-Ting Lee, Kai-Hsuan Yao, Tzu-Hsuan Hsu, Pau-Choo Chung, Yuan-Chia Chu, Chen-Tsung Kuo, Chung-Yueh Lien","doi":"10.1007/s10278-025-01425-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01425-6","url":null,"abstract":"<p><p>The rapid advancement of digital pathology comes with significant challenges due to the diverse data formats from various scanning devices creating substantial obstacles to integrating artificial intelligence (AI) into the pathology imaging workflow. To overcome performance challenges posed by large AI-generated annotations, we developed an open-source project named Mainecoon for whole slide images (WSIs) using the Digital Imaging and Communications in Medicine (DICOM) standard. Our solution incorporates an AI model to detect non-alcoholic steatohepatitis (NASH) features in liver biopsies, validated with the DICOM Workgroup 26 Connectathon dataset. AI-generated results are encoded using the Microscopy Bulk Simple Annotations standard, which provides a standardized method supporting both manual and AI-generated annotations, promoting seamless integration of structured metadata with WSIs. We proposed a method by leveraging streaming and batch processing, significantly improving data loading efficiency, reducing user waiting times, and enhancing frontend performance. The web services of the AI model were implemented via the Flask framework, integrated with our viewer and an open-source medical image archive, Raccoon, with secure authentication provided by Keycloak for OAuth 2.0 authentication and node authentication at the National Cheng Kung University Hospital. 
Our architecture has demonstrated robustness, interoperability, and practical applicability, addressing real-world digital pathology challenges effectively.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
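The streaming/batch idea described here — delivering large AI-generated annotation sets progressively rather than as one payload — can be sketched as a simple chunking generator. This is an illustrative sketch, not Mainecoon's actual code; the function name and chunk size are assumptions:

```python
from typing import Iterator, List

def stream_annotations(annotations: List[dict], chunk_size: int = 500) -> Iterator[List[dict]]:
    """Yield annotation records in fixed-size chunks so a frontend can
    render progressively instead of waiting for the full payload."""
    for start in range(0, len(annotations), chunk_size):
        yield annotations[start:start + chunk_size]
```

In a web service, each yielded chunk would be serialized and flushed to the client as it is produced, keeping first-paint latency bounded by the chunk size rather than the total annotation count.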
Dual-Domain Self-Supervised Deep Learning with Graph Convolution for Low-Dose Computed Tomography Reconstruction.
Journal of imaging informatics in medicine Pub Date : 2025-02-18 DOI: 10.1007/s10278-024-01314-4
Feng Yang, Feixiang Zhao, Yanhua Liu, Min Liu, Mingzhe Liu
{"title":"Dual-Domain Self-Supervised Deep Learning with Graph Convolution for Low-Dose Computed Tomography Reconstruction.","authors":"Feng Yang, Feixiang Zhao, Yanhua Liu, Min Liu, Mingzhe Liu","doi":"10.1007/s10278-024-01314-4","DOIUrl":"https://doi.org/10.1007/s10278-024-01314-4","url":null,"abstract":"<p><p>X-ray computed tomography (CT) is a commonly used imaging modality in clinical practice. Recent years have seen increasing public concern regarding the ionizing radiation from CT. Low-dose CT (LDCT) has been proven to be effective in reducing patients' radiation exposure, but it results in CT images with low signal-to-noise ratio (SNR), failing to meet the image quality required for diagnosis. To enhance the SNR of LDCT images, numerous denoising strategies based on deep learning have been introduced, leading to notable advancements. Despite these advancements, most methods have relied on a supervised training paradigm. The challenge in acquiring aligned pairs of low-dose and normal-dose images in a clinical setting has limited their applicability. Recently, some self-supervised deep learning methods have enabled denoising using only noisy samples. However, these techniques are based on overly simplistic assumptions about noise and focus solely on CT sinogram denoising or image denoising, compromising their effectiveness. To address this, we introduce the Dual-Domain Self-supervised framework, termed DDoS, to accomplish effective LDCT denoising and reconstruction. The framework includes denoising in the sinogram domain, filtered back-projection reconstruction, and denoising in the image domain. By identifying the statistical characteristics of sinogram noise and CT image noise, we develop sinogram-denoising and CT image-denoising networks that are fully adapted to these characteristics. 
Both networks utilize a unified hybrid architecture that combines graph convolution and incorporates multiple channel attention modules, facilitating the extraction of local and non-local multi-scale features. Comprehensive experiments on two large-scale LDCT datasets demonstrate the superiority of DDoS framework over existing state-of-the-art methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
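The three-stage flow described — sinogram-domain denoising, filtered back-projection, then image-domain denoising — is a straightforward function composition. A minimal structural sketch (the stage implementations here are placeholders, not the paper's networks):

```python
from typing import Callable
import numpy as np

Stage = Callable[[np.ndarray], np.ndarray]

def dual_domain_pipeline(sinogram: np.ndarray,
                         sino_denoiser: Stage,
                         reconstruct: Stage,
                         image_denoiser: Stage) -> np.ndarray:
    """DDoS-style flow: denoise in the sinogram domain, reconstruct
    (e.g. filtered back-projection), then denoise in the image domain."""
    return image_denoiser(reconstruct(sino_denoiser(sinogram)))
```

The value of the dual-domain split is that each denoiser can be tailored to the noise statistics of its own domain, which differ markedly before and after reconstruction.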
Automated Grading of Vesicoureteral Reflux (VUR) Using a Dual-Stream CNN Model with Deep Supervision.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01438-1
Guangjie Chen, Lixian Su, Shuxin Wang, Xiaoqing Liu, Wenqian Wu, Fandong Zhang, Yijun Zhao, Linfeng Zhu, Hongbo Zhang, Xiaohao Wang, Gang Yu
{"title":"Automated Grading of Vesicoureteral Reflux (VUR) Using a Dual-Stream CNN Model with Deep Supervision.","authors":"Guangjie Chen, Lixian Su, Shuxin Wang, Xiaoqing Liu, Wenqian Wu, Fandong Zhang, Yijun Zhao, Linfeng Zhu, Hongbo Zhang, Xiaohao Wang, Gang Yu","doi":"10.1007/s10278-025-01438-1","DOIUrl":"https://doi.org/10.1007/s10278-025-01438-1","url":null,"abstract":"<p><p>Vesicoureteral reflux (VUR) is a urinary system disorder characterized by the abnormal flow of urine from the bladder back into the ureters and kidneys, often leading to renal complications, particularly in children. Accurate grading of VUR, typically determined through voiding cystourethrography (VCUG), is crucial for effective clinical management and treatment planning. This study proposes a novel multi-head convolutional neural network for the automatic grading of VUR from VCUG images. The model employs a dual-stream architecture with a modified ResNet-50 backbone, enabling independent analysis of the left and right urinary tracts. Our approach categorizes VUR into three distinct classes: no reflux, mild to moderate reflux, and severe reflux. The incorporation of deep supervision within the network enhances feature learning and improves the model's ability to detect subtle variations in VUR patterns. Experimental results indicate that the proposed method effectively grades VUR, achieving an average area under the receiver operating characteristic curve of 0.82 and a patient-level accuracy of 0.84. 
This provides a reliable tool to support clinical decision-making in pediatric cases.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
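The area under the ROC curve reported here (0.82) has a useful probabilistic reading: it is the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch computing AUC directly from that definition (Mann-Whitney form, with ties counted as half):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P·N) form is fine for illustration; production code would sort once and use ranks.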
Hybrid Approach to Classifying Histological Subtypes of Non-small Cell Lung Cancer (NSCLC): Combining Radiomics and Deep Learning Features from CT Images.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01442-5
Geon Oh, Yongha Gi, Jeongshim Lee, Hunjung Kim, Hong-Gyun Wu, Jong Min Park, Eunae Choi, Dongho Shin, Myonggeun Yoon, Boram Lee, Jaeman Son
{"title":"Hybrid Approach to Classifying Histological Subtypes of Non-small Cell Lung Cancer (NSCLC): Combining Radiomics and Deep Learning Features from CT Images.","authors":"Geon Oh, Yongha Gi, Jeongshim Lee, Hunjung Kim, Hong-Gyun Wu, Jong Min Park, Eunae Choi, Dongho Shin, Myonggeun Yoon, Boram Lee, Jaeman Son","doi":"10.1007/s10278-025-01442-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01442-5","url":null,"abstract":"<p><p>This study aimed to develop a hybrid model combining radiomics and deep learning features derived from computed tomography (CT) images to classify histological subtypes of non-small cell lung cancer (NSCLC). We analyzed CT images and radiomics features from 235 patients with NSCLC, including 110 with adenocarcinoma (ADC) and 112 with squamous cell carcinoma (SCC). The dataset was split into a training set (75%) and a test set (25%). External validation was conducted using the NSCLC-Radiomics database, comprising 24 patients each with ADC and SCC. A total of 1409 radiomics and 8192 deep features underwent principal component analysis (PCA) and ℓ2,1-norm minimization for feature reduction and selection. The optimal feature sets for classification included 27 radiomics features, 20 deep features, and 55 combined features (30 deep and 25 radiomics). The average area under the receiver operating characteristic curve (AUC) for radiomics, deep, and combined features were 0.6568, 0.6689, and 0.7209, respectively, across the internal and external test sets. Corresponding average accuracies were 0.6013, 0.6376, and 0.6564. The combined model demonstrated superior performance in classifying NSCLC subtypes, achieving higher AUC and accuracy in both test datasets. 
These results suggest that the proposed hybrid approach could enhance the accuracy and reliability of NSCLC subtype classification.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
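The PCA step that shrinks the 1409 radiomics and 8192 deep features can be sketched with a plain SVD; combining the two reduced families is then a horizontal stack. This is a generic illustration, not the paper's pipeline (which also applies ℓ2,1-norm selection):

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project a (samples x features) matrix onto its top principal
    components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Illustrative fusion, mirroring the 30-deep + 25-radiomics combined set:
# combined = np.hstack([pca_reduce(deep_feats, 30), pca_reduce(radiomics_feats, 25)])
```

Components come out ordered by explained variance, so truncating to `n_components` keeps the directions that capture the most spread in the training data.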
Application of a Commercial Artificial Intelligence Software in Unilateral Mammography: Simulating Total Mastectomy Scenarios.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01432-7
Ji Yeong An, Janie M Lee, Myoung-Jin Jang, Su Min Ha, Jung Min Chang
{"title":"Application of a Commercial Artificial Intelligence Software in Unilateral Mammography: Simulating Total Mastectomy Scenarios.","authors":"Ji Yeong An, Janie M Lee, Myoung-Jin Jang, Su Min Ha, Jung Min Chang","doi":"10.1007/s10278-025-01432-7","DOIUrl":"https://doi.org/10.1007/s10278-025-01432-7","url":null,"abstract":"<p><p>This study was to evaluate the performance of commercially available artificial intelligence (AI) software in unilateral mammograms simulating postmastectomy surveillance compared with AI software used in bilateral mammograms from the same women serving as controls. A retrospective database search identified consecutive women who underwent breast cancer surgery between January 2021 and December 2021. AI software was applied to the mammogram immediately preceding breast cancer diagnosis in two modes: bilateral (the standard bilateral mammography dataset) and unilateral analyses (each breast's craniocaudal and mediolateral oblique views), and their outputs were reviewed. The sensitivity, specificity, and number of marks per breast were compared between the bilateral and unilateral analyses with -5% non-inferiority margin for the difference in sensitivity and specificity between the two modes. A total of 694 women (mean age, 55.2 ± 10.8 years) with unilateral or bilateral breast cancer contributed mammograms for analysis; each breast was then separately evaluated in the unilateral postmastectomy simulation (n = 1388), of which 730 had breast cancer (52.6%) (mean invasive size = 1.5 cm) and compared with bilateral mammography analysis. The sensitivity of unilateral analysis was not inferior to that of bilateral analysis (78.6% vs. 76.7%), with a difference of 1.9%. The specificity of unilateral analysis was inferior to that in the bilateral analysis (81.5% vs. 91.9%), with a difference of -10.5% being lower than the non-inferiority margin. 
The average number of AI marks per breast was 0.94 (unilateral [1298/1388] and bilateral analyses [1306/1388], respectively). AI software performance in simulated unilateral mammography analysis demonstrated non-inferior sensitivity and inferior specificity compared to bilateral mammography.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
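The non-inferiority logic used above is simple to state in code: the unilateral-minus-bilateral difference must stay above the pre-specified -5% margin. A sketch (function names are mine, not the study's):

```python
def non_inferior(test_rate: float, ref_rate: float, margin: float = -0.05) -> bool:
    """Non-inferiority holds if the difference (test - reference)
    exceeds the pre-specified negative margin."""
    return (test_rate - ref_rate) > margin
```

Plugging in the study's rates: sensitivity 0.786 vs. 0.767 passes (difference +1.9%), while specificity 0.815 vs. 0.919 fails (difference -10.5%, below the -5% margin) — matching the abstract's conclusion.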
Impact of Combined Deep Learning Image Reconstruction and Metal Artifact Reduction Algorithm on CT Image Quality in Different Scanning Conditions for Maxillofacial Region with Metal Implants: A Phantom Study.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-024-01287-4
Gongxin Yang, Haowei Wang, Ling Liu, Qifan Ma, Huimin Shi, Ying Yuan
{"title":"Impact of Combined Deep Learning Image Reconstruction and Metal Artifact Reduction Algorithm on CT Image Quality in Different Scanning Conditions for Maxillofacial Region with Metal Implants: A Phantom Study.","authors":"Gongxin Yang, Haowei Wang, Ling Liu, Qifan Ma, Huimin Shi, Ying Yuan","doi":"10.1007/s10278-024-01287-4","DOIUrl":"https://doi.org/10.1007/s10278-024-01287-4","url":null,"abstract":"<p><p>This study aims to investigate the impact of combining deep learning image reconstruction (DLIR) and metal artifacts reduction (MAR) algorithms on the quality of CT images with metal implants under different scanning conditions. Four images of the maxillofacial region in pigs were taken using different metal implants for evaluation. The scans were conducted at three different dose levels (CTDIvol: 20/10/5 mGy). The images were reconstructed using three different methods: filtered back projection (FBP), adaptive statistical iterative reconstruction with Veo at a 50% level (AV50), and DLIR at three levels (low, medium, and high). Regions of interest (ROIs) were identified in various tissues (near/far/reference fat, muscle, bone) both with and without metal implants and artifacts. Parameters such as standard deviation (SD), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and metal artifact index (MAI) were calculated. Additionally, two experienced radiologists evaluated the subjective image quality (IQ) using a 5-point Likert scale. (1) Both observers rated MAR generated significantly lower artifact scores than non-MAR in all types of tissues (P < 0.01), except for the far shadow and bloom in bone (phantoms 1, 3, 4) and the far bloom in muscle (phantom 3) without significant differences (P = 1.0). (2) Under the same scanning condition, DLIR at three levels produced a smaller SD than those of FBP and AV50 (P < 0.05). 
(3) Compared to FBP and AV50, DLIR denoted a better reduction of MAI and improvement of SNR and CNR (P < 0.05) for most tissues between the four phantoms. (4) Subjective overall IQ was superior with the increasement of DLIR level (P < 0.05) and both observers agreed that DLIR produced better artifact reductions compared with FBP and AV50. The combination of DLIR and MAR algorithms can enhance image quality, significantly reduce metal artifacts, and offer high clinical value.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
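The objective ROI metrics listed above are all derived from ROI means and standard deviations. A sketch of common formulations (the study does not spell out its exact definitions, so the MAI form here — excess noise over a reference ROI — is one widely used convention, stated as an assumption):

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio: ROI mean over ROI standard deviation."""
    return float(roi.mean() / roi.std())

def cnr(roi: np.ndarray, ref_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio against a reference tissue ROI,
    normalized by the reference noise (one common convention)."""
    return float(abs(roi.mean() - ref_roi.mean()) / ref_roi.std())

def metal_artifact_index(sd_artifact: float, sd_ref: float) -> float:
    """MAI as excess noise: sqrt(SD_artifact^2 - SD_reference^2)."""
    return (sd_artifact ** 2 - sd_ref ** 2) ** 0.5
```

Lower SD, higher SNR/CNR, and lower MAI together are what the study reads as better image quality for DLIR.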
Ischemic Stroke Lesion Core Segmentation from CT Perfusion Scans Using Attention ResUnet Deep Learning.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01407-8
Omar Ibrahim Alirr
{"title":"Ischemic Stroke Lesion Core Segmentation from CT Perfusion Scans Using Attention ResUnet Deep Learning.","authors":"Omar Ibrahim Alirr","doi":"10.1007/s10278-025-01407-8","DOIUrl":"https://doi.org/10.1007/s10278-025-01407-8","url":null,"abstract":"<p><p>Accurate segmentation of ischemic stroke lesions is crucial for refining diagnosis, prognosis, and treatment planning. Manual identification is time-consuming and challenging, especially in urgent clinical scenarios. This paper presents an innovative deep learning-based system for automated segmentation of ischemic stroke lesions from Computed Tomography Perfusion (CTP) datasets. This paper introduces a deep learning-based system designed to segment ischemic stroke lesions from Computed Tomography Perfusion (CTP) datasets. The proposed approach integrates Edge Enhancing Diffusion (EED) filtering as a preprocessing step, acting as a form of hard attention to emphasize affected regions. Besides the Attention ResUnet (AttResUnet) architecture with a modified decoder path, incorporating spatial and channel attention mechanisms to capture long-range dependencies. The system was evaluated using the ISLES challenge 2018 dataset with a fivefold cross-validation approach. The proposed framework achieved a noteworthy average Dice Similarity Coefficient (DSC) score of 59%. This performance underscores the effectiveness of combining EED filtering with attention mechanisms in the AttResUnet architecture for accurate stroke lesion segmentation. The fold-wise analysis revealed consistent performance across different data subsets, with slight variations highlighting the model's generalizability. 
The proposed approach offers a reliable and generalizable tool for automated ischemic stroke lesion segmentation, potentially improving efficiency and accuracy in clinical settings.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
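The fivefold cross-validation protocol used for evaluation partitions the cases into five disjoint validation folds, training on the remaining four each time. A minimal index-level sketch (contiguous folds; real pipelines usually shuffle or stratify first):

```python
def kfold_splits(n: int, k: int = 5):
    """Return k (train_indices, val_indices) pairs covering range(n),
    with validation folds disjoint and near-equal in size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, val))
        start += size
    return splits
```

Averaging the per-fold Dice scores over these splits gives the kind of mean DSC (59%) the paper reports, with fold-wise spread indicating generalizability.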
DECODE-3DViz: Efficient WebGL-Based High-Fidelity Visualization of Large-Scale Images using Level of Detail and Data Chunk Streaming.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01430-9
Mohammed A AboArab, Vassiliki T Potsika, Andrzej Skalski, Maciej Stanuch, George Gkois, Igor Koncar, David Matejevic, Alexis Theodorou, Sylvia Vagena, Fragiska Sigala, Dimitrios I Fotiadis
{"title":"DECODE-3DViz: Efficient WebGL-Based High-Fidelity Visualization of Large-Scale Images using Level of Detail and Data Chunk Streaming.","authors":"Mohammed A AboArab, Vassiliki T Potsika, Andrzej Skalski, Maciej Stanuch, George Gkois, Igor Koncar, David Matejevic, Alexis Theodorou, Sylvia Vagena, Fragiska Sigala, Dimitrios I Fotiadis","doi":"10.1007/s10278-025-01430-9","DOIUrl":"https://doi.org/10.1007/s10278-025-01430-9","url":null,"abstract":"<p><p>The DECODE-3DViz pipeline represents a major advancement in the web-based visualization of large-scale medical imaging data, particularly for peripheral artery computed tomography images. This research addresses the critical challenges of rendering high-resolution volumetric datasets via WebGL technology. By integrating progressive chunk streaming and level of detail (LOD) algorithms, DECODE-3DViz optimizes the rendering process for real-time interaction and high-fidelity visualization. The system efficiently manages WebGL texture size constraints and browser memory limitations, ensuring smooth performance even with extensive datasets. A comparative evaluation against state-of-the-art visualization tools demonstrates DECODE-3DViz's superior performance, achieving up to a 98% reduction in rendering time compared with that of competitors and maintaining a high frame rate of up to 144 FPS. Furthermore, the system exhibits exceptional GPU memory efficiency, utilizing as little as 2.6 MB on desktops, which is significantly less than the over 100 MB required by other tools. User feedback, collected through a comprehensive questionnaire, revealed high satisfaction with the tool's performance, particularly in areas such as structure definition and diagnostic capability, with an average score of 4.3 out of 5. These enhancements enable detailed and accurate visualizations of the peripheral vasculature, improving diagnostic accuracy and supporting better clinical outcomes. 
The DECODE-3DViz tool is open source and can be accessed at https://github.com/mohammed-abo-arab/3D_WebGL_VolumeRendering.git .</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
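The LOD idea behind such viewers is to pick a coarser mip level as the user zooms out, so texture uploads stay within WebGL limits. A language-neutral sketch of the level-selection rule (DECODE-3DViz itself is WebGL/JavaScript; this Python version and its halving-per-level assumption are illustrative only):

```python
import math

def select_lod(zoom: float, levels: int) -> int:
    """Map a zoom factor to a mip level: level 0 is full resolution,
    and each coarser level halves the resolution in every axis."""
    if zoom >= 1.0:
        return 0
    level = int(math.floor(-math.log2(zoom)))
    return min(level, levels - 1)
```

At zoom 1.0 the full-resolution chunks stream in; at zoom 0.25 a level-2 volume (1/64 the voxels) suffices, which is where the large memory savings come from.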
Estimating the Amount of Air Inside the Stomach for Detecting Cancers on Gastric Radiographs Using Artificial Intelligence: an Observational, Cross-sectional Study.
Journal of imaging informatics in medicine Pub Date : 2025-02-14 DOI: 10.1007/s10278-025-01441-6
Chiharu Kai, Takahiro Irie, Yuuki Kobayashi, Hideaki Tamori, Satoshi Kondo, Akifumi Yoshida, Yuta Hirono, Ikumi Sato, Kunihiko Oochi, Satoshi Kasai
{"title":"Estimating the Amount of Air Inside the Stomach for Detecting Cancers on Gastric Radiographs Using Artificial Intelligence: an Observational, Cross-sectional Study.","authors":"Chiharu Kai, Takahiro Irie, Yuuki Kobayashi, Hideaki Tamori, Satoshi Kondo, Akifumi Yoshida, Yuta Hirono, Ikumi Sato, Kunihiko Oochi, Satoshi Kasai","doi":"10.1007/s10278-025-01441-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01441-6","url":null,"abstract":"<p><p>Gastric radiography is an important tool for early detection of cancer. During gastric radiography, the stomach is monitored using barium and effervescent granules. However, stomach compression and physiological phenomena during the examination can cause air to escape the stomach. When the stomach contracts, physicians cannot accurately observe its condition, which may result in missed lesions. Notably, no research using artificial intelligence (AI) has explored the use of gastric radiography to estimate the amount of air in the stomach. Therefore, this study aimed to develop an AI system to estimate the amount of air inside the stomach using gastric radiographs. In this observational, cross-sectional study, we collected data from 300 cases who underwent medical screening and estimated the images with poor stomach air volume. We used pre-trained models of vision transformer (ViT) and convolutional neural network (CNN). Instead of retraining, dimensionality reduction was performed on the output features using principal component analysis, and LightGBM performed discriminative processing. The combination of ViT and CNN resulted in the highest accuracy (F-value 0.792, accuracy 0.943, sensitivity 0.738, specificity 0.978). High accuracy was maintained in the prone position, where air inside the stomach could be easily released. Combining ViT and CNN from gastric radiographs accurately identified cases of poor stomach air volume. The system was highly accurate in the prone position and proved clinically useful. 
The developed AI can be used to provide high-quality images to physicians and to prevent missed lesions.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
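The four figures reported (F-value, accuracy, sensitivity, specificity) all derive from one confusion matrix. A sketch of the standard definitions, assuming the F-value is the usual F1 (harmonic mean of precision and sensitivity); the counts in the test are invented for illustration:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (recall), specificity, and F1
    from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1
```

The reported pattern — high specificity (0.978) with lower sensitivity (0.738) — indicates the model rarely flags adequate-air images but misses some poor-air cases, a reasonable trade-off when the flagged images trigger re-imaging.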