Current Medical Imaging Reviews: Latest Articles

Evaluation of Left Heart Function in Heart Failure Patients with Different Ejection Fraction Types using a Transthoracic Three-dimensional Echocardiography Heart-Model
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-17 | DOI: 10.2174/0115734056388350250903130655
Authors: Shen-Yi Li, Yi Zhang, Qing-Qing Long, Ming-Juan Chen, Si-Yu Wang, Wei-Ying Sun
Objective: Heart failure (HF) is classified into three types based on left ventricular ejection fraction (LVEF). A newly developed transthoracic three-dimensional (3D) echocardiography Heart-Model (HM) offers quick analysis of the volume and function of the left atrium (LA) and left ventricle (LV). This study aimed to determine the value of the HM in HF patients.
Methods: A total of 117 patients with HF were divided into three groups according to EF: preserved EF (HFpEF, EF ≥ 50%), mid-range EF (HFmrEF, EF = 41%-49%), and reduced EF (HFrEF, EF ≤ 40%). The HM was applied to analyze 3D cardiac functional parameters. LVEF was obtained using Simpson's biplane method. The N-terminal pro-B-type natriuretic peptide (NT-proBNP) concentration was measured.
Results: Significant differences in age, female proportion, body mass index, and comorbidities were observed among the three groups. With decreasing EF across the groups, the 3D volumetric parameters of the LA and LV increased, while LVEF decreased. The LV E/e' was significantly higher in HFrEF patients than in HFpEF patients. LVEF measurement was achieved in significantly less time with the HM than with the conventional Simpson's biplane method. The NT-proBNP concentration increased in the following pattern: HFrEF > HFmrEF > HFpEF. The NT-proBNP concentration correlated positively with LV volume and negatively with LVEF from both the HM and Simpson's biplane method.
Conclusion: LA and LV volumes increase, and the derived LV systolic function decreases, with increasing HF severity as determined by the HM. The functional parameter measurements provided by the HM are associated with laboratory indicators, indicating the feasibility of using the HM in routine clinical application.
Citations: 0
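The EF cut-offs quoted above (HFpEF ≥ 50%, HFmrEF 41%-49%, HFrEF ≤ 40%) and the volumetric definition of ejection fraction lend themselves to a short illustration. The sketch below is not the authors' code: the volume values are hypothetical, and EF is computed from end-diastolic and end-systolic volumes as (EDV - ESV) / EDV.

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """LVEF (%) from end-diastolic and end-systolic volumes, the quantity that
    both the Heart-Model and Simpson's biplane method ultimately report."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

def hf_group(lvef_percent: float) -> str:
    """Assign the HF phenotype using the cut-offs quoted in the abstract."""
    if lvef_percent >= 50:
        return "HFpEF"          # preserved EF
    elif lvef_percent >= 41:    # 41%-49%
        return "HFmrEF"         # mid-range EF
    return "HFrEF"              # reduced EF (<= 40%)

# Hypothetical volumes: EDV 180 mL, ESV 120 mL -> EF ~33% -> HFrEF
print(hf_group(ejection_fraction(edv_ml=180.0, esv_ml=120.0)))
```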
MBLEformer: Multi-Scale Bidirectional Lesion Enhancement Transformer for Cervical Cancer Image Segmentation
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-16 | DOI: 10.2174/0115734056357180250516022218
Authors: Shuhui Li, Peng Chen, Jun Zhang, Bing Wang
Background: Accurate segmentation of lesion areas from Lugol's iodine staining images is crucial for screening pre-cancerous cervical lesions. However, in underdeveloped regions lacking skilled clinicians, this method may lead to misdiagnosis and missed diagnoses. In recent years, deep learning methods have been widely applied to assist in medical image segmentation.
Objective: This study aims to improve the accuracy of cervical cancer lesion segmentation by addressing the limitations of Convolutional Neural Networks (CNNs) and attention mechanisms in capturing global features and refining upsampling details.
Methods: This paper presents a Multi-Scale Bidirectional Lesion Enhancement Network, named MBLEformer, which employs a Swin Transformer encoder to extract image features at multiple stages and utilizes a multi-scale attention mechanism to capture semantic features from different perspectives. Additionally, a bidirectional lesion enhancement upsampling strategy is introduced to refine the edge details of lesion areas.
Results: Experimental results demonstrate that the proposed model exhibits superior segmentation performance on a proprietary cervical cancer colposcopic dataset, outperforming other medical image segmentation methods with a mean Intersection over Union (mIoU) of 82.5%, an accuracy of 94.9%, and a specificity of 83.6%.
Conclusion: MBLEformer significantly improves the accuracy of lesion segmentation in iodine-stained cervical cancer images, with the potential to enhance the efficiency and accuracy of pre-cancerous lesion diagnosis and help address the issue of imbalanced medical resources.
Citations: 0
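The headline metric here is mean Intersection over Union. A minimal numpy sketch of mIoU for integer-labelled masks follows; it averages per-class IoUs over classes present in either mask, which is one common convention and not necessarily the authors' exact protocol.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union for integer-labelled segmentation masks."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x2 masks with background (0) and lesion (1) classes
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt, num_classes=2))  # average of background and lesion IoU
```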
Multi-scale based Network and Adaptive EfficientnetB7 with ASPP: Analysis of Novel Brain Tumor Segmentation and Classification
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-15 | DOI: 10.2174/0115734056419990250904093436
Authors: Sheetal Vijay Kulkarni, S Poornapushpakala
Introduction: Medical imaging has undergone significant advancements with the integration of deep learning techniques, leading to enhanced accuracy in image analysis. These methods autonomously extract relevant features from medical images, thereby improving the detection and classification of various diseases. Among imaging modalities, Magnetic Resonance Imaging (MRI) is particularly valuable due to its high contrast resolution, which enables the differentiation of soft tissues, making it indispensable in the diagnosis of brain disorders. The accurate classification of brain tumors is crucial for diagnosing many neurological conditions. However, conventional classification techniques are often limited by high computational complexity and suboptimal accuracy. Motivated by these issues, an innovative model is proposed in this work for segmenting and classifying brain tumors. The research aims to develop a robust and efficient deep learning framework that can assist clinicians in making precise and early diagnoses, ultimately leading to more effective treatment planning. The proposed methodology begins with the acquisition of MRI images from standardized medical imaging databases.
Methods: Subsequently, the abnormal regions of the images are segmented using the Multiscale Bilateral Awareness Network (MBANet), which incorporates multi-scale operations to enhance feature representation and image quality. The segmented images are then processed by a novel classification architecture, termed Region Vision Transformer-based Adaptive EfficientNetB7 with Atrous Spatial Pyramid Pooling (RVAEB7-ASPP). To optimize the performance of the classification model, hyperparameters are fine-tuned using the Modified Random Parameter-based Hippopotamus Optimization Algorithm (MRP-HOA).
Results: The model's effectiveness is verified through a comprehensive experimental evaluation that uses various performance metrics and compares against current state-of-the-art methods. The proposed MRP-HOA-RVAEB7-ASPP model achieves a classification accuracy of 98.2%, significantly outperforming conventional approaches in brain tumor classification tasks.
Discussion: MBANet effectively performs brain tumor segmentation, while the RVAEB7-ASPP model provides reliable classification. The integration of the MRP-HOA-RVAEB7-ASPP model optimizes feature extraction and parameter tuning, leading to improved accuracy and robustness.
Conclusion: The integration of advanced segmentation, adaptive feature extraction, and optimal parameter tuning enhances the reliability and accuracy of the model. This framework provides a more effective and trustworthy solution for the early detection and clinical assessment of brain tumors, leading to improved patient outcomes through timely intervention.
Citations: 0
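The ASPP component of the classifier refers to the standard Atrous Spatial Pyramid Pooling idea: parallel dilated convolutions at several rates fused into one feature map. The PyTorch sketch below shows only that generic block; the dilation rates and channel sizes are assumptions, and it is not the authors' RVAEB7-ASPP implementation.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Generic Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convolutions
    whose outputs are concatenated and fused with a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

features = torch.randn(1, 256, 16, 16)   # stand-in for a backbone feature map
print(ASPP(256, 128)(features).shape)    # torch.Size([1, 128, 16, 16])
```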
Enhanced U-Net with Attention Mechanisms for Improved Feature Representation in Lung Nodule Segmentation
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-11 | DOI: 10.2174/0115734056386382250902064757
Authors: Thin Myat Moe Aung, Arfat Ahmad Khan
Introduction: Accurate segmentation of small and irregular pulmonary nodules remains a significant challenge in lung cancer diagnosis, particularly in complex imaging backgrounds. Traditional U-Net models often struggle to capture long-range dependencies and integrate multi-scale features, limiting their effectiveness in addressing these challenges. To overcome these limitations, this study proposes an enhanced U-Net hybrid model that integrates multiple attention mechanisms to enhance feature representation and improve the precision of segmentation outcomes.
Methods: The proposed model was assessed using the LUNA16 dataset, which contains annotated CT scans of pulmonary nodules. Multiple attention mechanisms, including Spatial Attention (SA), Dilated Efficient Channel Attention (Dilated ECA), the Convolutional Block Attention Module (CBAM), and the Squeeze-and-Excitation (SE) Block, were integrated into a U-Net backbone. These modules were strategically combined to enhance both local and global feature representations. The model's architecture and training procedures were designed to address the challenges of segmenting small and irregular pulmonary nodules.
Results: The proposed model achieved a Dice similarity coefficient of 84.30%, significantly outperforming the baseline U-Net model. This result demonstrates improved accuracy in segmenting small and irregular pulmonary nodules.
Discussion: The integration of multiple attention mechanisms significantly enhances the model's ability to capture both local and global features, addressing key limitations of traditional U-Net architectures. SA preserves spatial features for small nodules, while Dilated ECA captures long-range dependencies. CBAM and SE further refine feature representations. Together, these modules improve segmentation performance in complex imaging backgrounds. A potential limitation is that performance may still be constrained in cases with extreme anatomical variability or low-contrast lesions, suggesting directions for future research.
Conclusion: The enhanced U-Net hybrid model outperforms the traditional U-Net, effectively addressing challenges in segmenting small and irregular pulmonary nodules within complex imaging backgrounds.
Citations: 0
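The reported Dice similarity coefficient compares the predicted and reference nodule masks. A minimal numpy sketch follows, assuming binary masks; `eps` is a small constant added only to avoid division by zero for empty masks.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy prediction and ground-truth masks
pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 3))  # 0.667
```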
SqueezeViX-Net with SOAE: A Prevailing Deep Learning Framework for Accurate Pneumonia Classification using X-Ray and CT Imaging Modalities
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-11 | DOI: 10.2174/0115734056378882250831125120
Authors: N Kavitha, B Anand
Introduction: Pneumonia is a dangerous respiratory illness that, when not properly diagnosed, leads to severe health problems and increased deaths, particularly among at-risk populations. Appropriate treatment requires correct identification of the pneumonia type together with a swift and accurate diagnosis.
Materials and methods: This paper presents SqueezeViX-Net, a deep learning framework specifically designed for pneumonia classification. The model benefits from a Self-Optimized Adaptive Enhancement (SOAE) method, which makes programmed changes to the dropout rate during the training process. This adaptive dropout adjustment mechanism improves model suitability and stability. SqueezeViX-Net was evaluated on extensive X-ray and CT image collections derived from publicly accessible Kaggle repositories.
Results: SqueezeViX-Net outperformed several established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score.
Discussion: The model was validated on a range of pneumonia datasets comprising both CT and X-ray images, demonstrating its ability to handle modality variations.
Conclusion: SqueezeViX-Net integrates SOAE into an advanced framework for the specific identification of pneumonia in clinical use. The model demonstrates excellent diagnostic potential for medical staff through its dynamic learning capabilities and high precision, contributing to improved patient treatment outcomes.
Citations: 0
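The abstract states only that SOAE makes programmed changes to the dropout rate during training, without giving the rule. The sketch below illustrates one hypothetical adaptive-dropout rule (raise dropout when validation loss worsens, lower it otherwise) to make the idea concrete; the step size, bounds, and trigger are assumptions, not the published method.

```python
import torch.nn as nn

def adjust_dropout(model: nn.Module, val_losses: list, step: float = 0.05,
                   p_min: float = 0.1, p_max: float = 0.6) -> None:
    """Hypothetical adaptive-dropout rule (the actual SOAE schedule is not
    specified in the abstract): raise dropout when validation loss rises
    (a sign of overfitting), lower it while the loss is still improving."""
    if len(val_losses) < 2:
        return
    direction = step if val_losses[-1] > val_losses[-2] else -step
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.p = float(min(p_max, max(p_min, m.p + direction)))
```

In this sketch the function would be called once per epoch, after the validation loss has been computed, so every `nn.Dropout` layer in the network shares the same adjusted rate.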
Diagnostic Value of Dual Energy Technology of Dual Source CT in Differentiation Grade of Colorectal Cancer
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-09 | DOI: 10.2174/0115734056360004250828115402
Authors: Sudhir K Yadav, Nan Deng, Jikong Ma, Yixin Liu, Chunmei Zhang, Ling Liu
Introduction: Colorectal cancer (CRC) is a leading cause of cancer-related morbidity and mortality. Accurate differentiation of tumor grade is crucial for prognosis and treatment planning. This study aimed to evaluate the diagnostic value of dual-source CT dual-energy technology parameters in distinguishing CRC differentiation grades.
Methods: A retrospective analysis was conducted on 87 surgically and pathologically confirmed CRC patients (64 with medium-high differentiation and 23 with low differentiation) who underwent dual-source CT dual-energy enhancement scanning. Normalized iodine concentration (NIC), spectral curve slope (K), and dual-energy index (DEI) of the tumor center were measured in the arterial and venous phases. Differences in these parameters between differentiation groups were compared, and ROC curve analysis was performed to assess diagnostic efficacy.
Results: The low-differentiation group exhibited significantly higher NIC, K, and DEI values in both the arterial and venous phases compared to the medium-high differentiation group (P < 0.01). In the arterial phase, NIC, K, and DEI yielded AUC values of 0.920, 0.770, and 0.903, respectively, with sensitivities of 95.7%, 65.2%, and 91.3%, and specificities of 82.8%, 75.0%, and 75.0%. In the venous phase, AUC values were 0.874, 0.837, and 0.886, with sensitivities of 91.3%, 82.6%, and 91.3%, and specificities of 68.75%, 75.0%, and 73.4%. NIC in the arterial phase showed statistically superior diagnostic performance compared to K values (P < 0.05).
Discussion: Dual-energy CT parameters, particularly NIC in the arterial phase, demonstrate high diagnostic accuracy in differentiating CRC grades. These findings suggest that quantitative dual-energy CT metrics can serve as valuable non-invasive tools for tumor characterization, aiding clinical decision-making. Study limitations include its retrospective design and relatively small sample size.
Conclusion: NIC, K, and DEI values in dual-energy CT scans are highly effective in distinguishing CRC differentiation grades, with arterial-phase NIC showing the highest diagnostic performance. These parameters may enhance preoperative assessment and personalized treatment strategies for CRC patients.
Citations: 0
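The diagnostic performance figures above come from ROC analysis of single quantitative parameters such as the arterial-phase NIC. The sketch below reproduces that kind of analysis with scikit-learn on made-up values; the NIC numbers and labels are purely illustrative, and the Youden index is used here as one common way to pick a working cut-off, not necessarily the authors' choice.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative only: hypothetical arterial-phase NIC values for a few patients,
# labelled 1 = low differentiation, 0 = medium-high differentiation.
nic   = np.array([0.18, 0.22, 0.35, 0.41, 0.29, 0.15, 0.38, 0.20])
label = np.array([0,    0,    1,    1,    0,    0,    1,    0])

auc = roc_auc_score(label, nic)
fpr, tpr, thresholds = roc_curve(label, nic)
best = np.argmax(tpr - fpr)            # Youden index selects the cut-off
print(f"AUC={auc:.3f}, cut-off={thresholds[best]:.2f}, "
      f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```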
Diffusion Model-based Medical Image Generation as a Potential Data Augmentation Strategy for AI Applications
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-09-01 | DOI: 10.2174/0115734056401610250827114351
Authors: Zijian Cao, Jueye Zhang, Chen Lin, Tian Li, Hao Wu, Yibao Zhang
Introduction: This study explored a generative image synthesis method based on diffusion models, potentially providing a low-cost, high-efficiency training data augmentation strategy for medical artificial intelligence (AI) applications.
Methods: The MedMNIST v2 dataset was used as a small-volume training dataset under low-performance computing conditions. Based on the characteristics of existing samples, new medical images were synthesized using the proposed annotated diffusion model. In addition to observational assessment, quantitative evaluation was performed based on the gradient descent of the loss function during the generation process and the Fréchet Inception Distance (FID), using various loss functions and feature vector dimensions.
Results: Compared to the original data, the proposed diffusion model successfully generated medical images of similar style but with dramatically varied anatomic details. The model trained with the Huber loss function achieved a higher FID of 15.2 at a feature vector dimension of 2048, compared with the model trained with the L2 loss function, which achieved the best FID of 0.85 at a feature vector dimension of 64.
Discussion: The use of the Huber loss enhanced model robustness, while FID values indicated acceptable similarity between generated and real images. Future work should explore the application of these models to more complex datasets and clinical scenarios.
Conclusion: This study demonstrated that diffusion model-based medical image synthesis is potentially applicable as an augmentation strategy for AI, particularly where access to real clinical data is limited. Optimal training parameters were also proposed by evaluating the dimensionality of feature vectors in FID calculations and the complexity of loss functions.
Citations: 0
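FID, used above to score the generated images, is the Fréchet distance between Gaussian fits of real and generated feature vectors: ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2(C_r C_g)^{1/2}). The sketch below computes it from pre-extracted feature vectors (for example the 64- or 2048-dimensional embeddings mentioned in the abstract); the Inception feature-extraction step is omitted, and the inputs are random stand-ins.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet Inception Distance between two sets of feature vectors
    (rows = images, columns = feature dimensions, e.g. 64 or 2048)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):        # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
print(fid(rng.normal(size=(200, 64)), rng.normal(0.1, 1.0, size=(200, 64))))
```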
Artificial Intelligence-based Liver Volume Measurement Using Preoperative and Postoperative CT Images
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-08-29 | DOI: 10.2174/0115734056394257250818060804
Authors: Kwang Gi Kim, Doojin Kim, Chang Hyun Lee, Jong Chan Yeom, Young Jae Kim, Yeon Ho Park, Jaehun Yang
Introduction: Accurate liver volumetry is crucial for hepatectomy. In this study, we developed and validated a deep learning system for automated liver volumetry in patients undergoing hepatectomy, both preoperatively and at 7 days and 3 months postoperatively.
Methods: A 3D U-Net model was trained on CT images from three time points using a five-fold cross-validation approach. Model performance was assessed with standard metrics and comparatively evaluated across the time points.
Results: The model achieved a mean Dice Similarity Coefficient (DSC) of 94.31% (preoperative: 94.91%; 7-day postoperative: 93.45%; 3-month postoperative: 94.57%) and a mean recall of 96.04%. The volumetric difference between predicted and actual volumes was 1.01 ± 0.06% preoperatively, compared to 1.04 ± 0.03% at the other time points (p < 0.05).
Discussion: This study demonstrates a novel capability to automatically track post-hepatectomy regeneration using AI, offering significant potential to enhance surgical planning and patient monitoring. A key limitation, however, was that the direct correlation with clinical outcomes was not assessed due to constraints of the current dataset. Therefore, future studies using larger, multi-center datasets are essential to validate the model's clinical and prognostic utility.
Conclusion: The developed artificial intelligence model successfully and accurately measured liver volumes across three critical post-hepatectomy time points. These findings support the use of this automated technology as a precise and reliable tool to assist in surgical decision-making and postoperative assessment, providing a strong foundation for enhancing patient care.
Citations: 0
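Liver volume from a CT segmentation is simply the foreground-voxel count scaled by the per-voxel volume derived from the scan spacing. A minimal sketch of that computation, and of the relative volumetric difference reported above, follows; the spacing values and mask are hypothetical.

```python
import numpy as np

def liver_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 0.8, 0.8)) -> float:
    """Liver volume in millilitres from a binary CT segmentation mask:
    foreground-voxel count x per-voxel volume (mm^3 converted to mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def volume_difference_pct(predicted_ml: float, actual_ml: float) -> float:
    """Relative volumetric difference, as reported in the abstract (%)."""
    return abs(predicted_ml - actual_ml) / actual_ml * 100.0

mask = np.zeros((64, 128, 128), dtype=np.uint8)
mask[16:48, 32:96, 32:96] = 1                      # toy liver region
print(liver_volume_ml(mask), volume_difference_pct(1485.0, 1500.0))
```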
Smartphone-Based Anemia Screening via Conjunctival Imaging with 3D-Printed Spacer: A Cost-Effective Geospatial Health Solution
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-08-29 | DOI: 10.2174/0115734056389602250826081355
Authors: A M Arunnagiri, M Sasikala, N Ramadass, G Ramya
Introduction: Anemia is a common blood disorder caused by a low red blood cell count, reducing blood hemoglobin. It affects children, adolescents, and adults of all genders. Anemia diagnosis typically involves invasive procedures such as peripheral blood smears and complete blood count (CBC) analysis. This study aims to develop a cost-effective, non-invasive tool for anemia detection using eye conjunctiva images.
Method: Eye conjunctiva images were captured from 54 subjects using three imaging modalities: a DSLR camera, a smartphone camera, and a smartphone camera fitted with a 3D-printed spacer macro lens. Image processing techniques, including You Only Look Once (YOLOv8), the Segment Anything Model (SAM), and K-means clustering, were used to analyze the images. Using an MLP classifier, the images were classified as anemic, moderately anemic, or normal. The trained model was embedded into an Android application with geotagging capabilities to map the prevalence of anemia in different regions.
Results: Features extracted using SAM segmentation showed higher statistical significance (p < 0.05) compared to K-means. Comparison of the high-resolution (DSLR) modality and the proposed 3D-printed spacer macro lens showed statistically significant differences (p < 0.05). The classification accuracy was 98.3% for images from the 3D spacer-equipped smartphone camera, on par with the 98.8% accuracy obtained from DSLR camera-based images.
Conclusion: The mobile application, developed using images captured with the 3D spacer-equipped modality, provides portable, cost-effective, and user-friendly non-invasive anemia screening. By identifying anemic clusters, it assists healthcare workers in targeted interventions and supports global health initiatives such as Sustainable Development Goal (SDG) 3.
Citations: 0
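One of the steps described above is K-means clustering of the conjunctiva region. The sketch below shows a generic version of that idea with scikit-learn: cluster the ROI's pixel colours and take the dominant cluster centre as a crude pallor descriptor that could feed a downstream MLP classifier. The feature choice and cluster count are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def pallor_features(conjunctiva_rgb: np.ndarray, k: int = 3) -> np.ndarray:
    """Cluster the pixel colours of a conjunctiva ROI and return the centre of
    the largest cluster: a crude redness/pallor descriptor (illustrative only)."""
    pixels = conjunctiva_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    dominant = np.bincount(km.labels_).argmax()
    return km.cluster_centers_[dominant]           # mean [R, G, B] of dominant cluster

roi = np.random.default_rng(0).integers(0, 256, size=(60, 80, 3))  # stand-in ROI
print(pallor_features(roi))
```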
Exploring the Predictive Value of Grading in Regions Beyond Peritumoral Edema in Gliomas Based on Radiomics
Current Medical Imaging Reviews | IF 1.1 | CAS Tier 4, Medicine
Pub Date: 2025-08-28 | DOI: 10.2174/0115734056387494250823132119
Authors: Jie Pan, Jun Lu, Shaohua Peng, Minhai Wang
Introduction: Accurate preoperative grading of adult-type diffuse gliomas is crucial for personalized treatment. Emerging evidence suggests that tumor cell infiltration extends beyond peritumoral edema, but the predictive value of radiomics features in these regions remains underexplored.
Method: A retrospective analysis was conducted on 180 patients from the UCSF-PDGM dataset, split into training (70%) and validation (30%) cohorts. Intratumoral volumes (VOI_I, including tumor body and edema) and peritumoral volumes (VOI_P) at 7 expansion distances (1-5, 10, and 15 mm) were analyzed. Feature selection involved Levene's test, the t-test, mRMR, and LASSO regression. Radiomics models (VOI_I, VOI_P, and combined intratumoral-peritumoral models) were evaluated using AUC, accuracy, sensitivity, specificity, and F1 score, with Delong tests for comparisons.
Results: The combined radiomics models established for the intratumoral and peritumoral 1-5 mm ranges (VOI_1-5mm) showed better predictive performance than the VOI_I model (AUC = 0.815/0.672), among which the VOI_1 model performed best: in the training cohort, the AUC was 0.903 (accuracy = 0.880, sensitivity = 0.905, specificity = 0.855, F1 = 0.884); in the validation cohort, the AUC was 0.904 (accuracy = 0.852, sensitivity = 0.778, specificity = 0.926, F1 = 0.840). This model significantly outperformed the VOI_I model (p < 0.05) and the 10/15 mm combined models (p < 0.05).
Discussion: The peritumoral regions within 5 mm beyond the edematous area contain critical grading information, likely reflecting subtle tumor infiltration. Model performance declined with larger peritumoral distances, possibly due to increased dilution by normal tissue.
Conclusion: The radiomics features of the intratumoral region and the peritumoral region within 5 mm can optimize the preoperative grading of gliomas, providing support for surgical planning and prognostic evaluation.
Citations: 0
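Feature selection in this pipeline ends with LASSO regression. The sketch below shows an L1-penalised selection step on a stand-in radiomics matrix using scikit-learn; the preceding Levene's test, t-test, and mRMR filters are omitted, and the data, penalty strength, and dimensions are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Illustrative stand-in for the radiomics matrix: rows = patients,
# columns = features extracted from VOI_I plus a peritumoral VOI (e.g. VOI_1).
rng = np.random.default_rng(0)
X = rng.normal(size=(126, 400))                    # ~70% training cohort
y = rng.integers(0, 2, size=126)                   # 0 = lower grade, 1 = higher grade

X_std = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_[0])          # features kept by the L1 penalty
print(f"{selected.size} features retained out of {X.shape[1]}")
```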