Detecting and Mitigating the Clever Hans Effect in Medical Imaging: A Scoping Review.
Constanza Vásquez-Venegas, Chenwei Wu, Saketh Sundar, Renata Prôa, Francis Joshua Beloy, Jillian Reeze Medina, Megan McNichol, Krishnaveni Parvataneni, Nicholas Kurtzman, Felipe Mirshawka, Marcela Aguirre-Jerez, Daniel K Ebner, Leo Anthony Celi
Journal of Imaging Informatics in Medicine. Published 2024-11-25. DOI: 10.1007/s10278-024-01335-z

The Clever Hans effect occurs when machine learning models rely on spurious correlations instead of clinically relevant features, and it poses significant challenges to the development of reliable artificial intelligence (AI) systems in medical imaging. This scoping review provides an overview of methods for identifying and addressing the Clever Hans effect in medical imaging AI algorithms. A total of 173 papers published between 2010 and 2024 were reviewed, and 37 articles were selected for detailed analysis and classified into two categories: detection and mitigation approaches. Detection methods include model-centric, data-centric, and uncertainty- and bias-based approaches, while mitigation strategies encompass data manipulation techniques, feature disentanglement and suppression, and domain knowledge-driven approaches. Despite progress in detecting and mitigating the Clever Hans effect, the majority of current machine learning studies in medical imaging do not report or test for shortcut learning, highlighting the need for more rigorous validation and transparency in AI research. Future research should focus on creating standardized benchmarks, developing automated detection tools, and exploring the integration of detection and mitigation strategies to comprehensively address shortcut learning. Establishing community-driven best practices and leveraging interdisciplinary collaboration will be crucial for ensuring more reliable, generalizable, and equitable AI systems in healthcare.
{"title":"Deep Learning-Based DCE-MRI Automatic Segmentation in Predicting Lesion Nature in BI-RADS Category 4.","authors":"Tianyu Liu, Yurui Hu, Zehua Liu, Zeshuo Jiang, Xiao Ling, Xueling Zhu, Wenfei Li","doi":"10.1007/s10278-024-01340-2","DOIUrl":"https://doi.org/10.1007/s10278-024-01340-2","url":null,"abstract":"<p><p>To investigate whether automatic segmentation based on DCE-MRI with a deep learning (DL) algorithm enabled advantages over manual segmentation in differentiating BI-RADS 4 breast lesions. A total of 197 patients with suspicious breast lesions from two medical centers were enrolled in this study. Patients treated at the First Hospital of Qinhuangdao between January 2018 and April 2024 were included as the training set (n = 138). Patients treated at Lanzhou University Second Hospital were assigned to an external validation set (n = 59). Areas of suspicious lesions were delineated based on DL automatic segmentation and manual segmentation, and evaluated consistency through the Dice correlation coefficient. Radiomics models were constructed based on DL and manual segmentations to predict the nature of BI-RADS 4 lesions. Meanwhile, the nature of the lesions was evaluated by both a professional radiologist and a non-professional radiologist. Finally, the area under the curve value (AUC) and accuracy (ACC) were used to determine which prediction model was more effective. Sixty-four malignant cases (32.5%) and 133 benign cases (67.5%) were included in this study. The DL-based automatic segmentation model showed high consistency with manual segmentation, achieving a Dice coefficient of 0.84 ± 0.11. The DL-based radiomics model demonstrated superior predictive performance compared to professional radiologists, with an AUC of 0.85 (95% CI 0.79-0.92). The DL model significantly reduced working time and improved efficiency by 83.2% compared to manual segmentation, further demonstrating its feasibility for clinical applications. The DL-based radiomics model for automatic segmentation outperformed professional radiologists in distinguishing between benign and malignant lesions in BI-RADS category 4, thereby helping to avoid unnecessary biopsies. This groundbreaking progress suggests that the DL model is expected to be widely applied in clinical practice in the near future, providing an effective auxiliary tool for the diagnosis and treatment of breast cancer.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142718119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ViViEchoformer: Deep Video Regressor Predicting Ejection Fraction.
Taymaz Akan, Sait Alp, Md Shenuarin Bhuiyan, Tarek Helmy, A Wayne Orr, Md Mostafizur Rahman Bhuiyan, Steven A Conrad, John A Vanchiere, Christopher G Kevil, Mohammad Alfrad Nobel Bhuiyan
Journal of Imaging Informatics in Medicine. Published 2024-11-25. DOI: 10.1007/s10278-024-01336-y

Heart disease is the leading cause of death worldwide, and cardiac function as measured by ejection fraction (EF) is an important determinant of outcomes, making accurate EF measurement a critical parameter in patient evaluation. Echocardiograms are commonly used for measuring EF, but human interpretation is limited by intra- and inter-observer (or reader) variance. Deep learning (DL) has driven a resurgence in machine learning, leading to advances in medical applications. We introduce ViViEchoformer, a DL approach that uses a video vision transformer to directly regress left ventricular ejection fraction (LVEF) from echocardiogram videos. The study used a dataset of 10,030 apical-4-chamber echocardiography videos from patients at Stanford University Hospital. The model accurately captures spatial information and preserves inter-frame relationships by extracting spatiotemporal tokens from the video input, allowing accurate, fully automatic EF prediction that aids human assessment and analysis. ViViEchoformer's EF prediction achieved a mean absolute error of 6.14%, a root mean squared error of 8.4%, a mean squared log error of 0.04, and an R² of 0.55. It predicted heart failure with reduced ejection fraction (HFrEF) with an area under the curve of 0.83 and a classification accuracy of 87% using the standard threshold of less than 50% ejection fraction. Our video-based method provides precise quantification of left ventricular function, offering a reliable alternative to human evaluation and establishing a fundamental basis for echocardiogram interpretation.
A Performance Comparison of Different YOLOv7 Networks for High-Accuracy Cell Classification in Bronchoalveolar Lavage Fluid Utilising the Adam Optimiser and Label Smoothing.
Sebastian Rumpf, Nicola Zufall, Florian Rumpf, Andreas Gschwendtner
Journal of Imaging Informatics in Medicine. Published 2024-11-25. DOI: 10.1007/s10278-024-01315-3

Accurate classification of cells in bronchoalveolar lavage (BAL) fluid is essential for the assessment of lung disease in pneumology and critical care medicine. However, the effectiveness of BAL fluid analysis is highly dependent on individual expertise. Our research focuses on improving the accuracy and efficiency of BAL cell classification using the "You Only Look Once" (YOLO) algorithm, to reduce variability and increase the accuracy of cell detection in BAL fluid analysis. We assess four YOLOv7 iterations (YOLOv7, YOLOv7 with Adam and label smoothing, YOLOv7-E6E, and YOLOv7-E6E with Adam and label smoothing), focusing on the detection of four cell types of diagnostic importance in BAL fluid: macrophages, lymphocytes, neutrophils, and eosinophils. The study used cytospin preparations of BAL fluid with May-Grünwald-Giemsa staining and analysed a dataset comprising 2032 images with 42,221 annotations. Classification performance was evaluated using recall, precision, F1 score, mAP@.5, and mAP@.5:.95, along with a confusion matrix. The comparison of the four algorithmic approaches revealed only minor differences in mean results, falling short of statistical significance at the 0.01 and 0.05 levels. YOLOv7, with an inference time of 13.5 ms for 640 × 640 px images, achieved commendable performance across all cell types, with an average F1 score of 0.922, precision of 0.916, recall of 0.928, and mAP@.5 of 0.966. Notably, all four cell types were classified consistently with high performance metrics. YOLOv7 demonstrated marginally better class value dispersion than YOLOv7-adam-label-smoothing, YOLOv7-E6E, and YOLOv7-E6E-adam-label-smoothing, albeit without statistical significance. Consequently, there is limited justification for deploying the more computationally intensive YOLOv7-E6E and YOLOv7-E6E-adam-label-smoothing models. This investigation indicates that the default YOLOv7 variant is the preferred choice for differential cytology due to its accessibility, lower computational demands, and overall more consistent results than the more complex models.
A Systematic Review on the Use of Registration-Based Change Tracking Methods in Longitudinal Radiological Images.
Jeeho E Im, Muhammed Khalifa, Adriana V Gregory, Bradley J Erickson, Timothy L Kline
Journal of Imaging Informatics in Medicine. Published 2024-11-22. DOI: 10.1007/s10278-024-01333-1

Registration is the process of spatially and/or temporally aligning different images. It is a critical tool that can facilitate automatic tracking of pathological changes detected in radiological images and align images captured by different imaging systems and/or acquired using different acquisition parameters. Longitudinal analysis of clinical changes plays a significant role in helping clinicians evaluate disease progression and determine the most suitable course of treatment for patients. This study provides a comprehensive review of the role registration-based approaches play in automated change tracking in radiological imaging. It covers the three types of registration approaches (rigid, affine, and nonrigid) as well as two methods of detecting and quantifying changes in registered longitudinal images: the intensity-based approach and the deformation-based approach. After providing an overview and background, we highlight the clinical applications of these methods, focusing on computed tomography (CT) and magnetic resonance imaging (MRI) in tumors and multiple sclerosis (MS), two of the most heavily studied areas in automated change tracking. We conclude with a discussion and recommendations for future directions.
RIDGE: Reproducibility, Integrity, Dependability, Generalizability, and Efficiency Assessment of Medical Image Segmentation Models.
Farhad Maleki, Linda Moy, Reza Forghani, Tapotosh Ghosh, Katie Ovens, Steve Langer, Pouria Rouzrokh, Bardia Khosravi, Ali Ganjizadeh, Daniel Warren, Roxana Daneshjou, Mana Moassefi, Atlas Haddadi Avval, Susan Sotardi, Neil Tenenholtz, Felipe Kitamura, Timothy Kline
Journal of Imaging Informatics in Medicine. Published 2024-11-18. DOI: 10.1007/s10278-024-01282-9

Deep learning techniques hold immense promise for advancing medical image analysis, particularly in tasks like image segmentation, where precise annotation of regions or volumes of interest within medical images is crucial but manually laborious and prone to interobserver and intraobserver biases. As such, deep learning approaches could provide automated solutions for such applications. However, the potential of these techniques is often undermined by challenges in reproducibility and generalizability, which are key barriers to clinical adoption. This paper introduces the RIDGE checklist, a comprehensive framework for assessing the Reproducibility, Integrity, Dependability, Generalizability, and Efficiency of deep learning-based medical image segmentation models. The RIDGE checklist is not only a tool for evaluation but also a guideline for researchers striving to improve the quality and transparency of their work. By adhering to the principles it outlines, researchers can ensure that their segmentation models are robust, scientifically valid, and applicable in a clinical setting.
Pneumonia Detection from Chest X-Ray Images Using Deep Learning and Transfer Learning for Imbalanced Datasets.
Faisal Alshanketi, Abdulrahman Alharbi, Mathew Kuruvilla, Vahid Mahzoon, Shams Tabrez Siddiqui, Nadim Rana, Ali Tahir
Journal of Imaging Informatics in Medicine. Published 2024-11-18. DOI: 10.1007/s10278-024-01334-0

Pneumonia remains a significant global health challenge, necessitating timely and accurate diagnosis for effective treatment. In recent years, deep learning techniques have emerged as powerful tools for automating pneumonia detection from chest X-ray images. This paper provides a comprehensive investigation into the application of deep learning for pneumonia detection, with an emphasis on overcoming the challenges posed by imbalanced datasets. The study evaluates the performance of various deep learning architectures, including visual geometry group (VGG) networks, residual networks (ResNet), and vision transformers (ViT), together with strategies to mitigate the impact of imbalanced data, on publicly available datasets such as the Chest X-Ray Images (Pneumonia), BRAX, and CheXpert datasets. Transfer learning from models pre-trained on ImageNet is also investigated to leverage prior knowledge for improved performance on pneumonia detection tasks. Our investigation extends to zero-shot and few-shot learning experiments across different geographical regions. The study also explores semi-supervised learning methods, including the Mean Teacher algorithm, to use unlabeled data effectively. Experimental results demonstrate the efficacy of transfer learning, data augmentation, and balanced class weights in addressing imbalanced datasets, leading to improved accuracy and performance in pneumonia detection. Our findings emphasize the importance of selecting strategies appropriate to the dataset's characteristics, with semi-supervised learning showing particular promise in leveraging unlabeled data. These findings highlight the potential of deep learning techniques to revolutionize pneumonia diagnosis and treatment, paving the way for more efficient and accurate clinical workflows.
{"title":"A Comparison of Deep Learning vs. Dental Implantologists in Cone-Beam Computed Tomography-Based Bone Quality Classification.","authors":"Thatphong Pornvoranant, Wannakamon Panyarak, Kittichai Wantanajittikul, Arnon Charuakkra, Pimduen Rungsiyakull, Pisaisit Chaijareenont","doi":"10.1007/s10278-024-01317-1","DOIUrl":"10.1007/s10278-024-01317-1","url":null,"abstract":"<p><p>Bone quality assessment is crucial for pre-surgical implant planning, influencing both implant design and drilling protocol selection. The Lekholm and Zarb (L&Z) classification, which categorizes bone quality into four types based on cortical bone width and trabecular bone density using cone-beam computed tomography (CBCT) data, lacks quantitative guidelines, leading to subjective interpretations. This study aimed to compare the performance of deep learning (DL)-based approaches against human examiners in assessing bone quality, according to the L&Z classification, using CBCT images. A dataset of 1100 CBCT cross-sectional slices was classified into four bone types by two oral and maxillofacial radiologists. Five pre-trained DL models were trained on 1000 images using MATLAB<sup>®</sup>, with 100 images reserved for testing. Inception-ResNet-v2 achieved the highest accuracy (86.00%) with a learning rate of 0.001. The performance of Inception-ResNet-v2 was then compared to that of 23 residency students and two experienced implantologists. The DL model outperformed human assessors across all parameters, demonstrating excellent precision and recall, with F1-scores exceeding 75%. Notably, residency students and one implantologist struggled to distinguish bone type 2, with low recall rates (48.15% and 40.74%, respectively). In conclusion, the Inception-ResNet-v2 DL model demonstrated superior performance compared to novice implantologists, suggesting its potential as an supplementary tool for cross-sectional bone quality assessment.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active Learning with Particle Swarm Optimization for Enhanced Skin Cancer Classification Utilizing Deep CNN Models.","authors":"Sayantani Mandal, Subhayu Ghosh, Nanda Dulal Jana, Somenath Chakraborty, Saurav Mallik","doi":"10.1007/s10278-024-01327-z","DOIUrl":"10.1007/s10278-024-01327-z","url":null,"abstract":"<p><p>Skin cancer is a critical global health issue, with millions of non-melanoma and melanoma cases diagnosed annually. Early detection is essential to improving patient outcomes, yet traditional deep learning models for skin cancer classification are often limited by the need for large, annotated datasets and extensive computational resources. The aim of this study is to address these limitations by proposing an efficient skin cancer classification framework that integrates active learning (AL) with particle swarm optimization (PSO). The AL framework selectively identifies the most informative unlabeled instances for expert annotation, minimizing labeling costs while optimizing classifier performance. PSO, a nature-inspired metaheuristic algorithm, enhances the selection process within AL, ensuring the most relevant data points are chosen. This method was applied to train multiple Convolutional Neural Network (CNN) models on the HAM10000 skin lesion dataset. Experimental results demonstrate that the proposed AL-PSO approach significantly improves classification accuracy, with the Least Confidence strategy achieving approximately 89.4% accuracy while using only 40% of the labeled training data. This represents a substantial improvement over traditional approaches in terms of both accuracy and efficiency. The findings indicate that the integration of AL and PSO can accelerate the adoption of AI in clinical settings for skin cancer detection. The code for this study is publicly available at ( https://github.com/Sayantani-31/AL-PSO ).</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning-Based Pediatric Brain Region Segmentation and Volumetric Analysis for General Growth Pattern in Healthy Children.","authors":"Hui Zheng, Xinyun Wang, Ming Liu, Qiufeng Yin, Zhengwei Zhang, Ying Wei, Feng Shi, Dengbin Wang, Yuzhen Zhang","doi":"10.1007/s10278-024-01305-5","DOIUrl":"https://doi.org/10.1007/s10278-024-01305-5","url":null,"abstract":"<p><p>To establish a quantitative reference for brain structural changes in children with neurological disorders, we employed deep learning technique to brain region segmentation and volumetric analysis within a cohort of healthy children. In this study, we recruited 312 participants aged 1.5 to 14.5 years (210 boys and 102 girls), dividing them into five age groups. High-resolution structural T1-weighted images were obtained, and an established toolkit utilizing deep learning algorithms was employed for brain region segmentation. For each age group, the volumes of gray matter and white matter, along with the thickness and surface area of the cortex, were calculated and compared between boys and girls. The results indicated that the volumes of gray matter and white matter in both bilateral cerebral hemispheres, as well as the total brain volume, increased with age. Furthermore, the volumes of the left and right hippocampus, amygdala, and thalamus also demonstrated an increase as age progressed. Conversely, cortical thickness and surface area decreased with age. Our findings provide a quantitative reference for understanding brain structural changes in children with neurological disorders.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142635737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}