Population-Driven Synthesis of Personalized Cranial Development from Cross-Sectional Pediatric CT Images
Jiawei Liu, Fuyong Xing, Connor Elkhill, Marius George Linguraru, Randy C Miles, Ines A Cruz-Guerrero, Antonio R Porras
IEEE Transactions on Biomedical Engineering, published 2025-03-18. DOI: 10.1109/TBME.2025.3550842
Citations: 0
Abstract
Objective: Predicting normative pediatric growth is crucial for identifying developmental anomalies. While traditional statistical and computational methods have shown promising results in predicting personalized development, they either rely on statistical assumptions that limit generalizability or require longitudinal datasets, which are scarce in children. Recent deep learning methods trained on cross-sectional datasets have shown potential for predicting temporal changes, but they have only succeeded at predicting local intensity changes and can hardly model the major anatomical changes that occur during childhood. We present a novel deep learning method for image synthesis that can be trained using only cross-sectional data to make personalized predictions of pediatric development.
Methods: We designed a new generative adversarial network (GAN) with a novel Siamese cyclic encoder-decoder generator architecture and an identity-preservation mechanism. Our design allows the encoder to learn age- and sex-independent, identity-preserving representations of patient phenotypes from single images by leveraging the statistical distributions in the cross-sectional dataset. The decoder learns to synthesize personalized images at any target age from these encoded representations.
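To make the encoding/decoding idea described above more concrete, below is a minimal PyTorch-style sketch of a conditional encoder-decoder generator with a cyclic identity-preservation penalty. All module names (IdentityEncoder, ConditionalDecoder), layer choices, tensor sizes, and conditioning details are illustrative assumptions, not the authors' implementation; the adversarial discriminator and the full Siamese training procedure are omitted.

```python
# Minimal sketch, assuming: the encoder maps a single image to an identity code
# intended to be free of age/sex information, and the decoder synthesizes an image
# from that code conditioned on a target age and sex. All shapes and names are
# illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class IdentityEncoder(nn.Module):
    """Encode a single-channel 64x64 image into an identity code."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.features(image)

class ConditionalDecoder(nn.Module):
    """Synthesize an image from an identity code plus target age and sex."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.project = nn.Linear(latent_dim + 2, 128 * 8 * 8)  # +2 for (age, sex)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, identity, age, sex):
        z = torch.cat([identity, age, sex], dim=1)
        x = self.project(z).view(-1, 128, 8, 8)
        return self.deconv(x)

# Cyclic identity preservation: re-encoding a synthesized image at a new target
# age should recover the same identity code as the original image.
encoder, decoder = IdentityEncoder(), ConditionalDecoder()
image = torch.randn(4, 1, 64, 64)        # toy batch of single-channel slices
target_age = torch.full((4, 1), 5.0)     # target age in years (illustrative encoding)
sex = torch.zeros(4, 1)                  # binary-coded sex (illustrative encoding)

identity = encoder(image)
synthetic = decoder(identity, target_age, sex)
identity_cycle = encoder(synthetic)
identity_loss = nn.functional.mse_loss(identity_cycle, identity)  # identity-preservation penalty
```

The design point mirrored here is that the identity code is meant to carry only age- and sex-independent phenotype information, so the same code can drive synthesis at any target age while a cyclic penalty discourages the generator from altering patient identity.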
Results: Trained using only cross-sectional head CT images from 2,014 subjects (ages 0-10 years), our model demonstrated state-of-the-art performance when evaluated on an independent longitudinal dataset with images from 51 subjects.
Conclusion: Our method can predict pediatric development and synthesize temporal image sequences with state-of-the-art accuracy without requiring longitudinal images for training.
Significance: Our method enables the personalized prediction of pediatric growth and the longitudinal synthesis of clinical images, thereby providing a patient-specific reference for normative development.
Journal Introduction:
IEEE Transactions on Biomedical Engineering contains basic and applied papers dealing with biomedical engineering. Papers range from engineering development in methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.