{"title":"剂量组学、放射组学、深部特征和临床数据的多模态数据集成,用于乳腺癌患者放射性肺损伤预测","authors":"Yan Li , Jun Jiang , Xuyi Li , Mei Zhang","doi":"10.1016/j.jrras.2025.101389","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>Radiation-induced lung damage (RILD) is a critical complication in breast cancer patients undergoing radiotherapy. This study proposes a multi-modal predictive framework integrating dosiomics, radiomics, deep learning-based features, and clinical data to enhance early detection and risk stratification of Grade ≥2 RILD, ultimately supporting personalized radiotherapy planning.</div></div><div><h3>Materials and methods</h3><div>A dataset of 450 breast cancer patients receiving radiotherapy was analyzed, incorporating high-resolution CT scans, 3D spatial dose distributions, and comprehensive clinical parameters such as age, BMI, tumor laterality, chemotherapy regimens, and comorbidities. Imaging data were standardized through voxel resampling and intensity normalization, and features were extracted from both radiomics (215 features) and dosiomics. Mutual Information (MI)-based feature selection was applied to enhance model performance, while a 3D autoencoder with attention mechanisms was utilized to capture spatial and structural patterns linked to RILD. Five-fold cross-validation was performed to ensure robustness.</div></div><div><h3>Results</h3><div>The Intraclass Correlation Coefficient (ICC) analysis identified the most reproducible radiomics features, leading to significant feature reduction while maintaining predictive stability. Multi-modal data integration significantly improved classification performance, with the Voting Classifier achieving 95.89% accuracy and 96.98% sensitivity when using MI-based feature selection. Deep features demonstrated superior predictive power compared to standalone dosimetric data. The 3D autoencoder model with attention mechanisms further enhanced predictive accuracy, achieving 95% accuracy, 0.96 AUC, and 0.93 sensitivity.</div></div><div><h3>Conclusion</h3><div>The proposed multi-modal AI-driven approach effectively predicts Grade ≥2 RILD, addressing limitations of traditional dose-volume metrics. The integration of radiomics, dosiomics, deep learning, and clinical data enhances model accuracy and interpretability, paving the way for personalized risk assessment and optimized radiotherapy planning. Future research should focus on external validation and real-time clinical implementation to further refine predictive capabilities.</div></div>","PeriodicalId":16920,"journal":{"name":"Journal of Radiation Research and Applied Sciences","volume":"18 2","pages":"Article 101389"},"PeriodicalIF":1.7000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-modal data integration of dosiomics, radiomics, deep features, and clinical data for radiation-induced lung damage prediction in breast cancer patients\",\"authors\":\"Yan Li , Jun Jiang , Xuyi Li , Mei Zhang\",\"doi\":\"10.1016/j.jrras.2025.101389\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Objective</h3><div>Radiation-induced lung damage (RILD) is a critical complication in breast cancer patients undergoing radiotherapy. 
This study proposes a multi-modal predictive framework integrating dosiomics, radiomics, deep learning-based features, and clinical data to enhance early detection and risk stratification of Grade ≥2 RILD, ultimately supporting personalized radiotherapy planning.</div></div><div><h3>Materials and methods</h3><div>A dataset of 450 breast cancer patients receiving radiotherapy was analyzed, incorporating high-resolution CT scans, 3D spatial dose distributions, and comprehensive clinical parameters such as age, BMI, tumor laterality, chemotherapy regimens, and comorbidities. Imaging data were standardized through voxel resampling and intensity normalization, and features were extracted from both radiomics (215 features) and dosiomics. Mutual Information (MI)-based feature selection was applied to enhance model performance, while a 3D autoencoder with attention mechanisms was utilized to capture spatial and structural patterns linked to RILD. Five-fold cross-validation was performed to ensure robustness.</div></div><div><h3>Results</h3><div>The Intraclass Correlation Coefficient (ICC) analysis identified the most reproducible radiomics features, leading to significant feature reduction while maintaining predictive stability. Multi-modal data integration significantly improved classification performance, with the Voting Classifier achieving 95.89% accuracy and 96.98% sensitivity when using MI-based feature selection. Deep features demonstrated superior predictive power compared to standalone dosimetric data. The 3D autoencoder model with attention mechanisms further enhanced predictive accuracy, achieving 95% accuracy, 0.96 AUC, and 0.93 sensitivity.</div></div><div><h3>Conclusion</h3><div>The proposed multi-modal AI-driven approach effectively predicts Grade ≥2 RILD, addressing limitations of traditional dose-volume metrics. The integration of radiomics, dosiomics, deep learning, and clinical data enhances model accuracy and interpretability, paving the way for personalized risk assessment and optimized radiotherapy planning. Future research should focus on external validation and real-time clinical implementation to further refine predictive capabilities.</div></div>\",\"PeriodicalId\":16920,\"journal\":{\"name\":\"Journal of Radiation Research and Applied Sciences\",\"volume\":\"18 2\",\"pages\":\"Article 101389\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-03-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Radiation Research and Applied Sciences\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1687850725001013\",\"RegionNum\":4,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Radiation Research and Applied Sciences","FirstCategoryId":"103","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1687850725001013","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Multi-modal data integration of dosiomics, radiomics, deep features, and clinical data for radiation-induced lung damage prediction in breast cancer patients
Objective
Radiation-induced lung damage (RILD) is a critical complication in breast cancer patients undergoing radiotherapy. This study proposes a multi-modal predictive framework integrating dosiomics, radiomics, deep learning-based features, and clinical data to enhance early detection and risk stratification of Grade ≥2 RILD, ultimately supporting personalized radiotherapy planning.
Materials and methods
A dataset of 450 breast cancer patients receiving radiotherapy was analyzed, incorporating high-resolution CT scans, 3D spatial dose distributions, and comprehensive clinical parameters such as age, BMI, tumor laterality, chemotherapy regimens, and comorbidities. Imaging data were standardized through voxel resampling and intensity normalization, and both radiomic (215 features) and dosiomic features were extracted. Mutual Information (MI)-based feature selection was applied to enhance model performance, while a 3D autoencoder with attention mechanisms was used to capture spatial and structural patterns linked to RILD. Five-fold cross-validation was performed to ensure robustness.
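As a rough illustration of the MI-based feature selection and five-fold cross-validation described above, the following Python sketch uses scikit-learn on synthetic placeholder data. The dosiomics feature count, the number of retained features (k=50), and the logistic-regression classifier are assumptions for demonstration only, not the authors' reported configuration.

```python
# Minimal sketch (assumptions noted above): MI-based feature selection over a
# combined radiomics + dosiomics matrix, evaluated with stratified 5-fold CV.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: one row per patient, Grade >=2 RILD as the binary label.
rng = np.random.default_rng(0)
radiomics_features = rng.normal(size=(450, 215))   # 215 radiomics features (as reported)
dosiomics_features = rng.normal(size=(450, 100))   # dosiomics feature count is assumed
rild_grade2_labels = rng.integers(0, 2, size=450)  # synthetic stand-in labels

X = np.hstack([radiomics_features, dosiomics_features])
y = rild_grade2_labels

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("mi_select", SelectKBest(score_func=mutual_info_classif, k=50)),  # k is an assumption
    ("clf", LogisticRegression(max_iter=1000)),                        # placeholder classifier
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
auc_scores = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
print(f"5-fold AUC: {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")
```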
Results
The Intraclass Correlation Coefficient (ICC) analysis identified the most reproducible radiomics features, leading to significant feature reduction while maintaining predictive stability. Multi-modal data integration significantly improved classification performance, with the Voting Classifier achieving 95.89% accuracy and 96.98% sensitivity when using MI-based feature selection. Deep features demonstrated superior predictive power compared to standalone dosimetric data. The 3D autoencoder model with attention mechanisms further enhanced predictive accuracy, achieving 95% accuracy, 0.96 AUC, and 0.93 sensitivity.
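For illustration only, a soft-voting ensemble of the kind reported above could be assembled as in the following scikit-learn sketch. The base learners (logistic regression, random forest, SVM), the train/test split, and the synthetic stand-in for the MI-selected feature matrix are all assumptions, not the authors' reported configuration.

```python
# Minimal sketch (assumed base learners and split): soft-voting ensemble over
# MI-selected features, reporting accuracy and sensitivity (positive-class recall).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the MI-selected feature matrix and Grade >=2 RILD labels.
rng = np.random.default_rng(0)
X_selected = rng.normal(size=(450, 50))
y = rng.integers(0, 2, size=450)

X_train, X_test, y_train, y_test = train_test_split(
    X_selected, y, test_size=0.2, stratify=y, random_state=42
)

voting_clf = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("svm", SVC(probability=True, random_state=42)),
    ],
    voting="soft",  # averages predicted probabilities across base learners
)
voting_clf.fit(X_train, y_train)
y_pred = voting_clf.predict(X_test)

print("Accuracy:   ", accuracy_score(y_test, y_pred))
print("Sensitivity:", recall_score(y_test, y_pred, pos_label=1))
```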
Conclusion
The proposed multi-modal AI-driven approach effectively predicts Grade ≥2 RILD, addressing limitations of traditional dose-volume metrics. The integration of radiomics, dosiomics, deep learning, and clinical data enhances model accuracy and interpretability, paving the way for personalized risk assessment and optimized radiotherapy planning. Future research should focus on external validation and real-time clinical implementation to further refine predictive capabilities.
Journal introduction
Journal of Radiation Research and Applied Sciences provides a high-quality medium for the publication of substantial, original scientific and technological papers on the development and applications of nuclear and radiation science and isotopes in biology, medicine, drugs, biochemistry, microbiology, agriculture, entomology, food technology, chemistry, physics, solid-state science, engineering, and environmental and applied sciences.