Recent studies have confirmed that whole-brain radiation therapy (RT) alters blood-brain barrier and vascular permeability. Optimal therapeutic targeting of cancer depends on the ability to distinguish tumor from normal tissue.
This study employs nested model selection (NMS) and time-frequency analysis of contrast agent concentration time-traces from dynamic contrast-enhanced MRI (DCE-MRI) to characterize the acute (i.e., within hours) RT response of tumor and normal brain tissue in an animal model of brain tumors.
Twenty immune-compromised RNU rats were implanted orthotopically with human U251N glioma cells. Twenty-eight days after implantation, two DCE-MRI studies were performed 24 h apart, with a single 20 Gy dose of stereotactic radiation delivered 1–6.5 h before the second MRI. NMS-based analysis of the DCE-MRI data was used to distinguish three different brain regions. Model 1 was characterized by non-leaky vasculature and was considered normal brain tissue. Model 2 was characterized by contrast agent (CA) movement predominantly in one direction, out of the vasculature, and was primarily associated with the tumor boundary. In contrast, Model 3 exhibited CA movement in both directions, into and out of the vasculature, and corresponded to the tumor core. Time-traces of CA concentration from the pre- and post-RT DCE-MRI data of each model region were analyzed using wavelet-based coherence and wavelet cross-spectrum phase analyses to characterize and rank the magnitude of RT-induced effects. Four distinct time-direction classes (in-phase/anti-phase with lead/lag time) were introduced to describe the impact of RT on CA concentration profiles, allowing RT effects to be compared across the model-based zones of the rat brains.
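As a minimal sketch of this kind of analysis (not the study's code), the following Python example estimates frequency-resolved lead/lag times between pre- and post-RT CA concentration time-traces from the phase of their Morlet wavelet cross-spectrum; the sampling interval `dt`, the dyadic scale grid, and the sign convention are illustrative assumptions.

```python
# Minimal sketch (not the study's code): lead/lag estimation between pre-
# and post-RT CA concentration time-traces via the phase of the Morlet
# wavelet cross-spectrum. dt, the scale grid, and the sign convention
# are illustrative assumptions.
import numpy as np

def morlet_cwt(x, dt, scales, w0=6.0):
    """FFT-based continuous wavelet transform with a Morlet mother wavelet."""
    n = x.size
    freqs = np.fft.fftfreq(n, d=dt)               # Fourier frequencies (Hz)
    xf = np.fft.fft(x)
    W = np.empty((scales.size, n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier-domain Morlet daughter wavelet at scale s (Torrence & Compo)
        psi = (np.pi ** -0.25) * np.exp(-0.5 * (s * 2 * np.pi * freqs - w0) ** 2)
        psi *= np.sqrt(2 * np.pi * s / dt)        # energy normalization
        psi[freqs < 0] = 0.0                      # analytic (one-sided) wavelet
        W[i] = np.fft.ifft(xf * psi)
    return W

def lead_lag_times(ca_pre, ca_post, dt):
    """Phase of the wavelet cross-spectrum converted to time shifts (s)."""
    scales = dt * 2.0 ** np.arange(1, 7)          # dyadic scale grid (assumed)
    W1 = morlet_cwt(ca_pre, dt, scales)
    W2 = morlet_cwt(ca_post, dt, scales)
    phase = np.angle(W1 * np.conj(W2))            # cross-spectrum phase (rad)
    freqs = 6.0 / (2 * np.pi * scales)            # f ~ w0 / (2*pi*s) for Morlet
    # Positive entries: pre-RT trace leads post-RT at that scale (convention).
    return phase / (2 * np.pi * freqs[:, None])
```

In practice, a smoothing step over time and scale, as used in standard wavelet coherence estimators, would precede any interpretation of the phase; it is omitted here for brevity.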
The time-frequency analyses revealed both average lag and lead times between the pre- and post-RT CA concentration profiles for the three model regions. The average lag times were 2.882 s (95% CI: 2.606–3.157) for Model 1, 1.546 s (95% CI: 1.401–1.691) for Model 2, and 2.515 s (95% CI: 2.319–2.711) for Model 3, all exhibiting anti-phase oscillation. The average lead times were 1.892 s (95% CI: 1.757–2.028) for Model 1, 2.632 s (95% CI: 2.366–2.898) for Model 2, and 2.160 s (95% CI: 2.021–2.299) for Model 3, also with anti-phase oscillation. These results imply that, relative to pre-RT, the post-RT CA concentration time-traces of the Model 1 (normal tissue), Model 2 (tumor periphery), and Model 3 (tumor core) regions are dominated by a lag time of 2.882 s (95% CI: 2.606–3.157), a lead time of 2.632 s (95% CI: 2.366–2.898), and a lag time of 2.515 s (95% CI: 2.319–2.711), respectively. RT-induced lead/lag time changes were more significant for the lower-frequency components of the CA concentration profiles in all three models. The analysis further revealed that Model 2 (tumor periphery) exhibited the most significant lead time, implying a shorter retention time of CA after radiation, whereas Model 1 (normal tissue) showed the most pronounced lag time, suggesting a longer retention time of CA.
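For clarity, the reported times relate to the cross-spectrum phase through the standard conversion (our notation, not taken from the study):

\[ \Delta t(f) = \frac{\phi(f)}{2\pi f}, \]

where φ(f) is the wavelet cross-spectrum phase at frequency f, and anti-phase oscillation corresponds to |φ(f)| near π. Because Δt scales inversely with f, a given phase difference maps to a longer lead or lag time at lower frequencies, consistent with the stronger RT-induced lead/lag changes reported for the lower-frequency components.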
This study demonstrates a novel approach for analyzing the time-frequency content of DCE-MRI CA concentration profiles of the animal brain to detect acute changes in tumor and normal tissue physiology in response to RT. The approach is clinically translatable and has the potential to improve treatment planning and RT efficacy.