LET-based approximation of the microdosimetric kinetic model for proton radiotherapy
Alessio Parisi, Keith M. Furutani, Tatsuhiko Sato, Chris J. Beltran
Medical Physics 51(10): 7589–7605. Published 2024-08-17. DOI: 10.1002/mp.17337

Background: Phenomenological relative biological effectiveness (RBE) models for proton therapy, based on the dose-averaged linear energy transfer (LET), have been developed to address the apparent RBE increase toward the end of the proton range. The results of these phenomenological models differ substantially because of their varying empirical assumptions and fitting functions. In contrast, more theory-based approaches, such as the microdosimetric kinetic model (MKM), are used in carbon-ion radiotherapy. However, implementing microdosimetry-based models in LET-based proton therapy treatment planning systems poses challenges.

Purpose: This work presents an LET-based version of the MKM that is practical for clinical use in proton radiotherapy.

Methods: First, we derived an approximation of the Mayo Clinic Florida (MCF) MKM for relatively sparsely ionizing radiation such as protons. The mathematical formalism of the proposed model is equivalent to that of the original MKM, but it retains key features of the MCF MKM, such as the determination of model parameters from measurable cell characteristics. Subsequently, we carried out Monte Carlo calculations with PHITS in different simulated scenarios to establish a heuristic correlation between microdosimetric quantities and the dose-averaged LET of protons.

Results: A simple allometric function was found to describe the relationship between the dose-averaged LET of protons and the dose-mean lineal energy, including the contributions of secondary particles. The LET-based MKM was used to model the in vitro clonogenic survival RBE of five human and rodent cell lines (A549, AG01522, CHO, T98G, and U87) exposed to pristine and spread-out Bragg peak (SOBP) proton beams. The results of the LET-based MKM agree with the biological data as well as or better than the other models included in the study. A sensitivity analysis of the model results was also performed.

Conclusions: The LET-based MKM integrates the predictive theoretical framework of the MCF MKM with a straightforward mathematical description of the RBE based on the dose-averaged LET, a physical quantity readily available in modern treatment planning systems for proton therapy.
{"title":"Reliable multi-modal medical image-to-image translation independent of pixel-wise aligned data","authors":"Langrui Zhou, Guang Li","doi":"10.1002/mp.17362","DOIUrl":"10.1002/mp.17362","url":null,"abstract":"<div>\u0000 \u0000 \u0000 <section>\u0000 \u0000 <h3> Background</h3>\u0000 \u0000 <p>The current mainstream multi-modal medical image-to-image translation methods face a contradiction. Supervised methods with outstanding performance rely on pixel-wise aligned training data to constrain the model optimization. However, obtaining pixel-wise aligned multi-modal medical image datasets is challenging. Unsupervised methods can be trained without paired data, but their reliability cannot be guaranteed. At present, there is no ideal multi-modal medical image-to-image translation method that can generate reliable translation results without the need for pixel-wise aligned data.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Purpose</h3>\u0000 \u0000 <p>This work aims to develop a novel medical image-to-image translation model that is independent of pixel-wise aligned data (MITIA), enabling reliable multi-modal medical image-to-image translation under the condition of misaligned training data.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Methods</h3>\u0000 \u0000 <p>The proposed MITIA model utilizes a prior extraction network composed of a multi-modal medical image registration module and a multi-modal misalignment error detection module to extract pixel-level prior information from training data with misalignment errors to the largest extent. The extracted prior information is then used to construct a regularization term to constrain the optimization of the unsupervised cycle-consistent Generative Adversarial Network model, restricting its solution space and thereby improving the performance and reliability of the generator. We trained the MITIA model using six datasets containing different misalignment errors and two well-aligned datasets. Subsequently, we conducted quantitative analysis using peak signal-to-noise ratio and structural similarity as metrics. Moreover, we compared the proposed method with six other state-of-the-art image-to-image translation methods.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Results</h3>\u0000 \u0000 <p>The results of both quantitative analysis and qualitative visual inspection indicate that MITIA achieves superior performance compared to the competing state-of-the-art methods, both on misaligned data and aligned data. Furthermore, MITIA shows more stability in the presence of misalignment errors in the training data, regardless of their severity or type.</p>\u0000 </section>\u0000 \u0000 <section>\u0000 \u0000 <h3> Conclusions</h3>\u0000 \u0000 <p>The proposed method achieves outstanding performance in multi-modal medical image-to-image translation tasks without aligned training data. 
Due to the difficulty in obtaining pixel-wise aligned data for medical image translation tasks, MITIA is expected","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"51 11","pages":"8283-8301"},"PeriodicalIF":3.2,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141997148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
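The key idea, a registered prior down-weighted where misalignment is detected, can be sketched as one extra term in a CycleGAN generator objective. The tensor names, mask convention, and loss weights below are assumptions for illustration, not MITIA's exact formulation.

```python
import torch
import torch.nn.functional as F

def prior_regularized_generator_loss(real_a, rec_a, fake_b, disc_b_out,
                                     prior_b, prior_mask,
                                     lambda_cyc=10.0, lambda_prior=5.0):
    """Sketch of a prior-regularized CycleGAN generator objective.

    prior_b is a registered estimate of the target-modality image and
    prior_mask (values in [0, 1]) down-weights pixels flagged as
    misaligned by the error-detection module; both are hypothetical
    interfaces standing in for MITIA's prior extraction network.
    """
    adv = F.mse_loss(disc_b_out, torch.ones_like(disc_b_out))  # LSGAN adversarial term
    cyc = F.l1_loss(rec_a, real_a)                             # cycle-consistency term
    prior = (prior_mask * (fake_b - prior_b).abs()).mean()     # pixel-level prior term
    return adv + lambda_cyc * cyc + lambda_prior * prior
```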
Optimization and validation of low-field MP2RAGE T1 mapping on 0.35T MR-Linac: Toward adaptive dose painting with hypoxia biomarkers
Claire Keun Sun Park, Noah Stanley Warner, Evangelia Kaza, Atchar Sudhyadhom
Medical Physics 51(11): 8124–8140. Published 2024-08-14. DOI: 10.1002/mp.17353

Background: Stereotactic MR-guided adaptive radiation therapy (SMART) dose painting for hypoxia has the potential to improve treatment outcomes, but clinical implementation on a low-field MR-Linac faces substantial challenges due to a dramatically lower signal-to-noise ratio (SNR). While quantitative MRI and T1 mapping of hypoxia biomarkers show promise, optimizing the T1-to-noise ratio (T1NR) at low field is paramount, particularly for the clinical implementation of oxygen-enhanced (OE-)MRI. The 3D Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes (MP2RAGE) sequence stands out for its ability to acquire homogeneous T1-weighted contrast images with simultaneous T1 mapping.

Purpose: To optimize MP2RAGE for low-field T1 mapping; to validate it experimentally in a ground-truth phantom; and to establish the feasibility and reproducibility of low-field MP2RAGE acquisition and T1 mapping in healthy volunteers.

Methods: The MP2RAGE protocol was optimized to maximize the contrast-to-noise ratio (CNR) of T1 values in white matter (WM) and gray matter (GM) brain tissues at 0.35T. Low-field MP2RAGE images were acquired on a 0.35T MR-Linac (ViewRay MRIdian) using a multi-channel head coil. T1 mapping was validated with a ground-truth Eurospin phantom containing inserts of known T1 values (400–850 ms), using one- and two-average (1A and 2A) MP2RAGE scans across four acquisition sessions, yielding eight T1 maps. The mean (± SD) T1 relative error, T1NR, and intersession coefficient of variation (CV) were determined. Whole-brain MP2RAGE scans were acquired in five healthy volunteers across two sessions (A and B), and T1 maps were generated. Mean (± SD) T1 values for WM and GM were determined, whole-brain T1 histogram analysis was performed, reproducibility was quantified as the CV between sessions, and voxel-by-voxel T1 difference maps were generated to evaluate 3D spatial variation.

Results: Low-field MP2RAGE optimization yielded the following parameters: MP2RAGE TR of 3250 ms, inversion times (TI1/TI2) of 500/1200 ms, and flip angles (α1/α2) of 7°/5°. Eurospin T1 maps exhibited a mean (± SD) relative error of 3.45% ± 1.30%, T1NR of 20.13 ± 5.31, and CV of 2.22% ± 0.67% across all inserts. Whole-brain MP2RAGE images showed high anatomical …
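For context, the MP2RAGE sequence combines its two inversion-time readouts into a bias-field-insensitive "uniform" image via the standard combination of Marques et al. (2010); the T1NR and CV definitions below (mean over SD across repeated maps) are assumptions consistent with how the abstract uses those metrics.

```python
import numpy as np

def mp2rage_uniform_image(gre_ti1, gre_ti2, eps=1e-12):
    """Standard MP2RAGE 'uniform' T1-weighted combination:
        UNI = Re(S1 * conj(S2)) / (|S1|^2 + |S2|^2),  bounded in [-0.5, 0.5],
    where S1, S2 are the complex GRE images at TI1 and TI2."""
    s1, s2 = np.asarray(gre_ti1), np.asarray(gre_ti2)
    return np.real(s1 * np.conj(s2)) / (np.abs(s1)**2 + np.abs(s2)**2 + eps)

def t1nr(t1_maps):
    """T1-to-noise ratio across repeated T1 maps: voxel-wise mean / SD
    (assumed definition, mirroring SNR)."""
    stack = np.stack(t1_maps)
    return stack.mean(axis=0) / (stack.std(axis=0) + 1e-12)

def intersession_cv(t1_maps):
    """Intersession coefficient of variation: voxel-wise SD / mean, in %."""
    stack = np.stack(t1_maps)
    return 100.0 * stack.std(axis=0) / (stack.mean(axis=0) + 1e-12)
```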
TEAM: Triangular-mEsh Adaptive and Multiscale proton spot generation method
Chao Wang, Bowen Lin, Yuting Lin, Suzanne M. Shontz, Weizhang Huang, Ronald C. Chen, Hao Gao
Medical Physics 51(10): 7067–7079. Published 2024-08-14. DOI: 10.1002/mp.17352

Background: Proton therapy is preferred for its dose conformality, sparing normal tissues and organs-at-risk (OAR) via Bragg peaks with negligible exit dose. However, proton dose conformality can be further optimized: (1) spot placement is based on a structured (e.g., Cartesian) grid, which may not conformally shape complex tumor targets; (2) the spot sampling pattern is uniform, which may be insufficient at the tumor boundary to provide a sharp dose falloff and, at the same time, redundant in the tumor interior for providing uniform dose coverage, for example because of multiple Coulomb scattering (MCS); and (3) the lateral spot penumbra increases with depth due to MCS, which blurs the lateral dose falloff. Moreover, because deliverable spots are subject to a minimum-monitor-unit (MMU) constraint and the dose rate is proportional to the MMU threshold, the current spot sampling method is sensitive to the MMU threshold and can fail to provide satisfactory plan quality for a large MMU threshold (i.e., high-dose-rate delivery).

Purpose: This work develops a novel Triangular-mEsh-based Adaptive and Multiscale (TEAM) proton spot generation method to address these issues, optimizing proton dose conformality and plan delivery efficiency.

Methods: Compared to the standard clinically used spot placement method, TEAM has three key elements: (1) a triangular mesh instead of a structured grid: the triangular mesh is geometrically more conformal to complex target shapes and therefore more efficient and accurate for dose shaping inside and around the target; (2) adaptive instead of uniform sampling: relatively dense sampling at the tumor boundary creates a sharper, more accurate dose falloff, while coarse sampling in the tumor interior covers the target uniformly and more efficiently; and (3) depth-dependent instead of depth-independent sampling: the sampling density compensates for MCS, becoming increasingly dense at the tumor boundary to preserve dose-shaping accuracy and increasingly coarse in the tumor interior to improve efficiency as depth increases. In the TEAM method, spot locations are generated layer-by-layer for each energy layer in a multiscale fashion; spot weights are then derived by solving the IMPT problem with dose-volume planning objectives, MMU constraints, and robustness optimization with respect to range and setup uncertainties.
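A toy illustration of elements (2) and (3), boundary sampling that densifies with depth while interior sampling coarsens, is sketched below. The spacing law, its constants, and the contour-walking placement are invented for illustration and are not TEAM's actual mesh-based algorithm.

```python
import numpy as np

def spot_spacings(depth_cm, base_boundary=0.4, base_interior=0.8, k=0.02):
    """Hypothetical depth-dependent spot spacings (cm): finer at the target
    boundary and coarser in the interior as depth (and MCS blurring) grows.
    The functional form and constants are placeholders, not TEAM's."""
    boundary = base_boundary / (1.0 + k * depth_cm)
    interior = base_interior * (1.0 + k * depth_cm)
    return boundary, interior

def sample_boundary_spots(contour_xy, depth_cm):
    """Place spots along a closed target contour (N x 2 vertices, cm) of one
    energy layer at the boundary spacing above; triangular-mesh interior
    filling is omitted for brevity."""
    spacing, _ = spot_spacings(depth_cm)
    closed = np.vstack([contour_xy, contour_xy[:1]])            # close the loop
    seg = np.diff(closed, axis=0)
    arclen = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.arange(0.0, arclen[-1], spacing)               # equal arc steps
    xs = np.interp(targets, arclen, closed[:, 0])
    ys = np.interp(targets, arclen, closed[:, 1])
    return np.column_stack([xs, ys])

# Example: a square target cross-section at 5 cm and 25 cm depth.
square = np.array([[0, 0], [5, 0], [5, 5], [0, 5]], dtype=float)
print(len(sample_boundary_spots(square, 5.0)),
      len(sample_boundary_spots(square, 25.0)))  # denser boundary at depth
```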
Automatic localization of anatomical landmarks in head cine fluoroscopy images via deep learning
Wilbur KS Fum, Mohammad Nazri Md Shah, Raja Rizal Azman Raja Aman, Khairul Azmi Abd Kadir, Sum Leong, Li Kuo Tan
Medical Physics 51(10): 7191–7205. Published 2024-08-14. DOI: 10.1002/mp.17349

Background: Fluoroscopy-guided interventions (FGIs) pose a risk of prolonged radiation exposure, and personalized patient dosimetry is necessary to improve patient safety during these procedures. However, current FGI systems do not capture the precise exposure regions of the patient, making patient- and procedure-specific dosimetry challenging. There is therefore a pressing need for approaches that extract and use this information to enable personalized radiation dosimetry for interventional procedures.

Purpose: To propose a deep learning (DL) approach for the automatic localization of 3D anatomical landmarks on randomly collimated and magnified 2D head fluoroscopy images.

Materials and methods: The model was developed with datasets comprising 800 000 pseudo-2D synthetic images (a mixture of vessel-enhanced and non-enhanced images), each with 55 annotated anatomical landmarks (including two for the eye lenses), generated from 135 retrospectively collected head computed tomography (CT) volumes. Before training, dynamic random cropping was performed to mimic the varied field-size collimation of FGI procedures. Gaussian-distributed additive noise was applied to each image to enhance the robustness of the DL model against the image degradation that may occur during clinical image acquisition. The model was trained on 629 370 synthetic images for approximately 275 000 iterations and evaluated against a synthetic image test set and a clinical fluoroscopy test set.

Results: The model performs well in estimating in- and out-of-image landmark positions and shows the feasibility of instantiating the skull shape. It detected 96.4% of 2D and 92.5% of 3D landmarks within a 10 mm error on synthetic test images, and it achieved a mean radial error of 3.6 ± 2.3 mm while detecting 96.8% of 2D landmarks within a 10 mm error on clinical fluoroscopy images.

Conclusion: Our deep learning model successfully localizes anatomical landmarks and estimates the gross shape of skull structures from collimated 2D projection views. This method may help identify the exposure region required for patient-specific organ dosimetry in FGI procedures.
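The two figures of merit quoted above, mean radial error and the fraction of landmarks within a tolerance, are straightforward to compute; a minimal sketch (function name and array layout assumed) follows.

```python
import numpy as np

def landmark_metrics(pred_mm, true_mm, tol_mm=10.0):
    """Mean radial error (mm) and detection rate for predicted landmarks.

    pred_mm, true_mm: (N, D) arrays of coordinates in mm, D = 2 or 3.
    The 10 mm tolerance mirrors the threshold reported in the abstract.
    """
    err = np.linalg.norm(np.asarray(pred_mm) - np.asarray(true_mm), axis=1)
    return err.mean(), (err <= tol_mm).mean()

# Example: 3 landmarks, one outside the 10 mm tolerance.
mre, rate = landmark_metrics([[0, 0], [12, 0], [3, 4]],
                             [[1, 0], [0, 0], [0, 0]])
print(f"MRE = {mre:.1f} mm, detection rate = {rate:.0%}")
```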
Weakly-supervised learning-based pathology detection and localization in 3D chest CT scans
Aissam Djahnine, Emilien Jupin-Delevaux, Olivier Nempont, Salim Aymeric Si-Mohamed, Fabien Craighero, Vincent Cottin, Philippe Douek, Alexandre Popoff, Loic Boussel
Medical Physics 51(11): 8272–8282. Published 2024-08-14. DOI: 10.1002/mp.17302

Background: Recent advances in anomaly detection have paved the way for radiological reading assistance tools that support the identification of findings and save time. Clinical adoption of such applications requires a low false-positive rate while maintaining high sensitivity.

Purpose: In light of recent interest and development in multi-pathology identification, we present a novel method, based on a recent contrastive self-supervised approach, for identifying multiple chest-related abnormalities: low lung density area ("LLDA"), consolidation ("CONS"), nodules ("NOD"), and interstitial pattern ("IP"). Our approach alerts radiologists to abnormal regions within a computed tomography (CT) scan by providing 3D localization.

Methods: We introduce a new method for the classification and localization of multiple chest pathologies in 3D chest CT scans, distinguishing four common chest-related abnormalities ("LLDA", "CONS", "NOD", "IP") from "NORMAL". The method is based on a 3D patch-based classifier with a ResNet backbone encoder pretrained with the SimCLR contrastive self-supervised framework on an unannotated dataset of randomly selected patches, then fine-tuned with a classification head on a labeled dataset. During inference, the classifier generates probability maps for each abnormality across the CT volume, which are aggregated to produce a multi-label patient-level prediction. We compare training strategies, including random initialization, ImageNet weight initialization, frozen SimCLR-pretrained weights, and fine-tuned SimCLR-pretrained weights; each strategy is evaluated on a validation set for hyperparameter selection and tested on a test set. Additionally, we explore the fine-tuned SimCLR-pretrained classifier for 3D pathology localization and conduct a qualitative evaluation.

Results: Validated on 111 chest scans for hyperparameter selection and subsequently tested on 251 chest scans with multiple abnormalities, our method achieves an AUROC of 0.931 (95% confidence interval [CI]: [0.9034, 0.9557], p-value < 0.001) and 0.963 (95% CI: [0.952, 0.976], p …
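The aggregation step, patch-level probability maps to a patient-level multi-label call, can be sketched as below. Max-pooling each abnormality channel over the volume is one plausible aggregation rule; the paper's exact rule, threshold, and data layout are not specified here and are assumptions.

```python
import numpy as np

LABELS = ["LLDA", "CONS", "NOD", "IP"]

def patient_level_prediction(prob_maps, threshold=0.5):
    """Aggregate per-patch probability maps into a patient-level
    multi-label prediction.

    prob_maps: dict mapping each abnormality label to a 3D array of
    patch probabilities over the CT volume (hypothetical interface).
    Returns per-label scores and the list of predicted labels,
    falling back to "NORMAL" when nothing exceeds the threshold.
    """
    scores = {k: float(prob_maps[k].max()) for k in LABELS}  # max-pool per channel
    positives = [k for k in LABELS if scores[k] >= threshold]
    return scores, positives or ["NORMAL"]

# Example with random maps over a 4 x 4 x 4 patch grid.
rng = np.random.default_rng(0)
maps = {k: rng.uniform(0, 0.8, size=(4, 4, 4)) for k in LABELS}
print(patient_level_prediction(maps))
```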
A unified generation-registration framework for improved MR-based CT synthesis in proton therapy
Xia Li, Renato Bellotti, Barbara Bachtiary, Jan Hrbacek, Damien C. Weber, Antony J. Lomax, Joachim M. Buhmann, Ye Zhang
Medical Physics 51(11): 8302–8316. Published 2024-08-13. DOI: 10.1002/mp.17338. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1002/mp.17338

Background: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. The critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignments result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning.

Purpose: This study introduces a novel network that cohesively unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images.

Methods: The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly for MR-to-CT synthesis by alternately minimizing the discrepancies between the generated/registered CT images and their reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset of 60 head-and-neck patients, reserving 12 cases for holdout testing.

Results: Compared to the baseline Pix2Pix method, with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, plans recalculated on the resulting sCTs showed remarkably reduced discrepancies relative to the reference proton plans.

Conclusions: This study demonstrates that a holistic MR-based CT synthesis …
{"title":"Seventieth Annual Scientific Meeting of Canadian Organization of Medical Physicists, Delta Hotels, Regina, Saskatchewan, June 5–8, 2024","authors":"","doi":"10.1002/mp.17344","DOIUrl":"https://doi.org/10.1002/mp.17344","url":null,"abstract":"","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"51 8","pages":"5785-5835"},"PeriodicalIF":3.2,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141980372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images
Yuzhou Zhuang, Hong Liu, Wei Fang, Guangzhi Ma, Sisi Sun, Yunfeng Zhu, Xu Zhang, Chuanbin Ge, Wenyang Chen, Jiaosong Long, Enmin Song
Medical Physics 51(11): 8371–8389. Published 2024-08-13. DOI: 10.1002/mp.17354

Background: Precise glioma segmentation from multi-parametric magnetic resonance (MR) images is essential for brain glioma diagnosis. However, due to the indistinct boundaries between tumor sub-regions and the heterogeneous appearance of gliomas in volumetric MR scans, designing a reliable, automated glioma segmentation method remains challenging. Although existing 3D Transformer-based and convolution-based segmentation networks have obtained promising results via multi-modal feature fusion strategies or contextual learning methods, they widely lack the capability for hierarchical interactions between modalities and cannot effectively learn comprehensive feature representations covering all glioma sub-regions.

Purpose: To overcome these problems, we propose a 3D hierarchical cross-modality interaction network (HCMINet) using Transformers and convolutions for accurate multi-modal glioma segmentation. It leverages an effective hierarchical cross-modality interaction strategy to learn modality-specific and modality-shared knowledge relevant to glioma sub-region segmentation from multi-parametric MR images.

Methods: In HCMINet, we first design a hierarchical cross-modality interaction Transformer (HCMITrans) encoder that hierarchically encodes and fuses heterogeneous multi-modal features through Transformer-based intra-modal embeddings and inter-modal interactions across multiple encoding stages, capturing complex cross-modality correlations while modeling global context. We then pair the HCMITrans encoder with a modality-shared convolutional encoder in a dual-encoder architecture, which learns abundant contextual information from both global and local perspectives. Finally, in the decoding stage, we present a progressive hybrid context fusion (PHCF) decoder that progressively fuses the local and global features extracted by the dual-encoder architecture, using a local-global context fusion (LGCF) module to efficiently alleviate the contextual discrepancy among the decoding features.

Results: Extensive experiments were conducted on two public, competitive glioma benchmark datasets: BraTS2020 (494 patients) and BraTS2021 (1251 patients). The results show that our proposed method outperforms the Transformer-based and CNN-based methods using other multi-modal fusion strategies in our experiments. Specifically, the proposed HCMINet achieves state-of-the-art …
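To illustrate the dual-encoder fusion idea, here is a minimal sketch of a local-global fusion block in the spirit of the LGCF module: the convolutional (local) and Transformer (global) feature maps are concatenated, reduced, and gated by channel attention. The layer choices are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    """Sketch of a local-global context fusion block (assumed design):
    concatenate local (convolutional) and global (Transformer) 3D feature
    maps, reduce with a 1x1x1 convolution, then gate with channel attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.reduce = nn.Conv3d(2 * channels, channels, kernel_size=1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),             # squeeze spatial dims
            nn.Conv3d(channels, channels, 1),    # per-channel excitation
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor):
        fused = self.reduce(torch.cat([local_feat, global_feat], dim=1))
        return fused * self.gate(fused)

# Example: fuse two 32-channel 3D feature maps of size 16^3.
block = LocalGlobalFusion(32)
x = torch.randn(1, 32, 16, 16, 16)
print(block(x, x).shape)  # torch.Size([1, 32, 16, 16, 16])
```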