Latest articles from Zeitschrift fur Medizinische Physik

Automatic detection of brain tumors with the aid of ensemble deep learning architectures and class activation map indicators by employing magnetic resonance images
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2022.11.010
Omer Turk , Davut Ozhan , Emrullah Acar , Tahir Cetin Akinci , Musa Yilmaz
{"title":"Automatic detection of brain tumors with the aid of ensemble deep learning architectures and class activation map indicators by employing magnetic resonance images","authors":"Omer Turk ,&nbsp;Davut Ozhan ,&nbsp;Emrullah Acar ,&nbsp;Tahir Cetin Akinci ,&nbsp;Musa Yilmaz","doi":"10.1016/j.zemedi.2022.11.010","DOIUrl":"10.1016/j.zemedi.2022.11.010","url":null,"abstract":"<div><p>Today, as in every life-threatening disease, early diagnosis of brain tumors plays a life-saving role. The brain tumor is formed by the transformation of brain cells from their normal structures into abnormal cell structures. These formed abnormal cells begin to form in masses in the brain regions. Nowadays, many different techniques are employed to detect these tumor masses, and the most common of these techniques is Magnetic Resonance Imaging (MRI). In this study, it is aimed to automatically detect brain tumors with the help of ensemble deep learning architectures (ResNet50, VGG19, InceptionV3 and MobileNet) and Class Activation Maps (CAMs) indicators by employing MRI images. The proposed system was implemented in three stages. In the first stage, it was determined whether there was a tumor in the MR images (Binary Approach). In the second stage, different tumor types (Normal, Glioma Tumor, Meningioma Tumor, Pituitary Tumor) were detected from MR images (Multi-class Approach). In the last stage, CAMs of each tumor group were created as an alternative tool to facilitate the work of specialists in tumor detection. The results showed that the overall accuracy of the binary approach was calculated as 100% on the ResNet50, InceptionV3 and MobileNet architectures, and 99.71% on the VGG19 architecture. Moreover, the accuracy values of 96.45% with ResNet50, 93.40% with VGG19, 85.03% with InceptionV3 and 89.34% with MobileNet architectures were obtained in the multi-class approach.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388922001313/pdfft?md5=e55da35d209b688226a3577197edb180&pid=1-s2.0-S0939388922001313-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10466945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
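As a concrete (and entirely illustrative) picture of the class activation map indicator mentioned in this abstract, the sketch below computes a Grad-CAM-style heatmap from the last convolutional block of a torchvision ResNet50. The backbone, the four-class head, the input size and the preprocessing are assumptions for the example, not details taken from the paper.

```python
# Hypothetical Grad-CAM-style sketch (not the authors' implementation):
# highlight the image regions that drive a CNN's tumor-class prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)                 # assumed backbone
model.fc = torch.nn.Linear(model.fc.in_features, 4)   # 4 classes assumed: normal/glioma/meningioma/pituitary
model.eval()

features, gradients = {}, {}

def fwd_hook(module, inp, out):
    features["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# layer4 is the last convolutional block of torchvision's ResNet50
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed MR slice
logits = model(x)
class_idx = int(logits.argmax(dim=1))
logits[0, class_idx].backward()            # gradients of the predicted class score

# Global-average-pool the gradients to get per-channel weights,
# then form a weighted sum of the feature maps (Grad-CAM).
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * features["value"]).sum(dim=1))        # (1, H', W')
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1] for overlay
```

Overlaying the normalized map on the input slice gives the kind of visual indicator the abstract describes as a support tool for specialists.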
Feature-guided deep learning reduces signal loss and increases lesion CNR in diffusion-weighted imaging of the liver
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2023.07.005
Tobit Führes , Marc Saake , Jennifer Lorenz , Hannes Seuss , Sebastian Bickelhaupt , Michael Uder , Frederik Bernd Laun
{"title":"Feature-guided deep learning reduces signal loss and increases lesion CNR in diffusion-weighted imaging of the liver","authors":"Tobit Führes ,&nbsp;Marc Saake ,&nbsp;Jennifer Lorenz ,&nbsp;Hannes Seuss ,&nbsp;Sebastian Bickelhaupt ,&nbsp;Michael Uder ,&nbsp;Frederik Bernd Laun","doi":"10.1016/j.zemedi.2023.07.005","DOIUrl":"10.1016/j.zemedi.2023.07.005","url":null,"abstract":"<div><h3><strong>Purpose</strong></h3><p>This research aims to develop a feature-guided deep learning approach and compare it with an optimized conventional post-processing algorithm in order to enhance the image quality of diffusion-weighted liver images and, in particular, to reduce the pulsation-induced signal loss occurring predominantly in the left liver lobe.</p></div><div><h3><strong>Methods</strong></h3><p>Data from 40 patients with liver lesions were used. For the conventional approach, the best-suited out of five examined algorithms was chosen. For the deep learning approach, a U-Net was trained. Instead of learning “gold-standard” target images, the network was trained to optimize four image features (lesion CNR, vessel darkness, data consistency, and pulsation artifact reduction), which could be assessed quantitatively using manually drawn ROIs. A quality score was calculated from these four features. As an additional quality assessment, three radiologists rated different features of the resulting images.</p></div><div><h3><strong>Results</strong></h3><p>The conventional approach could substantially increase the lesion CNR and reduce the pulsation-induced signal loss. However, the vessel darkness was reduced. The deep learning approach increased the lesion CNR and reduced the signal loss to a slightly lower extent, but it could additionally increase the vessel darkness. According to the image quality score, the quality of the deep-learning images was higher than that of the images obtained using the conventional approach. The radiologist ratings were mostly consistent with the quantitative scores, but the overall quality ratings differed among the readers.</p></div><div><h3><strong>Conclusion</strong></h3><p>Unlike the conventional algorithm, the deep-learning algorithm increased the vessel darkness. Therefore, it may be a viable alternative to conventional algorithms.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388923000879/pdfft?md5=b3e5b6c0be696f64222a77e9bdedeec2&pid=1-s2.0-S0939388923000879-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9931929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
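The quality score described in this abstract is built from four ROI-based image features; as a minimal sketch of one of them, the snippet below computes a lesion CNR from a lesion ROI and a surrounding liver ROI. The exact feature definitions and the weighting of the overall score are not reproduced here, and the masks and values are synthetic.

```python
# Hypothetical sketch: a ROI-based lesion CNR, one of the four image features
# mentioned in the abstract (the paper's exact definitions and the weighting
# of the overall quality score are not reproduced here).
import numpy as np

def lesion_cnr(image: np.ndarray, lesion_mask: np.ndarray, liver_mask: np.ndarray) -> float:
    """Contrast-to-noise ratio between a lesion ROI and a surrounding liver ROI."""
    lesion = image[lesion_mask.astype(bool)]
    background = image[liver_mask.astype(bool)]
    return float(abs(lesion.mean() - background.mean()) / (background.std() + 1e-8))

# toy example with a synthetic diffusion-weighted slice
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, size=(128, 128))
lesion_mask = np.zeros(img.shape, dtype=bool)
lesion_mask[60:68, 60:68] = True
img[lesion_mask] += 40.0                  # brighter lesion
liver_mask = np.zeros(img.shape, dtype=bool)
liver_mask[40:90, 40:90] = True
liver_mask &= ~lesion_mask                # exclude the lesion from the background ROI
print(f"lesion CNR = {lesion_cnr(img, lesion_mask, liver_mask):.2f}")
```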
Artificial intelligence in medical physics
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2024.03.002
Steffen Bollmann, Thomas Küstner, Qian Tao, Frank G Zöllner
{"title":"Artificial intelligence in medical physics","authors":"Steffen Bollmann,&nbsp;Thomas Küstner,&nbsp;Qian Tao,&nbsp;Frank G Zöllner","doi":"10.1016/j.zemedi.2024.03.002","DOIUrl":"10.1016/j.zemedi.2024.03.002","url":null,"abstract":"","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S093938892400028X/pdfft?md5=a92b28a357f1d136a6ea66579890411c&pid=1-s2.0-S093938892400028X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140208799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic AI-based contouring of prostate MRI for online adaptive radiotherapy
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2023.05.001
Marcel Nachbar , Monica lo Russo , Cihan Gani , Simon Boeke , Daniel Wegener , Frank Paulsen , Daniel Zips , Thais Roque , Nikos Paragios , Daniela Thorwarth
{"title":"Automatic AI-based contouring of prostate MRI for online adaptive radiotherapy","authors":"Marcel Nachbar ,&nbsp;Monica lo Russo ,&nbsp;Cihan Gani ,&nbsp;Simon Boeke ,&nbsp;Daniel Wegener ,&nbsp;Frank Paulsen ,&nbsp;Daniel Zips ,&nbsp;Thais Roque ,&nbsp;Nikos Paragios ,&nbsp;Daniela Thorwarth","doi":"10.1016/j.zemedi.2023.05.001","DOIUrl":"10.1016/j.zemedi.2023.05.001","url":null,"abstract":"<div><h3>Background and purpose</h3><p>MR-guided radiotherapy (MRgRT) online plan adaptation accounts for tumor volume changes, interfraction motion and thus allows daily sparing of relevant organs at risk. Due to the high interfraction variability of bladder and rectum, patients with tumors in the pelvic region may strongly benefit from adaptive MRgRT. Currently, fast automatic annotation of anatomical structures is not available within the online MRgRT workflow. Therefore, the aim of this study was to train and validate a fast, accurate deep learning model for automatic MRI segmentation at the MR-Linac for future implementation in a clinical MRgRT workflow.</p></div><div><h3>Materials and methods</h3><p>For a total of 47 patients, T2w MRI data were acquired on a 1.5 T MR-Linac (Unity, Elekta) on five different days. Prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, body and bony structures were manually annotated. These training data consisting of 232 data sets in total was used for the generation of a deep learning based autocontouring model and validated on 20 unseen T2w-MRIs. For quantitative evaluation the validation set was contoured by a radiation oncologist as gold standard contours (GSC) and compared in MATLAB to the automatic contours (AIC). For the evaluation, dice similarity coefficients (DSC), and 95% Hausdorff distances (95% HD), added path length (APL) and surface DSC (sDSC) were calculated in a caudal-cranial window of <span><math><mrow><mo>±</mo></mrow></math></span> 4 cm with respect to the prostate ends. For qualitative evaluation, five radiation oncologists scored the AIC on the possible usage within an online adaptive workflow as follows: (1) no modifications needed, (2) minor adjustments needed, (3) major adjustments/ multiple minor adjustments needed, (4) not usable.</p></div><div><h3>Results</h3><p>The quantitative evaluation revealed a maximum median 95% HD of 6.9 mm for the rectum and minimum median 95% HD of 2.7 mm for the bladder. Maximal and minimal median DSC were detected for bladder with 0.97 and for penile bulb with 0.73, respectively. Using a tolerance level of 3 mm, the highest and lowest sDSC were determined for rectum (0.94) and anal canal (0.68), respectively.</p><p>Qualitative evaluation resulted in a mean score of 1.2 for AICs over all organs and patients across all expert ratings. For the different autocontoured structures, the highest mean score of 1.0 was observed for anal canal, sacrum, femur left and right, and pelvis left, whereas for prostate the lowest mean score of 2.0 was detected. In total, 80% of the contours were rated be clinically acceptable, 16% to require minor and 4% major adjustments for online adaptive MRgRT.</p></div><div><h3>Conclusion</h3><p>In this study, an AI-based autocontouring was successfully trained for online adaptive MR-guided radiotherapy on the 1.5 T MR-Linac system. 
The developed model can automatically generate contours accepted by physicians (80%) o","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388923000533/pdfft?md5=4c8a5787fe97a32ec18b4426b3597127&pid=1-s2.0-S0939388923000533-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9562444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
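The quantitative contour evaluation in this abstract relies on standard overlap and surface-distance metrics; the sketch below shows one common way to compute the Dice similarity coefficient and a 95% Hausdorff distance for binary masks with NumPy/SciPy. Voxel-spacing handling, the ± 4 cm evaluation window, APL and surface DSC are left out, so this is only an approximation of the study's evaluation pipeline.

```python
# Hypothetical evaluation sketch: Dice similarity coefficient and 95% Hausdorff
# distance between a gold-standard contour (GSC) and an automatic contour (AIC),
# both given as binary masks.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (spacing assumed in mm)."""
    a, b = a.astype(bool), b.astype(bool)
    # surface voxels = mask minus its erosion
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # distance from every voxel to the other structure's surface
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]
    d_ba = dist_to_a[surf_b]
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# toy example: two slightly shifted spheres standing in for GSC and AIC
zz, yy, xx = np.mgrid[:64, :64, :64]
gsc = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
aic = (zz - 32) ** 2 + (yy - 30) ** 2 + (xx - 33) ** 2 < 14 ** 2
print(f"DSC = {dice(gsc, aic):.3f}, 95% HD = {hd95(gsc, aic):.1f} mm")
```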
Towards quality management of artificial intelligence systems for medical applications
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2024.02.001
Lorenzo Mercolli, Axel Rominger, Kuangyu Shi
{"title":"Towards quality management of artificial intelligence systems for medical applications","authors":"Lorenzo Mercolli,&nbsp;Axel Rominger,&nbsp;Kuangyu Shi","doi":"10.1016/j.zemedi.2024.02.001","DOIUrl":"10.1016/j.zemedi.2024.02.001","url":null,"abstract":"<div><p>The use of artificial intelligence systems in clinical routine is still hampered by the necessity of a medical device certification and/or by the difficulty of implementing these systems in a clinic’s quality management system. In this context, the key questions for a user are how to ensure robust model predictions and how to appraise the quality of a model’s results on a regular basis.</p><p>In this paper we discuss some conceptual foundation for a clinical implementation of a machine learning system and argue that both vendors and users should take certain responsibilities, as is already common practice for high-risk medical equipment.</p><p>We propose the methodology from AAPM Task Group 100 report No. 283 as a conceptual framework for developing risk-driven a quality management program for a clinical process that encompasses a machine learning system. This is illustrated with an example of a clinical workflow. Our analysis shows how the risk evaluation in this framework can accommodate artificial intelligence based systems independently of their robustness evaluation or the user’s in–house expertise. In particular, we highlight how the degree of interpretability of a machine learning system can be systematically accounted for within the risk evaluation and in the development of a quality management system.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388924000242/pdfft?md5=309f6a0c3aedbe399d5a372c060278f6&pid=1-s2.0-S0939388924000242-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139984975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
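The AAPM TG-100 methodology referenced here is built around failure mode and effects analysis, in which each failure mode is scored for occurrence, severity and (lack of) detectability and ranked by the risk priority number RPN = O x S x D. The sketch below applies that scoring to a few invented failure modes of an ML-assisted workflow; the modes and scores are illustrative assumptions, not content from the paper.

```python
# Minimal FMEA-style sketch in the spirit of the TG-100 methodology cited above:
# rank failure modes of an ML-assisted clinical workflow by risk priority number
# RPN = occurrence * severity * lack of detectability (each scored 1-10).
# The listed failure modes and scores are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    description: str
    occurrence: int      # 1 (rare) .. 10 (frequent)
    severity: int        # 1 (negligible) .. 10 (catastrophic)
    detectability: int   # 1 (almost always detected) .. 10 (rarely detected)

    @property
    def rpn(self) -> int:
        return self.occurrence * self.severity * self.detectability

modes = [
    FailureMode("model inference", "out-of-distribution input, silently wrong output", 4, 8, 7),
    FailureMode("data transfer", "wrong image series sent to the model", 3, 7, 4),
    FailureMode("human review", "reviewer accepts an implausible AI result", 5, 8, 5),
]

# highest-risk failure modes first, as a basis for prioritizing QM measures
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  [{fm.step}] {fm.description}")
```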
PSMA-PET improves deep learning-based automated CT kidney segmentation
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2023.08.006
Julian Leube, Matthias Horn, Philipp E. Hartrampf, Andreas K. Buck, Michael Lassmann, Johannes Tran-Gia
{"title":"PSMA-PET improves deep learning-based automated CT kidney segmentation","authors":"Julian Leube,&nbsp;Matthias Horn,&nbsp;Philipp E. Hartrampf,&nbsp;Andreas K. Buck,&nbsp;Michael Lassmann,&nbsp;Johannes Tran-Gia","doi":"10.1016/j.zemedi.2023.08.006","DOIUrl":"10.1016/j.zemedi.2023.08.006","url":null,"abstract":"<div><p>For dosimetry of radiopharmaceutical therapies, it is essential to determine the volume of relevant structures exposed to therapeutic radiation. For many radiopharmaceuticals, the kidneys represent an important organ-at-risk. To reduce the time required for kidney segmentation, which is often still performed manually, numerous approaches have been presented in recent years to apply deep learning-based methods for CT-based automated segmentation. While the automatic segmentation methods presented so far have been based solely on CT information, the aim of this work is to examine the added value of incorporating PSMA-PET data in the automatic kidney segmentation.</p></div><div><h3><strong>Methods</strong></h3><p>A total of 108 PET/CT examinations (53 [<sup>68</sup>Ga]Ga-PSMA-I&amp;T and 55 [<sup>18</sup>F]F-PSMA-1007 examinations) were grouped to create a reference data set of manual segmentations of the kidney. These segmentations were performed by a human examiner. For each subject, two segmentations were carried out: one CT-based (detailed) segmentation and one PET-based (coarser) segmentation. Five different u-net based approaches were applied to the data set to perform an automated segmentation of the kidney: CT images only, PET images only (coarse segmentation), a combination of CT and PET images, a combination of CT images and a PET-based coarse mask, and a CT image, which had been pre-segmented using a PET-based coarse mask. A quantitative assessment of these approaches was performed based on a test data set of 20 patients, including Dice score, volume deviation and average Hausdorff distance between automated and manual segmentations. Additionally, a visual evaluation of automated segmentations for 100 additional (i.e., exclusively automatically segmented) patients was performed by a nuclear physician.</p></div><div><h3><strong>Results</strong></h3><p>Out of all approaches, the best results were achieved by using CT images which had been pre-segmented using a PET-based coarse mask as input. In addition, this method performed significantly better than the segmentation based solely on CT, which was supported by the visual examination of the additional segmentations. In 80% of the cases, the segmentations created by exploiting the PET-based pre-segmentation were preferred by the nuclear physician.</p></div><div><h3><strong>Conclusion</strong></h3><p>This study shows that deep-learning based kidney segmentation can be significantly improved through the addition of a PET-based pre-segmentation. The presented method was shown to be especially beneficial for kidneys with cysts or kidneys that are closely adjacent to other organs such as the spleen, liver or pancreas. 
In the future, this could lead to a considerable reduction in the time required for dosimetry calculations as well as an improvement in the results.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388923000958/pdfft?md5=905c071b84bb04d8b4d49a82783a3b94&pid=1-s2.0-S0939388923000958-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10145024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
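The best-performing input in this study was a CT volume pre-segmented with a PET-based coarse mask. One plausible reading of that idea, shown below, is to keep only the CT voxels inside a dilated PET-derived kidney region and set everything else to a background HU value before passing the volume to the network; the dilation margin, background value and toy data are assumptions, since the exact pre-segmentation step is not spelled out in the abstract.

```python
# Hypothetical sketch of "CT pre-segmented with a PET-based coarse mask":
# keep only CT voxels inside a dilated PET-derived kidney mask and set the rest
# to a background HU value before passing the volume to a segmentation network.
# The dilation margin and background value are assumptions, not taken from the paper.
import numpy as np
from scipy import ndimage

def presegment_ct(ct_hu: np.ndarray, pet_coarse_mask: np.ndarray,
                  margin_vox: int = 5, background_hu: float = -1000.0) -> np.ndarray:
    roi = ndimage.binary_dilation(pet_coarse_mask.astype(bool), iterations=margin_vox)
    out = np.full_like(ct_hu, background_hu, dtype=np.float32)
    out[roi] = ct_hu[roi]
    return out

# toy volumes standing in for a CT and a coarse PET-based kidney mask
ct = np.random.normal(40.0, 20.0, size=(64, 96, 96)).astype(np.float32)
pet_mask = np.zeros(ct.shape, dtype=bool)
pet_mask[20:40, 30:55, 25:50] = True
ct_preseg = presegment_ct(ct, pet_mask)
print(ct_preseg.shape, float(ct_preseg.min()), float(ct_preseg.max()))
```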
Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2023.01.008
Ghasem Hajianfar , Maziar Sabouri , Yazdan Salimi , Mehdi Amini , Soroush Bagheri , Elnaz Jenabi , Sepideh Hekmat , Mehdi Maghsudi , Zahra Mansouri , Maziar Khateri , Mohammad Hosein Jamshidi , Esmail Jafari , Ahmad Bitarafan Rajabi , Majid Assadi , Mehrdad Oveisi , Isaac Shiri , Habib Zaidi
{"title":"Artificial intelligence-based analysis of whole-body bone scintigraphy: The quest for the optimal deep learning algorithm and comparison with human observer performance","authors":"Ghasem Hajianfar ,&nbsp;Maziar Sabouri ,&nbsp;Yazdan Salimi ,&nbsp;Mehdi Amini ,&nbsp;Soroush Bagheri ,&nbsp;Elnaz Jenabi ,&nbsp;Sepideh Hekmat ,&nbsp;Mehdi Maghsudi ,&nbsp;Zahra Mansouri ,&nbsp;Maziar Khateri ,&nbsp;Mohammad Hosein Jamshidi ,&nbsp;Esmail Jafari ,&nbsp;Ahmad Bitarafan Rajabi ,&nbsp;Majid Assadi ,&nbsp;Mehrdad Oveisi ,&nbsp;Isaac Shiri ,&nbsp;Habib Zaidi","doi":"10.1016/j.zemedi.2023.01.008","DOIUrl":"10.1016/j.zemedi.2023.01.008","url":null,"abstract":"<div><h3>Purpose</h3><p>Whole-body bone scintigraphy (WBS) is one of the most widely used modalities in diagnosing malignant bone diseases during the early stages. However, the procedure is time-consuming and requires vigour and experience. Moreover, interpretation of WBS scans in the early stages of the disorders might be challenging because the patterns often reflect normal appearance that is prone to subjective interpretation. To simplify the gruelling, subjective, and prone-to-error task of interpreting WBS scans, we developed deep learning (DL) models to automate two major analyses, namely (i) classification of scans into normal and abnormal and (ii) discrimination between malignant and non-neoplastic bone diseases, and compared their performance with human observers.</p></div><div><h3>Materials and Methods</h3><p>After applying our exclusion criteria on 7188 patients from three different centers, 3772 and 2248 patients were enrolled for the first and second analyses, respectively. Data were split into two parts, including training and testing, while a fraction of training data were considered for validation. Ten different CNN models were applied to single- and dual-view input (posterior and anterior views) modes to find the optimal model for each analysis. In addition, three different methods, including squeeze-and-excitation (SE), spatial pyramid pooling (SPP), and attention-augmented (AA), were used to aggregate the features for dual-view input models. Model performance was reported through area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity and was compared with the DeLong test applied to ROC curves. The test dataset was evaluated by three nuclear medicine physicians (NMPs) with different levels of experience to compare the performance of AI and human observers.</p></div><div><h3>Results</h3><p>DenseNet121_AA (DensNet121, with dual-view input aggregated by AA) and InceptionResNetV2_SPP achieved the highest performance (AUC = 0.72) for the first and second analyses, respectively. Moreover, on average, in the first analysis, Inception V3 and InceptionResNetV2 CNN models and dual-view input with AA aggregating method had superior performance. In addition, in the second analysis, DenseNet121 and InceptionResNetV2 as CNN methods and dual-view input with AA aggregating method achieved the best results. Conversely, the performance of AI models was significantly higher than human observers for the first analysis, whereas their performance was comparable in the second analysis, although the AI model assessed the scans in a drastically lower time.</p></div><div><h3>Conclusion</h3><p>Using the models designed in this study, a positive step can be taken toward improving and optimizing WBS interpretation. 
By training DL models with larger and more diverse cohorts, AI could potentially be used to assist physicians in the assessment of WBS images.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388923000089/pdfft?md5=40da3cacf80f682e80f4655f04f990de&pid=1-s2.0-S0939388923000089-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9133122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
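The dual-view models in this abstract aggregate anterior- and posterior-view features with SE, SPP or attention-augmented modules; as a deliberately simplified stand-in, the sketch below encodes both views with a shared DenseNet121 backbone and simply concatenates the pooled feature vectors before the classifier. The backbone choice, feature size, input resolution and class count are assumptions for the example, and plain concatenation replaces the paper's aggregation methods.

```python
# Hypothetical dual-view sketch: a shared CNN backbone encodes the anterior and
# posterior whole-body views, and the two pooled feature vectors are concatenated
# before the classifier. This uses the simplest possible fusion and is not the
# paper's SE/SPP/attention-augmented aggregation.
import torch
import torch.nn as nn
from torchvision import models

class DualViewClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.encoder = backbone.features                   # shared weights for both views
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2 * 1024, num_classes) # 1024 = DenseNet121 feature channels

    def forward(self, anterior: torch.Tensor, posterior: torch.Tensor) -> torch.Tensor:
        f_ant = self.pool(self.encoder(anterior)).flatten(1)
        f_post = self.pool(self.encoder(posterior)).flatten(1)
        return self.classifier(torch.cat([f_ant, f_post], dim=1))

model = DualViewClassifier(num_classes=2)
ant = torch.randn(2, 3, 224, 224)   # stand-ins for preprocessed anterior/posterior views
post = torch.randn(2, 3, 224, 224)
print(model(ant, post).shape)       # torch.Size([2, 2])
```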
Towards MR contrast independent synthetic CT generation
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/j.zemedi.2023.07.001
Attila Simkó , Mikael Bylund , Gustav Jönsson , Tommy Löfstedt , Anders Garpebring , Tufve Nyholm , Joakim Jonsson
{"title":"Towards MR contrast independent synthetic CT generation","authors":"Attila Simkó ,&nbsp;Mikael Bylund ,&nbsp;Gustav Jönsson ,&nbsp;Tommy Löfstedt ,&nbsp;Anders Garpebring ,&nbsp;Tufve Nyholm ,&nbsp;Joakim Jonsson","doi":"10.1016/j.zemedi.2023.07.001","DOIUrl":"10.1016/j.zemedi.2023.07.001","url":null,"abstract":"<div><p>The use of synthetic CT (sCT) in the radiotherapy workflow would reduce costs and scan time while removing the uncertainties around working with both MR and CT modalities. The performance of deep learning (DL) solutions for sCT generation is steadily increasing, however most proposed methods were trained and validated on private datasets of a single contrast from a single scanner. Such solutions might not perform equally well on other datasets, limiting their general usability and therefore value. Additionally, functional evaluations of sCTs such as dosimetric comparisons with CT-based dose calculations better show the impact of the methods, but the evaluations are more labor intensive than pixel-wise metrics.</p><p>To improve the generalization of an sCT model, we propose to incorporate a pre-trained DL model to pre-process the input MR images by generating artificial proton density, <span><math><mrow><mi>T</mi><mn>1</mn></mrow></math></span> and <span><math><mrow><mi>T</mi><mn>2</mn></mrow></math></span> maps (<em>i.e.</em> contrast-independent quantitative maps), which are then used for sCT generation. Using a dataset of only <span><math><mrow><mi>T</mi><mn>2</mn><mi>w</mi></mrow></math></span> MR images, the robustness towards input MR contrasts of this approach is compared to a model that was trained using the MR images directly. We evaluate the generated sCTs using pixel-wise metrics and calculating mean radiological depths, as an approximation of the mean delivered dose.</p><p>On <span><math><mrow><mi>T</mi><mn>2</mn><mi>w</mi></mrow></math></span> images acquired with the same settings as the training dataset, there was no significant difference between the performance of the models. However, when evaluated on <span><math><mrow><mi>T</mi><mn>1</mn><mi>w</mi></mrow></math></span> images, and a wide range of other contrasts and scanners from both public and private datasets, our approach outperforms the baseline model.</p><p>Using a dataset of <span><math><mrow><mi>T</mi><mn>2</mn><mi>w</mi></mrow></math></span> MR images, our proposed model implements synthetic quantitative maps to generate sCT images, improving the generalization towards other contrasts. Our code and trained models are publicly available.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388923000831/pdfft?md5=3e1f7674de1352aa91dfccab724c3a83&pid=1-s2.0-S0939388923000831-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9988159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
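The approach described here is a two-stage pipeline: a frozen, pre-trained network turns an MR image of arbitrary contrast into proton density, T1 and T2 maps, and a second network generates the sCT from those maps. The sketch below only wires up that structure with tiny placeholder convolutional stacks; it is not the authors' published model, and the layer choices are arbitrary assumptions.

```python
# Hypothetical sketch of the two-stage idea described above: a frozen, pre-trained
# network maps an MR image of arbitrary contrast to quantitative PD/T1/T2 maps,
# and a second network generates the synthetic CT from those maps. Both networks
# are placeholder stand-ins, not the authors' published models.
import torch
import torch.nn as nn

def tiny_conv_stack(in_ch: int, out_ch: int) -> nn.Module:
    # placeholder conv stack standing in for a real U-Net
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 1),
    )

qmap_model = tiny_conv_stack(in_ch=1, out_ch=3)   # MR -> (PD, T1, T2) maps
sct_model = tiny_conv_stack(in_ch=3, out_ch=1)    # quantitative maps -> sCT
for p in qmap_model.parameters():                 # the map generator stays frozen
    p.requires_grad = False

mr = torch.randn(1, 1, 256, 256)                  # any input contrast, e.g. T1w or T2w
with torch.no_grad():
    qmaps = qmap_model(mr)
sct = sct_model(qmaps)                            # only this stage would be trained
print(sct.shape)                                  # torch.Size([1, 1, 256, 256])
```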
Editorial Board + Consulting Editorial Board
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-05-01 DOI: 10.1016/S0939-3889(24)00034-5
{"title":"Editorial Board + Consulting Editorial Board","authors":"","doi":"10.1016/S0939-3889(24)00034-5","DOIUrl":"https://doi.org/10.1016/S0939-3889(24)00034-5","url":null,"abstract":"","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388924000345/pdfft?md5=874b26dd35f0095e923d023375e4842c&pid=1-s2.0-S0939388924000345-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Application of multi-method-multi-model inference to radiation related solid cancer excess risks models for astronaut risk assessment
IF 2.0 | CAS Q4 | Medicine
Zeitschrift fur Medizinische Physik Pub Date : 2024-02-01 DOI: 10.1016/j.zemedi.2023.06.003
Luana Hafner , Linda Walsh
{"title":"Application of multi-method-multi-model inference to radiation related solid cancer excess risks models for astronaut risk assessment","authors":"Luana Hafner ,&nbsp;Linda Walsh","doi":"10.1016/j.zemedi.2023.06.003","DOIUrl":"10.1016/j.zemedi.2023.06.003","url":null,"abstract":"<div><p>The impact of including model-averaged excess radiation risks (ER) into a measure of radiation attributed decrease of survival (RADS) for the outcome all solid cancer incidence and the impact on the uncertainties is demonstrated. It is shown that RADS applying weighted model averaged ER based on AIC weights result in smaller risk estimates with narrower 95% CI than RADS using ER based on BIC weights. Further a multi-method-multi-model inference approach is introduced that allows calculating one general RADS estimate providing a weighted average risk estimate for a lunar and a Mars mission. For males the general RADS estimate is found to be 0.42% (95% CI: 0.38%; 0.45%) and for females 0.67% (95% CI: 0.59%; 0.75%) for a lunar mission and 2.45% (95% CI: 2.23%; 2.67%) for males and 3.91% (95% CI: 3.44%; 4.39%) for females for a Mars mission considering an age at exposure of 40 years and an attained age of 65 years. It is recommended to include these types of uncertainties and to include model-averaged excess risks in astronaut risk assessment.</p></div>","PeriodicalId":54397,"journal":{"name":"Zeitschrift fur Medizinische Physik","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0939388923000806/pdfft?md5=c3e7e327440b0492d75125a9932acf05&pid=1-s2.0-S0939388923000806-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9769766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
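The AIC-based model averaging used here follows the standard Akaike-weight construction, w_i = exp(-0.5 * (AIC_i - AIC_min)) / sum_j exp(-0.5 * (AIC_j - AIC_min)). The sketch below applies it to invented AIC values and per-model excess-risk estimates purely to illustrate the mechanics; the numbers are not from the paper.

```python
# Minimal sketch of AIC-based model averaging as used for the excess-risk (ER)
# models above. The AIC values and per-model risk estimates are invented for
# illustration and are not values from the paper.
import numpy as np

aic = np.array([1012.3, 1014.1, 1017.8])   # hypothetical AICs of three ER models
er = np.array([0.41, 0.45, 0.52])          # hypothetical excess-risk estimates (%)

delta = aic - aic.min()                    # AIC differences to the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                   # Akaike weights, summing to 1

er_model_averaged = float(np.sum(weights * er))
print("Akaike weights:", np.round(weights, 3))
print(f"model-averaged excess risk = {er_model_averaged:.3f} %")
```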