Andrzej Liebert, Dominique Hadler, Chris Ehring, Hannes Schreiter, Luise Brock, Lorenz A Kapsner, Jessica Eberle, Ramona Erber, Julius Emons, Frederik B Laun, Michael Uder, Evelyn Wenkel, Sabine Ohlmeyer, Sebastian Bickelhaupt
{"title":"基于卷积神经网络的虚拟t2加权饱和脂肪乳腺MRI图像的可行性。","authors":"Andrzej Liebert, Dominique Hadler, Chris Ehring, Hannes Schreiter, Luise Brock, Lorenz A Kapsner, Jessica Eberle, Ramona Erber, Julius Emons, Frederik B Laun, Michael Uder, Evelyn Wenkel, Sabine Ohlmeyer, Sebastian Bickelhaupt","doi":"10.1186/s41747-025-00580-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Breast magnetic resonance imaging (MRI) protocols often include T2-weighted fat-saturated (T2w-FS) sequences, which support tissue characterization but significantly increase scan time. This study aims to evaluate whether a 2D-U-Net neural network can generate virtual T2w-FS (VirtuT2w) images from routine multiparametric breast MRI images.</p><p><strong>Methods: </strong>This IRB-approved, retrospective study included 914 breast MRI examinations from January 2017 to June 2020. The dataset was divided into training (n = 665), validation (n = 74), and test sets (n = 175). The U-Net was trained using different input protocols consisting of T1-weighted, diffusion-weighted, and dynamic contrast-enhanced sequences to generate VirtuT2. Quantitative metrics were used to evaluate the different input protocols. A qualitative assessment by two radiologists was used to evaluate the VirtuT2w images of the best input protocol.</p><p><strong>Results: </strong>VirtuT2w images demonstrated the best quantitative metrics compared to original T2w-FS images for an input protocol using all of the available data. A high level of high-frequency error norm (0.87) indicated a strong blurring presence in the VirtuT2 images, which was also confirmed by qualitative reading. Radiologists correctly identified VirtuT2 images with at least 96% accuracy. Significant difference in diagnostic image quality was noted for both readers (p ≤ 0.015). 
Moderate inter-reader agreement was observed for edema detection on both T2w-FS images (κ = 0.49) and VirtuT2 images (κ = 0.44).</p><p><strong>Conclusion: </strong>The 2D-U-Net generated virtual T2w-FS images similar to real T2w-FS images, though blurring remains a limitation. Investigation of other architectures and using larger datasets is necessary to improve potential future clinical applicability.</p><p><strong>Relevance statement: </strong>Generating VirtuT2 images could potentially decrease the examination time of multiparametric breast MRI, but its quality needs to improve before introduction into a clinical setting.</p><p><strong>Key points: </strong>Breast MRI T2w-fat-saturated (FS) images can be virtually generated using convolutional neural networks. Image blurring in virtual T2w-FS images currently limits their clinical applicability. Best quantitative performance could be achieved when using full dynamic-contrast-enhanced acquisition and DWI as input of the neural network.</p>","PeriodicalId":36926,"journal":{"name":"European Radiology Experimental","volume":"9 1","pages":"47"},"PeriodicalIF":3.7000,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12048370/pdf/","citationCount":"0","resultStr":"{\"title\":\"Feasibility of virtual T2-weighted fat-saturated breast MRI images by convolutional neural networks.\",\"authors\":\"Andrzej Liebert, Dominique Hadler, Chris Ehring, Hannes Schreiter, Luise Brock, Lorenz A Kapsner, Jessica Eberle, Ramona Erber, Julius Emons, Frederik B Laun, Michael Uder, Evelyn Wenkel, Sabine Ohlmeyer, Sebastian Bickelhaupt\",\"doi\":\"10.1186/s41747-025-00580-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Breast magnetic resonance imaging (MRI) protocols often include T2-weighted fat-saturated (T2w-FS) sequences, which support tissue characterization but significantly increase 
scan time. This study aims to evaluate whether a 2D-U-Net neural network can generate virtual T2w-FS (VirtuT2w) images from routine multiparametric breast MRI images.</p><p><strong>Methods: </strong>This IRB-approved, retrospective study included 914 breast MRI examinations from January 2017 to June 2020. The dataset was divided into training (n = 665), validation (n = 74), and test sets (n = 175). The U-Net was trained using different input protocols consisting of T1-weighted, diffusion-weighted, and dynamic contrast-enhanced sequences to generate VirtuT2. Quantitative metrics were used to evaluate the different input protocols. A qualitative assessment by two radiologists was used to evaluate the VirtuT2w images of the best input protocol.</p><p><strong>Results: </strong>VirtuT2w images demonstrated the best quantitative metrics compared to original T2w-FS images for an input protocol using all of the available data. A high level of high-frequency error norm (0.87) indicated a strong blurring presence in the VirtuT2 images, which was also confirmed by qualitative reading. Radiologists correctly identified VirtuT2 images with at least 96% accuracy. Significant difference in diagnostic image quality was noted for both readers (p ≤ 0.015). Moderate inter-reader agreement was observed for edema detection on both T2w-FS images (κ = 0.49) and VirtuT2 images (κ = 0.44).</p><p><strong>Conclusion: </strong>The 2D-U-Net generated virtual T2w-FS images similar to real T2w-FS images, though blurring remains a limitation. 
Investigation of other architectures and using larger datasets is necessary to improve potential future clinical applicability.</p><p><strong>Relevance statement: </strong>Generating VirtuT2 images could potentially decrease the examination time of multiparametric breast MRI, but its quality needs to improve before introduction into a clinical setting.</p><p><strong>Key points: </strong>Breast MRI T2w-fat-saturated (FS) images can be virtually generated using convolutional neural networks. Image blurring in virtual T2w-FS images currently limits their clinical applicability. Best quantitative performance could be achieved when using full dynamic-contrast-enhanced acquisition and DWI as input of the neural network.</p>\",\"PeriodicalId\":36926,\"journal\":{\"name\":\"European Radiology Experimental\",\"volume\":\"9 1\",\"pages\":\"47\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12048370/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Radiology Experimental\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s41747-025-00580-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Radiology Experimental","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s41747-025-00580-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Feasibility of virtual T2-weighted fat-saturated breast MRI images by convolutional neural networks.
Background: Breast magnetic resonance imaging (MRI) protocols often include T2-weighted fat-saturated (T2w-FS) sequences, which support tissue characterization but significantly increase scan time. This study aimed to evaluate whether a 2D U-Net neural network can generate virtual T2w-FS (VirtuT2) images from routine multiparametric breast MRI images.
Methods: This IRB-approved, retrospective study included 914 breast MRI examinations from January 2017 to June 2020. The dataset was divided into training (n = 665), validation (n = 74), and test (n = 175) sets. The U-Net was trained to generate VirtuT2 images using different input protocols consisting of T1-weighted, diffusion-weighted, and dynamic contrast-enhanced sequences. Quantitative metrics were used to compare the input protocols. Two radiologists then qualitatively assessed the VirtuT2 images from the best-performing input protocol.
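The abstract does not describe how the 914 examinations were assigned to the three sets; a minimal sketch assuming a random, examination-level split with the reported sizes (function name `split_exams` is illustrative, not from the paper):

```python
import random

def split_exams(exam_ids, n_train=665, n_val=74, n_test=175, seed=42):
    """Shuffle examination IDs and split into train/validation/test,
    mirroring the reported 665/74/175 split of the 914 examinations.
    Splitting at the examination level keeps all images of one exam
    in a single set, avoiding train/test leakage."""
    ids = list(exam_ids)
    assert n_train + n_val + n_test == len(ids)
    random.Random(seed).shuffle(ids)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

train, val, test = split_exams(range(914))
```

A fixed seed makes the split reproducible across training runs.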
Results: VirtuT2 images showed the best quantitative agreement with the original T2w-FS images when the input protocol used all available sequences. A high high-frequency error norm (HFEN, 0.87) indicated strong blurring in the VirtuT2 images, which was also confirmed by the qualitative reading. Radiologists correctly identified VirtuT2 images with at least 96% accuracy. A significant difference in diagnostic image quality was noted for both readers (p ≤ 0.015). Moderate inter-reader agreement was observed for edema detection on both T2w-FS images (κ = 0.49) and VirtuT2 images (κ = 0.44).
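The two reported statistics can be illustrated with short metric implementations. HFEN is commonly computed as the relative L2 distance between Laplacian-of-Gaussian filtered images, and Cohen's κ measures inter-reader agreement beyond chance; the paper does not state the exact HFEN filter width, so `sigma=1.5` below is an assumption, and both function names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def hfen(pred, ref, sigma=1.5):
    """High-frequency error norm: relative L2 distance between
    Laplacian-of-Gaussian filtered images. Near 0 when fine detail
    matches the reference; values toward 1 indicate blurring."""
    log_pred = gaussian_laplace(np.asarray(pred, float), sigma)
    log_ref = gaussian_laplace(np.asarray(ref, float), sigma)
    return np.linalg.norm(log_pred - log_ref) / np.linalg.norm(log_ref)

def cohen_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical ratings
    (e.g. edema present/absent)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)  # observed agreement
    labels = np.union1d(r1, r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (po - pe) / (1 - pe)

# Blurring an image raises its HFEN against the sharp original.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
blurred = gaussian_filter(ref, 2.0)  # simulated loss of fine detail
```

Under this reading, the reported HFEN of 0.87 means the VirtuT2 images lose most of the high-frequency content of the real T2w-FS images, consistent with the blurring seen by the readers.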
Conclusion: The 2D U-Net generated virtual T2w-FS images similar to real T2w-FS images, though blurring remains a limitation. Investigating other architectures and training on larger datasets will be necessary to improve potential future clinical applicability.
Relevance statement: Generating VirtuT2 images could potentially decrease the examination time of multiparametric breast MRI, but image quality needs to improve before introduction into a clinical setting.
Key points: Breast MRI T2w fat-saturated (FS) images can be virtually generated using convolutional neural networks. Image blurring in virtual T2w-FS images currently limits their clinical applicability. The best quantitative performance was achieved when the full dynamic contrast-enhanced acquisition and DWI were used as inputs to the neural network.