Frontiers in Radiology | Pub Date: 2023-08-08 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1241651
Joseph M Rich, Lokesh N Bhardwaj, Aman Shah, Krish Gangal, Mohitha S Rapaka, Assad A Oberai, Brandon K K Fields, George R Matcuk, Vinay A Duddalwar
Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis.

Introduction: Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but the task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on computed tomography (CT), magnetic resonance imaging (MRI), and positron-emission tomography/CT (PET/CT).

Method: The literature search for deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in the PubMed, Embase, Web of Science, and Scopus electronic databases, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 41 original articles published between February 2017 and March 2023 were included in the review.

Results: The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. Papers were relatively evenly distributed between primary and secondary malignancies, and between 3-dimensional and 2-dimensional data. Many papers used custom-built models derived as modifications or variations of U-Net. The most common evaluation metric was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9.

Discussion: Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Commonly applied strategies to improve performance include data augmentation, use of large public datasets, preprocessing (including denoising and cropping), and modification of the U-Net architecture. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10442705/pdf/
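The Dice similarity coefficient used as the primary metric in the reviewed studies has a simple closed form, DSC = 2|A∩B| / (|A| + |B|), for two binary masks A and B. A minimal NumPy sketch, using toy masks rather than any data from the review:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 2D example: two overlapping 4x4 square "lesions" on a 10x10 grid
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1    # 16 pixels
truth = np.zeros((10, 10)); truth[3:7, 3:7] = 1  # 16 pixels, 9 overlapping
print(round(dice_coefficient(pred, truth), 4))   # 2*9/(16+16) = 0.5625
```

The same formula applies unchanged to 3D voxel masks, which is why DSC serves as a common yardstick across the 2D and 3D studies in the review.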
Frontiers in Radiology | Pub Date: 2023-06-22 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1088068
Ricardo Bigolin Lanfredi, Joyce D Schroeder, Tolga Tasdizen
Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation.

Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Conveniently, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model's interpretability without impacting its image-level classification performance.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365091/pdf/
Frontiers in Radiology | Pub Date: 2023-06-02 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1144004
Yitong Yang, Zahraw Shah, Athira J Jacob, Jackson Hair, Teodora Chitiboi, Tiziano Passerini, Jerome Yerly, Lorenzo Di Sopra, Davide Piccini, Zahra Hosseini, Puneet Sharma, Anurag Sahu, Matthias Stuber, John N Oshinski
Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions.

Introduction: Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis, in particular for delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy, including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network, trained primarily on computed tomography (CT) images, on two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Because previous studies showed Mres images to have greater image quality than Mcorr images, we hypothesized that LV volumes segmented from Mres images would be closer to the expert-traced LV endocardial border than those segmented from Mcorr images.

Method: This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady-state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) between the automatically and manually segmented LV volumes was the primary quantity used to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image, and the sharpness of the LV myocardium-blood pool interface. A two-tailed paired Student's t-test (alpha = 0.05) was used to test significance.

Results & discussion: The AVD was lower in the Mres reconstruction than in the Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (n = 15, p = 0.03). The 3D Dice coefficient between the DL-segmented and manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03, respectively (p = 0.02). Sharpness was higher on Mres images than on Mcorr images: 0.15 ± 0.05 vs. 0.12 ± 0.04, respectively (n = 15, p = 0.014).

Conclusion: We conclude that the DL-based 3D automatic LV segmentation network, trained on CT images and fine-tuned on MR images, generalized better on Mres images than on Mcorr images for quantifying LV volumes.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365088/pdf/
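The statistical comparison this study describes (a two-tailed paired Student's t-test on per-patient values, alpha = 0.05) maps directly onto `scipy.stats.ttest_rel`. A sketch on synthetic stand-in AVD values, not the study's actual measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient absolute volume differences (ml) for 15 patients
# under each reconstruction; these values are illustrative only.
avd_mres = rng.normal(7.7, 6.5, size=15).clip(min=0.0)
avd_mcorr = avd_mres + rng.normal(12.0, 8.0, size=15)

# Two-tailed paired Student's t-test, as in the study design (alpha = 0.05)
t_stat, p_value = stats.ttest_rel(avd_mres, avd_mcorr)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```

The paired form is the right choice here because both reconstructions come from the same 15 patients, so each Mres value has a matched Mcorr value.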
Frontiers in Radiology | Pub Date: 2023-05-22 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1180699
Anjali Agrawal
Digital transformation of career landscapes in radiology: personal and professional implications.

Millennial radiology is marked by technical disruptions. Advances in the internet, digital communications, and computing technology paved the way for digitalized workflow orchestration of busy radiology departments. The COVID pandemic brought teleradiology to the forefront, highlighting its importance in maintaining continuity of radiological services and making it an integral component of radiology practice. Increasing computing power and integrated multimodal data are driving the incorporation of artificial intelligence at various stages of the radiology imaging and reporting cycle. These advances have transformed, and will continue to transform, the career landscape in radiology, offering more options for radiologists with varied interests and career goals. The ability to work from anywhere at any time needs to be balanced with other aspects of life. Robust communication, internal and external collaboration, self-discipline, and self-motivation are key to achieving the desired balance while practicing radiology in unconventional ways.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10364979/pdf/
Frontiers in Radiology | Pub Date: 2023-05-15 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1155866
Zifei Liang, Jiangyang Zhang
Mouse brain MR super-resolution using a deep learning network trained with optical imaging data.

Introduction: The resolution of magnetic resonance imaging (MRI) is often limited to the millimeter level due to its inherent signal-to-noise disadvantage compared with other imaging modalities. Super-resolution (SR) of MRI data aims to enhance its resolution and diagnostic value. While deep learning-based SR has shown potential, its applications in MRI remain limited, especially in preclinical MRI, where large high-resolution datasets for training are often lacking.

Methods: In this study, we first used high-resolution mouse brain auto-fluorescence (AF) data acquired with serial two-photon tomography (STPT) to examine the performance of deep learning-based SR for mouse brain images.

Results: We found that the best SR performance was obtained when the resolutions of the training and target data were matched. We then applied the network trained on AF data to MRI data of the mouse brain and found that the performance of the SR network depended on the tissue contrast presented in the MRI data. Using transfer learning and a limited set of high-resolution mouse brain MRI data, we were able to fine-tune the initial AF-trained network to enhance the resolution of MRI data.

Discussion: Our results suggest that deep learning SR networks trained on high-resolution data of a different modality can be applied to MRI data after transfer learning.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365285/pdf/
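The transfer-learning recipe described here — pretrain on a source modality, then fine-tune part of the network on a small target-modality set — can be illustrated in miniature with a two-layer linear model in NumPy. This is a deliberately toy analogy, not the study's deep SR network: the "pretrained" layer `W1` stays frozen and only the head `W2` is updated, on purely synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny linear "network" y = x @ W1 @ W2 standing in for an SR model.
# W1 plays the role of features pretrained on the source modality (AF)
# and is frozen; only the head W2 is fine-tuned on a small
# target-modality (MRI) training set.
W1 = rng.normal(size=(8, 4))            # "pretrained" layer, frozen
W2 = rng.normal(size=(4, 1)) * 0.1      # head to fine-tune

X = rng.normal(size=(32, 8))            # small target-domain training set
y_true = X @ W1 @ np.array([[1.0], [-0.5], [0.3], [0.2]])

mse0 = float(np.mean((X @ W1 @ W2 - y_true) ** 2))  # error before fine-tuning

lr = 0.01
for _ in range(300):                    # gradient descent on the head only
    h = X @ W1                          # frozen feature activations
    grad = h.T @ (h @ W2 - y_true) / len(X)
    W2 -= lr * grad

mse = float(np.mean((X @ W1 @ W2 - y_true) ** 2))
print(f"MSE before fine-tuning: {mse0:.3f}, after: {mse:.3f}")
```

The point of the sketch is the mechanics: freezing the pretrained weights confines learning to a small parameter set, which is what makes fine-tuning feasible with the limited high-resolution MRI data the study mentions.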
Frontiers in Radiology | Pub Date: 2023-04-18 | DOI: 10.3389/fradi.2023.1153784
Jiaojiao Wu, Yuwei Xia, Xuechun Wang, Ying Wei, Aie Liu, Arun Innanje, Meng Zheng, Lei Chen, Jing Shi, Liye Wang, Yiqiang Zhan, Xiang Sean Zhou, Zhong Xue, Feng Shi, Dinggang Shen

uRP: An integrated research platform for one-stop analysis of medical images.

Introduction: Medical image analysis is of tremendous importance for clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible.

Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable.

Results and discussion: The uRP offers three advantages: (1) it spans a wealth of algorithms for image processing, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, realizing a one-stop analytic pipeline; (2) it integrates a variety of functional modules that can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee-joint analyses; and (3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365282/pdf/
Frontiers in Radiology | Pub Date: 2023-03-20 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1151258
Eros Montin, Richard Kijowski, Thomas Youm, Riccardo Lattanzi
A radiomics approach to the diagnosis of femoroacetabular impingement.

Introduction: Femoroacetabular impingement (FAI) is a hip pathology characterized by impingement of the femoral head-neck junction against the acetabular rim, due to abnormalities in bone morphology. FAI is normally diagnosed by manual evaluation of morphologic features on magnetic resonance imaging (MRI). In this study, we assess, for the first time, the feasibility of using radiomics to detect FAI by automatically extracting quantitative features from images.

Materials and methods: Seventeen patients diagnosed with monolateral FAI underwent pre-surgical MR imaging, including a 3D Dixon sequence of the pelvis. An expert radiologist drew regions of interest on the water-only Dixon images, outlining the femur and acetabulum in both the impingement joint (IJ) and the healthy joint (HJ). For each hip, 182 radiomic features were extracted. The dataset was augmented 60-fold with an ad-hoc data augmentation tool. Features were subdivided by type and region into 24 subsets. For each, a univariate ANOVA F-value analysis was applied to find the 5 features most correlated with the IJ based on p-value, for a total of 48 subsets. For each subset, a K-nearest neighbor model was trained to differentiate between IJ and HJ using the values of the radiomic features in the subset as input. Training was repeated 100 times, randomly subdividing the data into 75% training and 25% testing.

Results: The texture-based gray-level features yielded the highest maximum prediction accuracy (0.972) with the smallest subset of features. This suggests that the gray image values are more homogeneously distributed in the HJ than in the IJ, which could be due to stress-related inflammation resulting from impingement.

Conclusions: We showed that radiomics can automatically distinguish the IJ from the HJ using water-only Dixon MRI. To our knowledge, this is the first application of radiomics to FAI diagnosis. We report an accuracy greater than 97%, higher than the 90% accuracy reported for standard diagnostic tests for FAI. Our proposed radiomic analysis could be combined with methods for automated joint segmentation to rapidly identify patients with FAI, avoiding time-consuming radiological measurements of bone morphology.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365279/pdf/
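The pipeline this abstract describes — univariate ANOVA F-value feature selection followed by a K-nearest-neighbor classifier on a 75%/25% split — maps directly onto standard scikit-learn components. A sketch on synthetic data; the feature matrix below is a made-up stand-in for the 182 radiomic features, not the study's dataset:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for a radiomics table: 40 hips x 20 features,
# label 1 = impingement joint (IJ), 0 = healthy joint (HJ).
X = rng.normal(size=(40, 20))
y = rng.integers(0, 2, size=40)
X[y == 1, :3] += 2.0  # make three features informative, so selection can find them

# Univariate ANOVA F-test: keep the 5 features most associated with the label
selector = SelectKBest(f_classif, k=5).fit(X, y)
X_sel = selector.transform(X)

# 75%/25% train/test split and a K-nearest-neighbor classifier, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25, random_state=0)
acc = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr).score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Repeating the split-and-train step 100 times with different random seeds, as the authors did, turns the single accuracy above into a distribution from which a maximum (their 0.972) can be reported.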
Frontiers in Radiology | Pub Date: 2023-01-30 | eCollection Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1112841
Ahmed Maiter, Mahan Salehi, Andrew J Swift, Samer Alabed
How should studies using AI be reported? Lessons from a systematic review in cardiac MRI.

Recent years have seen a dramatic increase in studies presenting artificial intelligence (AI) tools for cardiac imaging. Among these are AI tools that undertake segmentation of structures on cardiac MRI (CMR), an essential step in obtaining clinically relevant functional information. The quality of reporting of these studies carries significant implications for the advancement of the field and the translation of AI tools to clinical practice. We recently undertook a systematic review to evaluate the quality of reporting of studies presenting automated approaches to segmentation in cardiac MRI (Alabed et al. 2022. Quality of reporting in AI cardiac MRI segmentation studies: a systematic review and recommendations for future studies. Frontiers in Cardiovascular Medicine 9:956811). A total of 209 studies were assessed for compliance with the Checklist for AI in Medical Imaging (CLAIM), a reporting framework. We found variable, and sometimes poor, quality of reporting, and identified significant and frequently missing information in publications. Compliance with CLAIM was high for descriptions of models (100%, IQR 80%-100%) but lower than expected for descriptions of study design (71%, IQR 63%-86%), datasets used in training and testing (63%, IQR 50%-67%), and model performance (60%, IQR 50%-70%). Here, we present a summary of our key findings, aimed at general readers who may not be experts in AI, and use them as a framework to discuss the factors determining the quality of reporting, making recommendations for improving the reporting of research in this field. We aim to assist researchers in presenting their work and readers in their appraisal of evidence. Finally, we emphasise the need for close scrutiny of studies presenting AI tools, even in the face of the excitement surrounding AI in cardiac imaging.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10364997/pdf/
Frontiers in Radiology | Pub Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1242902
Patrick Debs, Laura M Fayad

The promise and limitations of artificial intelligence in musculoskeletal imaging.

With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks, such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle, and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10440743/pdf/
Frontiers in Radiology | Pub Date: 2023-01-01 | DOI: 10.3389/fradi.2023.1167901
Simone Bongiovanni, Marco Bozzolo, Simone Amabile, Enrico Peano, Alberto Balderi
Case report: ultrasound-assisted catheter-directed thrombolysis of an embolic partial occlusion of the superior mesenteric artery.

Acute mesenteric ischemia (AMI) is a severe medical condition defined by insufficient vascular supply to the small bowel through the mesenteric vessels, resulting in necrosis and eventual gangrene of the bowel wall. We present the case of a 64-year-old man with recrudescence of prolonged epigastric pain at rest of a few hours' duration, cold sweating, and episodes of vomiting. A computed tomography scan of his abdomen revealed multiple filling defects in the mid-distal superior mesenteric artery (SMA) and the proximal jejunal branches, associated with small-bowel wall thickening, suggesting SMA thromboembolism and early intestinal ischemia. Given the absence of signs of peritonitis on abdominal examination and the presence of multiple arterial emboli, it was decided to perform endovascular treatment with ultrasound-assisted catheter-directed thrombolysis using the EkoSonic Endovascular System (EKOS), which resulted in complete dissolution of the multiple emboli and improved blood flow to the intestinal wall. The day after the procedure the patient's pain improved significantly, and 5 days later he was discharged home asymptomatic on warfarin anticoagulation. After 1 year of follow-up the patient is well, with no further episodes of mesenteric ischemia or other embolic events.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10365118/pdf/