Mengjie Gu, Yingying Liu, Yuanyuan Sheng, Mingchuan Zhang, Junqiang Yan, Lin Wang, Junlong Zhu
{"title":"基于深度学习的经胸超声造影对卵圆孔未闭的智能分级体系","authors":"Mengjie Gu , Yingying Liu , Yuanyuan Sheng , Mingchuan Zhang , Junqiang Yan , Lin Wang , Junlong Zhu","doi":"10.1016/j.compmedimag.2025.102538","DOIUrl":null,"url":null,"abstract":"<div><div>Patent foramen ovale (PFO) is one of the main causes of ischemic stroke. Due to the complex characteristics of contrast transthoracic echocardiography (cTTE), PFO classification is time-consuming and laborious in clinical practice. For this reason, a variety of PFO diagnostic models have been presented based on machine learning in recent years. However, existing models have lower diagnostic accuracy due to similar gray values of microbubbles and surrounding myocardial tissue in cTTE. Meanwhile, the greater volume of right-to-left shunt (RLS) volume leads to a higher incidence of migraine and stroke. Existing models do not quantify the severity of RLS, which affects the use of treatment methods in later clinical treatment. To solve these problems, we propose TVUNet++ for left ventricular segmentation and ULSAM-ResNet for PFO classification. More specifically, TVUNet++ can distinguish various local features in cTTE through learnable affinity maps and implicitly capture the semantic relationship between the left heart cavity and the background region. In addition, we provide a benchmark cTTE dataset to evaluate the performance of the proposed model through various experiments. Experimental results show that the average Dice Coefficient of the proposed model can reach 92.11%. Moreover, ULSAM-ResNet can realize multi-scale and multi-frequency feature learning through multiple subspaces and learn cross-channel information for accurate grade classification efficiently. The average recall of static cTTE can reach 84.27%. Furthermore, the proposed model outperforms state-of-the-art models in the grade classification of PFO.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"123 ","pages":"Article 102538"},"PeriodicalIF":5.4000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A novel intelligent grade classification architecture for Patent Foramen Ovale by Contrast Transthoracic Echocardiography based on deep learning\",\"authors\":\"Mengjie Gu , Yingying Liu , Yuanyuan Sheng , Mingchuan Zhang , Junqiang Yan , Lin Wang , Junlong Zhu\",\"doi\":\"10.1016/j.compmedimag.2025.102538\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Patent foramen ovale (PFO) is one of the main causes of ischemic stroke. Due to the complex characteristics of contrast transthoracic echocardiography (cTTE), PFO classification is time-consuming and laborious in clinical practice. For this reason, a variety of PFO diagnostic models have been presented based on machine learning in recent years. However, existing models have lower diagnostic accuracy due to similar gray values of microbubbles and surrounding myocardial tissue in cTTE. Meanwhile, the greater volume of right-to-left shunt (RLS) volume leads to a higher incidence of migraine and stroke. Existing models do not quantify the severity of RLS, which affects the use of treatment methods in later clinical treatment. To solve these problems, we propose TVUNet++ for left ventricular segmentation and ULSAM-ResNet for PFO classification. 
More specifically, TVUNet++ can distinguish various local features in cTTE through learnable affinity maps and implicitly capture the semantic relationship between the left heart cavity and the background region. In addition, we provide a benchmark cTTE dataset to evaluate the performance of the proposed model through various experiments. Experimental results show that the average Dice Coefficient of the proposed model can reach 92.11%. Moreover, ULSAM-ResNet can realize multi-scale and multi-frequency feature learning through multiple subspaces and learn cross-channel information for accurate grade classification efficiently. The average recall of static cTTE can reach 84.27%. Furthermore, the proposed model outperforms state-of-the-art models in the grade classification of PFO.</div></div>\",\"PeriodicalId\":50631,\"journal\":{\"name\":\"Computerized Medical Imaging and Graphics\",\"volume\":\"123 \",\"pages\":\"Article 102538\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2025-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computerized Medical Imaging and Graphics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0895611125000473\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611125000473","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
A novel intelligent grade classification architecture for Patent Foramen Ovale by Contrast Transthoracic Echocardiography based on deep learning
Patent foramen ovale (PFO) is one of the main causes of ischemic stroke. Because of the complex characteristics of contrast transthoracic echocardiography (cTTE), PFO classification is time-consuming and laborious in clinical practice. For this reason, a variety of machine-learning-based PFO diagnostic models have been presented in recent years. However, existing models suffer from low diagnostic accuracy because microbubbles and the surrounding myocardial tissue have similar gray values in cTTE. Meanwhile, a larger right-to-left shunt (RLS) volume is associated with a higher incidence of migraine and stroke, yet existing models do not quantify RLS severity, which limits the choice of treatment strategies in subsequent clinical management. To solve these problems, we propose TVUNet++ for left ventricular segmentation and ULSAM-ResNet for PFO grade classification. More specifically, TVUNet++ distinguishes various local features in cTTE through learnable affinity maps and implicitly captures the semantic relationship between the left heart cavity and the background region. In addition, we provide a benchmark cTTE dataset and evaluate the proposed models through various experiments. Experimental results show that the average Dice coefficient of the proposed segmentation model reaches 92.11%. Moreover, ULSAM-ResNet performs multi-scale and multi-frequency feature learning through multiple subspaces and efficiently exploits cross-channel information for accurate grade classification, achieving an average recall of 84.27% on static cTTE. Furthermore, the proposed model outperforms state-of-the-art models in the grade classification of PFO.
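For readers unfamiliar with the reported metrics, the sketch below shows how the Dice coefficient (segmentation overlap for the left ventricular mask) and macro-averaged recall (PFO grade classification) are conventionally computed. This is a minimal illustration of the standard definitions, not the authors' evaluation code; all function and variable names, array shapes, and the toy data are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

def macro_recall(pred_labels: np.ndarray, gt_labels: np.ndarray, num_classes: int) -> float:
    """Macro-averaged recall over grade labels: mean of TP / (TP + FN) per class."""
    per_class = []
    for c in range(num_classes):
        tp = np.sum((pred_labels == c) & (gt_labels == c))
        fn = np.sum((pred_labels != c) & (gt_labels == c))
        if tp + fn > 0:  # skip classes absent from the ground truth
            per_class.append(tp / (tp + fn))
    return float(np.mean(per_class))

# Illustrative usage with toy data (not from the paper's dataset):
pred_mask = np.zeros((256, 256), dtype=np.uint8); pred_mask[60:180, 70:190] = 1
gt_mask = np.zeros((256, 256), dtype=np.uint8); gt_mask[64:184, 72:192] = 1
print(f"Dice: {dice_coefficient(pred_mask, gt_mask):.4f}")

pred_grades = np.array([0, 1, 2, 2, 3, 1])
gt_grades = np.array([0, 1, 2, 3, 3, 0])
print(f"Macro recall: {macro_recall(pred_grades, gt_grades, num_classes=4):.4f}")
```

The reported 92.11% and 84.27% correspond to averages of these quantities over the test set; the exact averaging scheme (per image or per class) follows the paper, not this sketch.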
Journal introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.