Helin Huang, Zhenyi Ge, Hairui Wang, Jing Wu, Chunqiang Hu, Nan Li, Xiaomei Wu, Cuizhen Pan
{"title":"基于深度学习方法的超声心动图二尖瓣反流分类。","authors":"Helin Huang, Zhenyi Ge, Hairui Wang, Jing Wu, Chunqiang Hu, Nan Li, Xiaomei Wu, Cuizhen Pan","doi":"10.21037/qims-2025-120","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The classification of mitral regurgitation (MR) based on echocardiography is highly dependent on the expertise of specialized physicians and is often time-consuming. This study aims to develop an artificial intelligence (AI)-assisted decision-making system to improve the accuracy and efficiency of MR classification.</p><p><strong>Methods: </strong>We utilized 754 echocardiography videos from 266 subjects to develop an MR classification model. The dataset included 179 apical two-chamber (A2C), 206 apical three-chamber (A3C), and 369 apical four-chamber (A4C) view videos. A deep learning neural network, named ARMF-Net, was designed to classify MR into four types: normal mitral valve function (NM), degenerative mitral regurgitation (DMR), atrial functional mitral regurgitation (AFMR), and ventricular functional mitral regurgitation (VFMR). ARMF-Net incorporates three-dimensional (3D) convolutional residual modules, a multi-attention mechanism, and auxiliary feature fusion based on the segmentation results of the left atrium and left ventricle. The dataset was split into 639 videos for training and validation, with 115 videos reserved as an independent test set. Model performance was evaluated using precision and F1-score metrics.</p><p><strong>Results: </strong>At the video level, ARMF-Net achieved an overall precision of 0.93 on the test dataset. The precision for DMR, AFMR, VFMR, and NM was 0.886, 0.81, 1, and 1, respectively. At the participant level, the highest precision was 0.961, with precision values of 1.0, 1.0, 0.846, and 1.0 for DMR, AFMR, VFMR, and NM, respectively. The model can make classifications within seconds, significantly reducing the time and labor required for diagnosis.</p><p><strong>Conclusions: </strong>The proposed model can identify NM and three types of MR in echocardiography videos, providing a method for the automated auxiliary analysis and rapid screening of echocardiogram images in clinical practice.</p>","PeriodicalId":54267,"journal":{"name":"Quantitative Imaging in Medicine and Surgery","volume":"15 9","pages":"7847-7861"},"PeriodicalIF":2.3000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12397662/pdf/","citationCount":"0","resultStr":"{\"title\":\"Classification of mitral regurgitation in echocardiography based on deep learning methods.\",\"authors\":\"Helin Huang, Zhenyi Ge, Hairui Wang, Jing Wu, Chunqiang Hu, Nan Li, Xiaomei Wu, Cuizhen Pan\",\"doi\":\"10.21037/qims-2025-120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>The classification of mitral regurgitation (MR) based on echocardiography is highly dependent on the expertise of specialized physicians and is often time-consuming. This study aims to develop an artificial intelligence (AI)-assisted decision-making system to improve the accuracy and efficiency of MR classification.</p><p><strong>Methods: </strong>We utilized 754 echocardiography videos from 266 subjects to develop an MR classification model. The dataset included 179 apical two-chamber (A2C), 206 apical three-chamber (A3C), and 369 apical four-chamber (A4C) view videos. 
A deep learning neural network, named ARMF-Net, was designed to classify MR into four types: normal mitral valve function (NM), degenerative mitral regurgitation (DMR), atrial functional mitral regurgitation (AFMR), and ventricular functional mitral regurgitation (VFMR). ARMF-Net incorporates three-dimensional (3D) convolutional residual modules, a multi-attention mechanism, and auxiliary feature fusion based on the segmentation results of the left atrium and left ventricle. The dataset was split into 639 videos for training and validation, with 115 videos reserved as an independent test set. Model performance was evaluated using precision and F1-score metrics.</p><p><strong>Results: </strong>At the video level, ARMF-Net achieved an overall precision of 0.93 on the test dataset. The precision for DMR, AFMR, VFMR, and NM was 0.886, 0.81, 1, and 1, respectively. At the participant level, the highest precision was 0.961, with precision values of 1.0, 1.0, 0.846, and 1.0 for DMR, AFMR, VFMR, and NM, respectively. The model can make classifications within seconds, significantly reducing the time and labor required for diagnosis.</p><p><strong>Conclusions: </strong>The proposed model can identify NM and three types of MR in echocardiography videos, providing a method for the automated auxiliary analysis and rapid screening of echocardiogram images in clinical practice.</p>\",\"PeriodicalId\":54267,\"journal\":{\"name\":\"Quantitative Imaging in Medicine and Surgery\",\"volume\":\"15 9\",\"pages\":\"7847-7861\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12397662/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Quantitative Imaging in Medicine and Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.21037/qims-2025-120\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/8/11 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quantitative Imaging in Medicine and Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.21037/qims-2025-120","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/8/11 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Classification of mitral regurgitation in echocardiography based on deep learning methods.
Background: The classification of mitral regurgitation (MR) based on echocardiography is highly dependent on the expertise of specialized physicians and is often time-consuming. This study aims to develop an artificial intelligence (AI)-assisted decision-making system to improve the accuracy and efficiency of MR classification.
Methods: We utilized 754 echocardiography videos from 266 subjects to develop an MR classification model. The dataset included 179 apical two-chamber (A2C), 206 apical three-chamber (A3C), and 369 apical four-chamber (A4C) view videos. A deep learning neural network, named ARMF-Net, was designed to classify MR into four types: normal mitral valve function (NM), degenerative mitral regurgitation (DMR), atrial functional mitral regurgitation (AFMR), and ventricular functional mitral regurgitation (VFMR). ARMF-Net incorporates three-dimensional (3D) convolutional residual modules, a multi-attention mechanism, and auxiliary feature fusion based on the segmentation results of the left atrium and left ventricle. The dataset was split into 639 videos for training and validation, with 115 videos reserved as an independent test set. Model performance was evaluated using precision and F1-score metrics.
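The abstract does not give implementation details for ARMF-Net. As an illustration only, a minimal sketch of one of the 3D convolutional residual blocks mentioned in the Methods could look like the following PyTorch module; the class name Residual3DBlock, the layer sizes, and the input resolution are assumptions made for this example and are not taken from the paper.

```python
# Minimal sketch of a 3D convolutional residual block (illustrative assumption;
# not the authors' ARMF-Net code).
import torch
import torch.nn as nn


class Residual3DBlock(nn.Module):
    """Basic 3D residual block for echocardiography clips shaped
    (batch, channels, frames, height, width)."""

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_channels)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Project the identity path when the spatial size or channel count changes.
        self.shortcut = nn.Identity()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm3d(out_channels),
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))


if __name__ == "__main__":
    # One 16-frame grayscale clip at 112x112 resolution (sizes chosen for illustration).
    clip = torch.randn(1, 1, 16, 112, 112)
    block = Residual3DBlock(in_channels=1, out_channels=32, stride=2)
    print(block(clip).shape)  # torch.Size([1, 32, 8, 56, 56])
```

Stacking such blocks yields a spatiotemporal feature extractor; the multi-attention mechanism and the segmentation-based auxiliary feature fusion described above would be added on top of this backbone.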
Results: At the video level, ARMF-Net achieved an overall precision of 0.93 on the test dataset. The precision for DMR, AFMR, VFMR, and NM was 0.886, 0.81, 1.0, and 1.0, respectively. At the participant level, the highest precision was 0.961, with precision values of 1.0, 1.0, 0.846, and 1.0 for DMR, AFMR, VFMR, and NM, respectively. The model produces a classification within seconds, significantly reducing the time and labor required for diagnosis.
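For readers interpreting these figures, precision and F1-score follow their standard definitions (not restated in the abstract):

\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},
\]

where TP, FP, and FN denote the true-positive, false-positive, and false-negative counts for a given class.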
Conclusions: The proposed model can identify NM and three types of MR in echocardiography videos, providing a method for automated, assistive analysis and rapid screening of echocardiographic images in clinical practice.