Robert B. Labs, Apostolos Vrettos, Jonathan Loo, Massoud Zolgharni
Intelligent Medicine, Volume 3, Issue 3, August 2023, Pages 191–199
DOI: 10.1016/j.imed.2022.08.001
Automated assessment of transthoracic echocardiogram image quality using deep neural networks
Background
Standard views in two-dimensional echocardiography are well established, but the quality of acquired images depends heavily on operator skill and is assessed subjectively. This study aimed to provide an objective assessment pipeline for echocardiogram image quality by defining a new set of domain-specific quality indicators. Image quality assessment can thus be automated to enhance clinical measurement, interpretation, and real-time optimization.
Methods
We developed deep neural networks for the automated assessment of echocardiographic frames randomly sampled from 11,262 adult patients. The private echocardiography dataset consists of 33,784 frames acquired between 2010 and 2020. Unlike non-medical images, to which full-reference quality metrics can be applied, echocardiographic data are highly heterogeneous and require blind (no-reference) image quality assessment (IQA) metrics. Deep learning approaches were therefore used to extract spatiotemporal features, and the image quality indicators were evaluated against the mean absolute error. Our quality indicators encapsulate both anatomical and pathological elements to provide multivariate assessment scores for anatomical visibility, clarity, depth-gain, and foreshortedness.
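The evaluation step described above can be illustrated with a minimal sketch: predicted quality scores for the four indicators are compared against expert-assigned ground truth via the mean absolute error. The network architecture, score ranges, and example values below are illustrative assumptions, not details taken from the paper.

```python
def mean_absolute_error(predicted, expert):
    """Compare blind (no-reference) quality scores against
    expert-assigned ground-truth scores via mean absolute error."""
    if len(predicted) != len(expert):
        raise ValueError("score lists must have the same length")
    return sum(abs(p - e) for p, e in zip(predicted, expert)) / len(predicted)

# Hypothetical per-frame scores for the four quality indicators
# (anatomical visibility, clarity, depth-gain, foreshortedness),
# each on an illustrative 0-5 scale.
predicted = [4.1, 3.8, 4.6, 2.9]
expert = [4.0, 4.0, 4.5, 3.0]
print(mean_absolute_error(predicted, expert))  # approximately 0.125
```

In practice the predicted scores would come from the deep network's regression heads, and the MAE would be averaged over a held-out validation set rather than a single frame.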
Results
The model achieved accuracies of 94.4%, 96.8%, 96.2%, and 97.4% for anatomical visibility, clarity, depth-gain, and foreshortedness, respectively. A mean model error of 0.375±0.0052 and a computational speed of 2.52 ms per frame (real-time performance) were achieved.
Conclusion
The novel approach offers new insight into the objective assessment of transthoracic echocardiogram image quality and clinical quantification in apical four-chamber (A4C) and parasternal long-axis (PLAX) views. It also lays a stronger foundation for an operator guidance system that can shorten the learning curve for acquiring optimum-quality images during transthoracic examination.