Hierarchical Vision Transformers for Cardiac Ejection Fraction Estimation
Lhuqita Fazry, Asep Haryono, Nuzulul Khairu Nissa, Sunarno, Naufal Muhammad Hirzi, M. F. Rachmadi, W. Jatmiko
2022 7th International Workshop on Big Data and Information Security (IWBIS), October 2022. DOI: 10.1109/IWBIS56557.2022.9924664
The left ventricular ejection fraction is one of the most important metrics of cardiac function. Cardiologists use it to identify patients who are eligible for life-prolonging therapies. However, the assessment of ejection fraction suffers from inter-observer variability. To overcome this challenge, we propose a deep learning approach based on hierarchical vision Transformers to estimate the ejection fraction from echocardiogram videos. The proposed method estimates the ejection fraction without first segmenting the left ventricle, which makes it more efficient than other methods. We evaluated our method on the EchoNet-Dynamic dataset, obtaining an MAE of 5.59, an RMSE of 7.59, and an R2 of 0.59. These results improve on the state-of-the-art method, the Ultrasound Video Transformer (UVT). The source code is available at https://github.com/lhfazry/UltraSwin.
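For a concrete picture of the segmentation-free regression setup described above, the sketch below attaches a single-output regression head to a hierarchical video Transformer backbone and scores predictions with the same metrics the abstract reports (MAE, RMSE, R2). This is a minimal illustration rather than the authors' UltraSwin implementation: torchvision's Swin3D-Tiny is assumed as a stand-in backbone, and all clips and labels are random placeholders.

```python
# Minimal sketch: ejection-fraction regression from an echocardiogram clip with a
# hierarchical video Transformer. Assumes torchvision >= 0.15 (Swin3D); this is a
# stand-in for UltraSwin, not the paper's actual model or training recipe.
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from torchvision.models.video import swin3d_t

# Hierarchical (Swin) video Transformer backbone with a 1-unit regression head,
# so the network outputs the ejection fraction directly -- no LV segmentation step.
model = swin3d_t(weights=None)
model.head = nn.Linear(model.head.in_features, 1)

# Placeholder echocardiogram clips: (batch, channels, frames, height, width).
clips = torch.randn(4, 3, 16, 224, 224)
ef_true = torch.tensor([55.0, 62.0, 40.0, 70.0])   # ground-truth EF in percent

ef_pred = model(clips).squeeze(-1)                  # one EF estimate per clip
loss = nn.functional.mse_loss(ef_pred, ef_true)     # simple regression objective
loss.backward()

# Evaluation with the metrics reported in the abstract (values here are dummies,
# not the paper's results).
y_true, y_pred = ef_true.numpy(), ef_pred.detach().numpy()
mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R2={r2:.2f}")
```

Swapping the classification head for a single linear output is what turns the video backbone into an end-to-end EF regressor, mirroring the paper's point that no intermediate left-ventricle segmentation is required.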