Soufiane Dangoury, Mohammed Sadik, A. Alali, Abderrahim Fail
{"title":"二维超声图像分割的V-net性能","authors":"Soufiane Dangoury, Mohammed Sadik, A. Alali, Abderrahim Fail","doi":"10.1109/CSPA55076.2022.9781973","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) has conquered all areas of human being life through its performance when it is adapted to a particular domain. Nowadays, different research papers are interested in the application of AI in medical area for ultrasound imaging. Hence, the most important task in medical field is imaging and image segmentation since it helps doctors to perform accurate diagnosis and therefore to prescribe the right treatment. In this paper, we study the image segmentation to improve the visualization and quantification of different image regions. To this end we propose the implementation of a 2D version of V-net architecture. The results are compared to the popular medical’s imaging algorithm U-net and its variation U-net++. The performance of our results is validated by the widely used metrics in segmentation field which are Dice coefficient, Sensitivity, Specificity and Accuracy. In addition, losses function has a high influence on training models. Therefore, our model will be experimented under different losses such as function Cross-Entropy, Dice-Similarity-loss, Focal loss and Focal Tversky loss to end up with the good cases for a training model. Extensive simulation of the proposed V-net model shows an improvement of 85.01% in Dice Coefficient, 85% in terms of sensitivity, 99% in specificity and 99% in accuracy.","PeriodicalId":174315,"journal":{"name":"2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"V-net Performances for 2D Ultrasound Image Segmentation\",\"authors\":\"Soufiane Dangoury, Mohammed Sadik, A. 
Alali, Abderrahim Fail\",\"doi\":\"10.1109/CSPA55076.2022.9781973\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) has conquered all areas of human being life through its performance when it is adapted to a particular domain. Nowadays, different research papers are interested in the application of AI in medical area for ultrasound imaging. Hence, the most important task in medical field is imaging and image segmentation since it helps doctors to perform accurate diagnosis and therefore to prescribe the right treatment. In this paper, we study the image segmentation to improve the visualization and quantification of different image regions. To this end we propose the implementation of a 2D version of V-net architecture. The results are compared to the popular medical’s imaging algorithm U-net and its variation U-net++. The performance of our results is validated by the widely used metrics in segmentation field which are Dice coefficient, Sensitivity, Specificity and Accuracy. In addition, losses function has a high influence on training models. Therefore, our model will be experimented under different losses such as function Cross-Entropy, Dice-Similarity-loss, Focal loss and Focal Tversky loss to end up with the good cases for a training model. 
Extensive simulation of the proposed V-net model shows an improvement of 85.01% in Dice Coefficient, 85% in terms of sensitivity, 99% in specificity and 99% in accuracy.\",\"PeriodicalId\":174315,\"journal\":{\"name\":\"2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA)\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSPA55076.2022.9781973\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 18th International Colloquium on Signal Processing & Applications (CSPA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSPA55076.2022.9781973","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
V-net Performances for 2D Ultrasound Image Segmentation
Artificial intelligence (AI) has reached nearly every area of human life through its performance when adapted to a particular domain. Nowadays, many research papers investigate the application of AI to ultrasound imaging in medicine. Imaging and image segmentation are among the most important tasks in the medical field, since they help doctors perform accurate diagnoses and therefore prescribe the right treatment. In this paper, we study image segmentation to improve the visualization and quantification of different image regions. To this end, we propose a 2D implementation of the V-net architecture. The results are compared to the popular medical imaging algorithm U-net and its variant U-net++. The performance of our results is validated with metrics widely used in the segmentation field: Dice coefficient, sensitivity, specificity, and accuracy. In addition, the loss function has a strong influence on model training; our model is therefore evaluated under different losses, namely cross-entropy, Dice-similarity loss, focal loss, and focal Tversky loss, to identify the best configuration for training. Extensive simulations of the proposed V-net model show that it attains 85.01% Dice coefficient, 85% sensitivity, 99% specificity, and 99% accuracy.
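The abstract does not give implementation details, but the Dice coefficient and the Tversky-family losses it names have standard definitions. The following is a minimal NumPy sketch of those formulas (function names, the `eps` smoothing term, and the `alpha`/`beta`/`gamma` defaults are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between a binary prediction mask and ground truth:
    2*|P ∩ T| / (|P| + |T|), smoothed by eps to avoid division by zero."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def tversky_index(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Tversky index: generalizes Dice by separately weighting false
    positives (alpha) and false negatives (beta). alpha = beta = 0.5
    recovers the Dice coefficient."""
    pred = pred.astype(float)
    target = target.astype(float)
    tp = (pred * target).sum()
    fp = (pred * (1.0 - target)).sum()
    fn = ((1.0 - pred) * target).sum()
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def focal_tversky_loss(pred, target, alpha=0.5, beta=0.5, gamma=0.75):
    """Focal Tversky loss: (1 - TI)^gamma; gamma < 1 increases the
    gradient contribution of hard, poorly segmented examples."""
    return (1.0 - tversky_index(pred, target, alpha, beta)) ** gamma
```

With `alpha = beta = 0.5` and `gamma = 1`, the focal Tversky loss reduces to the plain Dice-similarity loss `1 - Dice`, which is one way to see how these losses relate to the metrics reported above.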