Robustness of Deep Learning Methods for Ocular Fundus Segmentation: Evaluation of Blur Sensitivity

V. Petrovic, Gorana Gojic, D. Dragan, Dušan B. Gajić, Nebojsa Horvat, R. Turovic, A. Oros

2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA), August 2020
DOI: 10.1109/INISTA49547.2020.9194612
Citations: 3
Abstract
This paper analyzes the sensitivity of deep learning methods for ocular fundus segmentation, using an empirical methodology based on non-adversarial perturbed datasets. The research is motivated by the perceived needs of mass screening and self-administered tests, in which autonomous or semi-autonomous artificially intelligent methods are needed and may be given substandard images with focus issues. Such substandard images are simulated by applying blurring algorithms of varying designs and kernel sizes, and the networks' sensitivity to the resulting perturbations is measured. Each network's output on the unblurred originals from the testing subset of the DRIVE ocular fundus image dataset serves as the ground truth. The networks studied were VesselUNet (Ronneberger et al. and Huang et al.), VesselGAN (Son et al.), and VesselFCNN (Oliveira et al.). Statistical analysis of the resulting n = 3600 sample determined that the datapoints relating sensitivity to kernel size can be fitted with a sigmoid (with a maximum final tolerance of 9.33e-6), and that, using robust pairwise Holm-corrected comparisons, VesselUNet is the least sensitive (p-values < 5e-8). The least disruptive perturbation was Gaussian blur; the most disruptive was motion blur unaligned with the cardinal axes. The analysis gives us reason to investigate this problem in more depth and to develop more robust methods for ocular fundus segmentation.
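To make the perturbation setup concrete, the following is a minimal Python sketch of the two kinds of blur the abstract names: Gaussian blur and linear motion blur rotated off the cardinal axes. It is not the authors' code; OpenCV is assumed, and the kernel sizes, the 45-degree motion angle, and the example file path are illustrative assumptions rather than the paper's exact protocol.

```python
# Sketch of the blur perturbations described in the abstract (assumptions noted above).
import cv2
import numpy as np

def gaussian_blur(image, ksize):
    """Gaussian blur with a square ksize x ksize kernel; ksize must be odd."""
    return cv2.GaussianBlur(image, (ksize, ksize), 0)

def motion_blur(image, ksize, angle_deg=45.0):
    """Linear motion blur; angle_deg=45 gives motion unaligned with the cardinal axes."""
    kernel = np.zeros((ksize, ksize), dtype=np.float32)
    kernel[ksize // 2, :] = 1.0                              # horizontal line of ones
    center = (ksize / 2 - 0.5, ksize / 2 - 0.5)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)    # rotate the line kernel
    kernel = cv2.warpAffine(kernel, rot, (ksize, ksize))
    kernel /= kernel.sum()                                   # normalize to preserve brightness
    return cv2.filter2D(image, -1, kernel)

# Example: perturb one DRIVE test image at increasing kernel sizes.
fundus = cv2.imread("01_test.tif")                           # path is illustrative
perturbed = {k: (gaussian_blur(fundus, k), motion_blur(fundus, k))
             for k in range(3, 32, 2)}
```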
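The two statistical steps the abstract names, fitting a sigmoid to sensitivity over kernel size and Holm-corrected pairwise comparisons between networks, could look roughly like the sketch below. The data arrays are placeholders, and the Wilcoxon signed-rank test is an assumption standing in for whatever robust pairwise test the paper actually used.

```python
# Sketch of the sigmoid fit and Holm-corrected comparisons (placeholder data).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def sigmoid(k, L, k0, s):
    """Logistic curve: sensitivity saturating at L as kernel size k grows."""
    return L / (1.0 + np.exp(-(k - k0) / s))

kernel_sizes = np.arange(3, 32, 2, dtype=float)
sensitivity = np.sort(np.random.rand(len(kernel_sizes)))    # placeholder measurements
params, _ = curve_fit(sigmoid, kernel_sizes, sensitivity,
                      p0=[1.0, 15.0, 3.0], maxfev=10000)

# Pairwise network comparisons with Holm correction; scores are placeholders.
scores = {"VesselUNet": np.random.rand(50),
          "VesselGAN": np.random.rand(50),
          "VesselFCNN": np.random.rand(50)}
pairs = [("VesselUNet", "VesselGAN"), ("VesselUNet", "VesselFCNN"),
         ("VesselGAN", "VesselFCNN")]
raw_p = [wilcoxon(scores[a], scores[b]).pvalue for a, b in pairs]
reject, adjusted_p, _, _ = multipletests(raw_p, method="holm")
```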