Young Jin Heo, Ikjong Park, K. H. Kim, Myoung-Joon Kim, W. Chung
2018 15th International Conference on Ubiquitous Robots (UR), June 2018. DOI: 10.1109/URAI.2018.8441889
Optical Coherence Tomography Image Segmentation for Cornea Surgery using Deep Neural Networks
This paper describes the use of deep neural networks for semantic segmentation of optical coherence tomography (OCT) images, accurately predicting segmentation masks from noisy and occluded OCT images. The OCT images and semantic masks are acquired from an ex-vivo porcine eye with commercial surgical tools. Simple post-processing computes the needle tip position and insertion depth from the predicted semantic masks. The segmentation accuracy, needle tip position error, and insertion depth error obtained with FCN-8s, dilated convolutions, and U-Net were compared. U-Net achieved the highest accuracy in the presence of occlusion and object overlap (81.5% mean IoU; 30.0-µm tip-position error). The results show that OCT image segmentation can be applied to the development of a surgical robot for corneal suturing.
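The "simple post-processing" step can be illustrated with a minimal sketch. The label values, the pixel spacing `um_per_px`, and the rule "needle tip = deepest needle pixel; insertion depth = tip depth below the topmost corneal pixel" are all assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np

# Hypothetical label values for the predicted semantic mask
NEEDLE, CORNEA = 1, 2

def needle_tip_and_depth(mask, um_per_px=3.0):
    """From a labeled OCT B-scan mask, locate the needle tip (the deepest
    needle pixel) and its insertion depth below the anterior corneal
    surface (the topmost cornea pixel), in micrometers."""
    needle_rows, needle_cols = np.nonzero(mask == NEEDLE)
    if needle_rows.size == 0:
        return None  # no needle detected in this frame
    i = np.argmax(needle_rows)                 # deepest (largest-row) needle pixel
    tip_r, tip_c = needle_rows[i], needle_cols[i]
    cornea_rows = np.nonzero((mask == CORNEA).any(axis=1))[0]
    surface_r = cornea_rows.min() if cornea_rows.size else tip_r
    depth_um = max(int(tip_r) - int(surface_r), 0) * um_per_px
    return (int(tip_r), int(tip_c)), depth_um

# Toy 6x6 mask: corneal surface at row 2, needle reaching row 4 in column 3
mask = np.zeros((6, 6), dtype=int)
mask[2:, :] = CORNEA
mask[1:5, 3] = NEEDLE
tip, depth = needle_tip_and_depth(mask)
# tip is (4, 3); depth is (4 - 2) * 3.0 = 6.0 µm
```

A real pipeline would additionally need to handle the occlusions the paper highlights, e.g. a needle shadow hiding the corneal surface directly beneath the tip.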