Medical Image Understanding and Analysis: 25th Annual Conference, MIUA 2021, Oxford, United Kingdom, July 12-14, 2021, Proceedings (conference held online). Latest publications:
First Trimester Gaze Pattern Estimation Using Stochastic Augmentation Policy Search for Single Frame Saliency Prediction
Elizaveta Savochkina, Lok Hin Lee, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
DOI: 10.1007/978-3-030-80432-9_28

Abstract: While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identifying spatio-temporal patterns that are important for US scanning. This paper investigates using sonographer gaze, in the form of gaze-tracking data, in a multimodal imaging deep learning framework to assist the analysis of the first-trimester fetal ultrasound scan. Specifically, we propose an encoder-decoder convolutional neural network with skip connections to predict the visual gaze for each frame, using 115 first-trimester ultrasound videos: 29,250 video frames for training, 7,290 for validation, and 9,126 for testing. We find that a dataset of this size benefits from automated data augmentation, which in turn alleviates model overfitting and reduces the imbalance in structural variation of US anatomical views between the training and test datasets. Specifically, we employ a stochastic augmentation policy search method to improve performance. Using the learnt policies, our models outperform the baseline on KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28).

Published: July 2021. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7611594/pdf/EMS132092.pdf
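The four figures quoted above (KLD, SIM, NSS, CC) are standard saliency-evaluation metrics. The paper's evaluation code is not reproduced here; as a rough illustrative sketch, assuming the predicted and ground-truth saliency maps are non-negative NumPy arrays of equal shape, and a binary fixation map for NSS:

```python
import numpy as np

def kld(pred, gt, eps=1e-7):
    # Kullback-Leibler divergence between the two maps treated as
    # probability distributions (lower is better).
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.sum(q * np.log(eps + q / (p + eps))))

def sim(pred, gt, eps=1e-7):
    # Histogram-intersection similarity of the normalized maps
    # (higher is better, 1.0 for identical distributions).
    p = pred / (pred.sum() + eps)
    q = gt / (gt.sum() + eps)
    return float(np.minimum(p, q).sum())

def nss(pred, fixations, eps=1e-7):
    # Normalized Scanpath Saliency: mean z-scored saliency value
    # at the fixated pixels (higher is better).
    s = (pred - pred.mean()) / (pred.std() + eps)
    return float(s[fixations.astype(bool)].mean())

def cc(pred, gt, eps=1e-7):
    # Pearson linear correlation coefficient between the two maps.
    p = (pred - pred.mean()).ravel()
    q = (gt - gt.mean()).ravel()
    return float((p @ q) / (np.linalg.norm(p) * np.linalg.norm(q) + eps))
```

Note the directions differ: KLD is a dissimilarity (the reported 2.16 beats the baseline's 3.17 because lower is better), while SIM, NSS and CC are similarities where higher is better.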
Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods
Zixin Yang, Richard Simon, Yangming Li, Cristian A. Linte
DOI: 10.1007/978-3-030-80432-9_26

Abstract: In the context of minimally invasive surgery, estimating depth from stereo endoscopy plays a crucial role in three-dimensional (3D) reconstruction, surgical navigation, and augmented reality (AR) visualization. However, the task poses three challenges: 1) feature-poor surface appearance, often corrupted by artifacts, makes correspondences difficult to identify; 2) ground-truth depth is difficult to obtain; and 3) endoscopy acquisitions with accurately calibrated camera parameters are rare, as the camera is often adjusted during an intervention. To address these difficulties, we propose an unsupervised depth estimation framework (END-flow) based on an unsupervised optical flow network trained on un-rectified binocular videos without calibrated camera parameters. The proposed END-flow architecture is compared with traditional stereo matching, self-supervised depth estimation, unsupervised optical flow, and supervised methods on the Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) Challenge dataset. Experimental results show that our method outperforms several state-of-the-art techniques and achieves performance close to that of supervised methods.

Published: July 2021. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9125693/pdf/
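Unsupervised optical-flow networks of this kind are commonly trained with a photometric reconstruction objective: warp one view toward the other using the predicted flow and penalize appearance differences, so no ground-truth depth or flow is needed. The END-flow objective itself is not reproduced here; the following is a generic NumPy sketch of that idea, with all names illustrative:

```python
import numpy as np

def warp_bilinear(img, flow):
    # Warp a grayscale image (H, W) by a flow field (H, W, 2) holding
    # (dx, dy) per pixel: output(y, x) samples img at (x + dx, y + dy)
    # with bilinear interpolation, clamping at the image border.
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def photometric_loss(left, right, flow):
    # L1 photometric error: reconstruct the left view by warping the
    # right view with the predicted flow, then compare to the real left.
    return float(np.abs(left - warp_bilinear(right, flow)).mean())
```

Minimizing this loss over the network's flow predictions provides the unsupervised training signal; once rectified geometry and calibration are available, horizontal flow (disparity) converts to depth, but the appeal of flow-based training is that the loss itself needs neither.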