Title: Feasibility study of catheter segmentation in 3D Frustum ultrasounds by DCNN
Authors: Lan Min, Hongxu Yang, Caifeng Shan, Alexander F. Kolen, P. D. With
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549084
Abstract: 3D ultrasound (US) is increasingly employed in interventional therapies such as cardiac catheterization. Interpreting 3D US images and localizing the catheter during surgery requires an experienced sonographer, so image-based catheter detection can help the sonographer localize the instrument in 3D US images in a timely manner. Conventional 3D imaging methods operate in the Cartesian domain, which is limited by bandwidth and suffers information loss when the data are converted from the original acquisition space, the Frustum domain. Performing catheter segmentation directly in Frustum space therefore reduces computational cost and improves efficiency. In this paper, we present a catheter segmentation method for 3D Frustum images based on a deep convolutional neural network (DCNN). To better describe 3D information while limiting DCNN complexity, cross-planes with spatial gaps are extracted for each voxel, and the DCNN processes these cross-planes to classify the voxel as catheter or non-catheter. To accelerate prediction over the whole US Frustum volume, a filter-based pre-selection reduces the number of voxels passed to the DCNN. In experiments on an ex-vivo dataset, the proposed method segments the catheter in Frustum images with a Dice score of 0.67 within 3 seconds, indicating the possibility of real-time application.
{"title":"Deep learning-based automatic prostate segmentation in 3D transrectal ultrasound images from multiple acquisition geometries and systems","authors":"N. Orlando, D. Gillies, I. Gyacskov, A. Fenster","doi":"10.1117/12.2549804","DOIUrl":"https://doi.org/10.1117/12.2549804","url":null,"abstract":"Transrectal ultrasound (TRUS) fusion-guided biopsy and brachytherapy (BT) offer promising diagnostic and therapeutic improvements to conventional practice for prostate cancer. One key component of these procedures is accurate segmentation of the prostate in three-dimensional (3D) TRUS images to define margins used for accurate targeting and guidance techniques. However, manual prostate segmentation is a time-consuming and difficult process that must be completed by the physician intraoperatively, often while the patient is under sedation (biopsy) or anesthetic (BT). Providing physicians with a quick and accurate prostate segmentation immediately after acquiring a 3D TRUS image could benefit multiple minimally invasive prostate interventional procedures and greatly reduce procedure time. Our solution to this limitation is the development of a convolutional neural network to segment the prostate in 3D TRUS images using multiple commercial ultrasound systems. Training of a modified U-Net was performed on 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and BT procedures. Our approach for 3D segmentation involved prediction on 2D radial slices, which were reconstructed into a 3D geometry. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 unseen 3D side-fire images. Pixel map comparisons (Dice similarity coefficient (DSC), recall, and precision) and volume percent difference (VPD) were computed to assess error in the segmentation algorithm. Our algorithm performed with a 93.5% median DSC and 5.89% median VPD with a <0.7 s computation time, offering the possibility for reduced treatment time during prostate interventional procedures.","PeriodicalId":302939,"journal":{"name":"Medical Imaging: Image-Guided Procedures","volume":"14 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131070078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Open-source platform for automated collection of training data to support video-based feedback in surgical simulators
Authors: J. Laframboise, T. Ungi, K. Sunderland, B. Zevin, G. Fichtinger
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549878
Abstract: Purpose: Surgical training could be improved by automatic detection of workflow steps and similar applications of image processing. A platform for collecting and organizing tracking and video data would enable rapid development of image-processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer / PLUS Toolkit as a platform for automatic collection of labelled data and for model deployment. Methods: We use PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprising optical tracking data and video data from a camera. To demonstrate the platform, we train a neural network on these data to automatically identify tissues, while the tracking data identifies which tool is in use. The solution is deployed as a custom Slicer module. Results: The platform allowed the collection of 128,548 labelled frames, 98.5% of which were correctly labelled. A CNN trained on these data achieved 98% accuracy on new data. With minimal code, the model was deployed in 3D Slicer on real-time data at 30 fps. Conclusion: We found the 3D Slicer and PLUS Toolkit platform to be viable for collecting labelled training data and deploying a solution that combines automatic video processing with optical tool tracking. We designed an accurate proof-of-concept system that identifies tissue-tool interactions using a trained CNN and optical tracking.
Title: Applications of VR medical image visualization to chordal length measurements for cardiac procedures
Authors: Patrick Carnahan, John Moore, D. Bainbridge, G. Wheeler, S. Deng, K. Pushparajah, E. Chen, J. Simpson, T. Peters
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549597
Abstract: Cardiac surgeons rely on diagnostic imaging for preoperative planning. Recently, progress has been made in 3D ultrasound (US) spatial compounding tailored to cardiac images. Compounded 3D ultrasound volumes can capture complex anatomical structures at a level of detail similar to a CT scan; however, these images are difficult to display and visualize because they capture a larger amount of surrounding tissue, including excess noise at the volume boundaries. Traditional medical image visualization software does not easily allow viewing 2D slices at arbitrary angles, and 3D rendering techniques do not adequately convey depth information without advanced transfer functions or other depth-encoding techniques that must be tuned to each individual dataset. Previous studies have shown that effective use of virtual reality (VR) can improve image visualization and usability and reduce surgical errors in case planning. We demonstrate the novel use of a VR system for measuring chordal lengths in compounded transesophageal echocardiography (TEE) images. Compounded images are constructed from en-face TEE views registered and spatially compounded with multiple transgastric TEE views in order to capture both the mitral valve leaflets and the chordae tendineae in high detail. Users performed linear measurements of chordae visible in these images using both traditional software and a VR platform. Compared to traditional software, the VR platform offered a more intuitive experience with respect to orientation; however, users felt there was a lack of precision when performing the measurement tasks.
Title: Image-based deformable motion compensation in cone-beam CT: translation to clinical studies in interventional body radiology
Authors: S. Capostagno, A. Sisniega, J. W. Stayman, T. Ehtiati, C. R. Weiss, J. Siewerdsen
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549998
Abstract: Purpose: Complex, involuntary, non-periodic, deformable motion is a confounding factor for cone-beam CT (CBCT) image quality owing to long (>10 s) scan times. We report and demonstrate an image-based deformable motion compensation method for CBCT, including phantom, cadaver, and animal studies as precursors to clinical studies. Methods: The method corrects deformable motion in CBCT scan data by solving for a motion vector field (MVF) that optimizes a sharpness criterion in the 3D image (viz., gradient entropy). MVFs are estimated by interpolating M locally rigid motion trajectories across N temporal nodes and are incorporated in a modified 3D filtered backprojection approach. The method was evaluated in a cervical spine phantom under flexion and in a cadaver undergoing variable magnitudes of complex motion while imaged on a mobile C-arm (Cios Spin 3D, Siemens Healthineers, Forchheim, Germany). Further assessment was performed in a preclinical animal study using a clinical fixed-room C-arm (Artis Zee, Siemens Healthineers, Forchheim, Germany). Results: In phantom studies, the algorithm restored visibility of cervical vertebrae under strong flexion, reducing the root-mean-square error by 60% compared to a motion-free reference. Reduced motion artifacts (blurring, streaks, and loss of soft-tissue edges) were evident in abdominal CBCT of a cadaver imaged during small, medium, and large motion-induced deformation. The animal study demonstrated reduction of streaks caused by complex motion of bowel gas during the scan. Conclusion: Overall, the studies demonstrate the robustness of the algorithm to a broad range of motion amplitudes, frequencies, data sources (i.e., mobile or fixed-room C-arms), and other confounding factors in real (not simulated) experimental data (e.g., truncation and scatter). These preclinical studies successfully demonstrate reduction of motion artifacts in CBCT and support translation of the method to clinical studies in interventional body radiology.
Title: Motion induced segmentation of stone fragments in ureteroscopy video
Authors: Soumya Gupta, Sharib Ali, L. Goldsmith, B. Turney, J. Rittscher
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549657
Abstract: Ureteroscopy is a conventional procedure used for localization and removal of kidney stones. A laser is commonly used to fragment the stones until they are small enough to be removed. The surgical team often faces tremendous challenges in performing this task, mainly due to poor image quality, floating debris, and occlusions in the endoscopy video. Automated localization and segmentation can help to perform stone fragmentation efficiently. However, automatic segmentation of kidney stones is complex and challenging owing to stone heterogeneity in shape, size, texture, color, and position. In addition, a dynamic background, motion blur, local deformations, occlusions, varying illumination conditions, and visual clutter from stone debris make the segmentation task even more challenging. In this paper, we present a novel illumination-invariant, optical flow-based segmentation technique. We introduce a multi-frame dense optical flow estimation in a primal-dual optimization framework embedded with a robust data term based on normalized correlation transform descriptors. The proposed technique leverages the motion fields between multiple frames, reducing the effect of blur, deformations, occlusions, and debris, and the proposed descriptor makes the method robust to illumination changes and a dynamic background. Both qualitative and quantitative evaluations show the efficacy of the proposed method on ureteroscopy data. Our algorithm shows an improvement of 5-8% over all evaluation metrics compared to the previous method, and our multi-frame strategy outperforms the classically used two-frame model.
{"title":"Three-dimensional ultrasound for monitoring knee inflammation and cartilage damage in osteoarthritis and rheumatoid arthritis","authors":"S. Papernick, D. Gillies, T. Appleton, A. Fenster","doi":"10.1117/12.2549624","DOIUrl":"https://doi.org/10.1117/12.2549624","url":null,"abstract":"","PeriodicalId":302939,"journal":{"name":"Medical Imaging: Image-Guided Procedures","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130852922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Windows GUI application for real-time image guidance during motion-managed proton beam therapy
Authors: Zheng Zhang, C. Beltran, S. Corner, A. Deisher, M. Herman, J. Kruse, H. Tseung, E. Tryggestad
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549748
Title: Multi-view 3D echocardiography volume compounding for mitral valve procedure planning
Authors: Patrick Carnahan, John Moore, D. Bainbridge, E. Chen, T. Peters
Published in: Medical Imaging: Image-Guided Procedures, 16 March 2020
DOI: https://doi.org/10.1117/12.2549598
Abstract: Echocardiography is widely used for obtaining images of the heart for both preoperative diagnostic and intraoperative purposes. For procedures targeting the mitral valve, transesophageal echocardiography (TEE) is the primary imaging modality, as it provides clear 3D images of the valve and surrounding tissues. However, TEE suffers from image artifacts and signal dropout, particularly for structures lying below the valve, including the chordae tendineae. To see these structures, alternative echo views are required; however, because of the limited field of view obtainable, the entire ventricle cannot be visualized in sufficient detail from a single 3D acquisition. This results in a steep learning curve for interpreting these images, as the clinician must mentally reconcile the multiple views. We propose applying an image compounding technique to TEE images acquired from a mid-esophageal position and a number of transgastric positions in order to reconstruct a high-detail image of the mitral valve and sub-valvular structures. The compounding technique uses a semi-simultaneous group-wise registration to align the multiple 3D volumes, followed by a weighted intensity compounding step. It is validated using images of a custom silicone phantom, excised porcine mitral valve units, and two patient datasets. We demonstrate that the technique accurately captures the physical structures present, including the mitral valve, chordae tendineae, and papillary muscles.