Title: Using machine learning to predict perfusionists' critical decision-making during cardiac surgery
Authors: R D Dias, M A Zenati, G Rance, Rithy Srey, D Arney, L Chen, R Paleja, L R Kennedy-Metz, M Gombolay
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2022-01-01
DOI: https://doi.org/10.1080/21681163.2021.2002724
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9355042/pdf/nihms-1762195.pdf
Abstract: The cardiac surgery operating room is a high-risk and complex environment in which multiple experts work as a team to provide safe and excellent care to patients. During the cardiopulmonary bypass phase of cardiac surgery, critical decisions need to be made, and the perfusionists play a crucial role in assessing the available information and taking a certain course of action. In this paper, we report the findings of a simulation-based study that used machine learning to build predictive models of perfusionists' decision-making during critical situations in the operating room (OR). Performing 30-fold cross-validation across 30 random seeds, our machine learning approach achieved an accuracy of 78.2% (95% confidence interval: 77.8% to 78.6%) in predicting perfusionists' actions, with access to only 148 simulations. The findings from this study may inform the future development of computerised clinical decision support tools to be embedded into the OR, improving patient safety and surgical outcomes.

Title: A Feature-based Affine Registration Method for Capturing Background Lung Tissue Deformation for Ground Glass Nodule Tracking
Authors: Yehuda K Ben-Zikri, María Helguera, David Fetzer, David A Shrier, Stephen R Aylward, Deepak Chittajallu, Marc Niethammer, Nathan D Cahill, Cristian A Linte
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2022-01-01
DOI: https://doi.org/10.1080/21681163.2021.1994471
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9718421/pdf/nihms-1751692.pdf
Abstract: Lung nodule tracking assessment relies on cross-sectional measurements of the largest lesion profile depicted in initial and follow-up computed tomography (CT) images. However, apparent changes in nodule size assessed via simple image-based measurements may be compromised by the effect of background lung tissue deformation on the ground glass nodule (GGN) between the initial and follow-up images, leading to erroneous conclusions about nodule changes due to disease. To compensate for the lung deformation and enable consistent nodule tracking, we propose a feature-based affine registration method and study its performance against several other registration methods. We implement and test each registration method using both a lung-centered and a lesion-centered region of interest on ten patient CT datasets featuring twelve nodules, including both benign and malignant ground glass opacity (GGO) lesions containing pure GGNs, part-solid, or solid nodules. We evaluate each registration method according to the target registration error (TRE) computed across 30-50 homologous fiducial landmarks surrounding the lesions, selected by expert radiologists in both the initial and follow-up patient CT images. Our results show that the proposed feature-based affine lesion-centered registration yielded a 1.1 ± 1.2 mm TRE, a Symmetric Normalization deformable registration yielded a 1.2 ± 1.2 mm TRE, and a least-squares fit registration of the 30-50 validation fiducial landmarks yielded a 1.5 ± 1.2 mm TRE. Although the deformable registration yielded a slightly higher registration accuracy than the feature-based affine registration, the affine approach is significantly more computationally efficient, eliminates the need for ambiguous segmentation of GGNs featuring ill-defined borders, and reduces susceptibility to artificial deformations introduced by deformable registration, which may increase the similarity between the registered initial and follow-up images, over-compensate for the background lung tissue deformation, and, in turn, compromise the assessment of true disease-induced nodule change. We also assessed the registration qualitatively, by visual inspection of the subtraction images, and conducted a pilot pre-clinical study showing that the proposed feature-based lesion-centered affine registration effectively compensates for the background lung tissue deformation between the initial and follow-up images and serves as a reliable baseline registration prior to assessing lung nodule changes due to disease.

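As a rough illustration of the TRE metric used above, the sketch below fits a least-squares 3D affine transform to homologous fiducial landmarks and reports the mean ± std residual distance. The landmark arrays are hypothetical inputs, and this is a generic computation rather than the authors' feature-based registration pipeline.

```python
# Minimal sketch: moving_pts and fixed_pts are hypothetical (N x 3) arrays of
# corresponding fiducial landmarks, in millimetres.
import numpy as np


def fit_affine(moving_pts, fixed_pts):
    """Least-squares 3D affine (A, t) mapping moving_pts onto fixed_pts."""
    n = moving_pts.shape[0]
    M = np.hstack([moving_pts, np.ones((n, 1))])          # homogeneous coordinates
    params, *_ = np.linalg.lstsq(M, fixed_pts, rcond=None)
    A, t = params[:3].T, params[3]
    return A, t


def target_registration_error(moving_pts, fixed_pts, A, t):
    """Mean and std of distances between mapped and reference landmarks."""
    mapped = moving_pts @ A.T + t
    d = np.linalg.norm(mapped - fixed_pts, axis=1)
    return d.mean(), d.std()
```
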
{"title":"Interactive computation and visualization of deep brain stimulation effects using Duality.","authors":"J Vorwerk, D McCann, J Krüger, C R Butson","doi":"10.1080/21681163.2018.1484817","DOIUrl":"10.1080/21681163.2018.1484817","url":null,"abstract":"<p><p>Deep brain stimulation (DBS) is an established treatment for movement disorders such as Parkinson's disease or essential tremor. Currently, the selection of optimal stimulation settings is performed by iteratively adjusting the stimulation parameters and is a time consuming procedure that requires multiple clinic visits of several hours. Recently, computational models to predict and visualize the effect of DBS have been developed with the goal to simplify and accelerate this procedure by providing visual guidance and such models have been made available also on mobile devices. However, currently available visualization software still either lacks mobility, i.e., it is running on desktop computers and not easily available in clinical praxis, or flexibility, as the simulations that are visualized on mobile devices have to be precomputed. The goal of the pipeline presented in this paper is to close this gap: Using Duality, a newly developed software for the interactive visualization of simulation results, we implemented a pipeline that allows to compute DBS simulations in near-real time and instantaneously visualize the result on a tablet computer. Therefore, a client-server setup is used, so that the visualization and user interaction occur on the tablet computer, while the computations are carried out on a remote server. We present two examples for the use of Duality, one for postoperative programming and one for the planning of DBS surgery in a pre- or intraoperative setting. We carry out a performance analysis and present the results of a case study in which the pipeline for postoperative programming was applied.</p>","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7394461/pdf/nihms-1514891.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38219291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Predicted Microscopic Cortical Brain Images for Optimal Craniotomy Positioning and Visualization
Authors: Nazim Haouchine, Parikshit Juvekar, Alexandra Golby, Sarah Frisken
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2020-01-01
DOI: https://doi.org/10.1080/21681163.2020.1834874
Abstract: During a craniotomy, the skull is opened to give surgeons access to the brain so they can perform the procedure. The position and size of this opening are chosen to avoid critical structures, such as vessels, and to facilitate access to tumors. The operation is planned on pre-operative images and does not account for intra-operative surgical events. We present a novel image-guided neurosurgical system to optimize the craniotomy opening. Using physics-based modeling, we define a cortical deformation map that estimates the displacement field at candidate craniotomy locations. This deformation map is coupled with an image analogy algorithm that produces realistic synthetic images predicting both the geometry and the appearance of the brain surface before the skull is opened. These images account for cortical vessel deformations that may occur after opening the skull and are rendered in a way that aids the surgeon's understanding. Our method was tested retrospectively on patient data, showing good results and demonstrating the feasibility of practical use of our system.

{"title":"New Developments on Computational Methods and Imaging in Biomechanics and Biomedical Engineering","authors":"J. Tavares, P. Fernandes, F. Engenharia","doi":"10.1007/978-3-030-23073-9","DOIUrl":"https://doi.org/10.1007/978-3-030-23073-9","url":null,"abstract":"","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84840892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A marker-free registration method for standing X-ray panorama reconstruction for hip-knee-ankle axis deformity assessment
Authors: Yehuda K Ben-Zikri, Ziv R Yaniv, Karl Baum, Cristian A Linte
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2019-01-01
DOI: https://doi.org/10.1080/21681163.2018.1537859
Abstract: Accurate measurement of knee alignment, quantified by the hip-knee-ankle (HKA) angle (varus-valgus), serves as an essential biomarker in the diagnosis of various orthopaedic conditions and the selection of appropriate therapies. Such angular deformities are assessed from standing X-ray panoramas. However, the limited field of view of traditional X-ray imaging systems necessitates the acquisition of several sector images to capture an individual's standing posture, and their subsequent 'stitching' to reconstruct a panoramic image. Such panoramas are typically constructed manually by an X-ray imaging technician, often using various external markers attached to the individual's clothing and visible in two adjacent sector images. To eliminate human error and user-induced variability, improve consistency and reproducibility, and reduce the time associated with the traditional manual 'stitching' protocol, we propose an automatic panorama construction method that relies only on anatomical features reliably detected in the images, eliminating the need for any external markers or manual input from the technician. The method first performs a rough segmentation of the femur and the tibia; the sector images are then registered by evaluating a distance metric between the corresponding bones along their medial edge. The identified translations are then used to generate the standing panorama image. The method was evaluated on 95 patient image datasets from a database of X-ray images acquired across 10 clinical sites as part of the screening process for a multi-site clinical trial. The panorama reconstruction parameters yielded by the proposed method were compared to those used for the manual panorama construction, which served as the gold standard. The horizontal translation differences were 0.43 ± 1.95 mm and 0.26 ± 1.43 mm for the femur and tibia, respectively, while the vertical translation differences were 3.76 ± 22.35 mm and 1.85 ± 6.79 mm for the femur and tibia, respectively. Our results showed no statistically significant differences between the HKA angles measured using the automated vs. the manually generated panoramas, and led to similar decisions regarding patient inclusion/exclusion in the clinical trial. Thus, the proposed method provides performance comparable to manual panorama construction, with increased efficiency, consistency, and robustness.

Title: Automatic segmentation of the thumb trapeziometacarpal joint using parametric statistical shape modelling and random forest regression voting
Authors: Marco T Y Schneider, Ju Zhang, Joseph J Crisco, Arnold-Peter C Weiss, Amy L Ladd, Poul M F Nielsen, Thor Besier
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2019-01-01
DOI: https://doi.org/10.1080/21681163.2018.1501765
Abstract: We propose an automatic pipeline for creating parametric meshes of the trapeziometacarpal (TMC) joint suitable for shape modelling from clinical CT images, for the purpose of batch processing and analysis. The method uses 3D random forest regression voting (RFRV) with statistical shape model (SSM) segmentation. The method was demonstrated in a validation experiment involving 65 CT images, 15 of which were randomly selected and excluded from the training set for testing. With mean root mean squared (RMS) errors of 1.066 mm and 0.632 mm for the first metacarpal and trapezial bones, respectively, and a segmentation time of approximately 2 minutes per CT image, the preliminary results show promise for providing accurate 3D meshes of TMC joint bones for batch processing.

Title: The transition module: a method for preventing overfitting in convolutional neural networks
Authors: S Akbar, M Peikari, S Salama, S Nofech-Mozes, A L Martel
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2019-01-01
DOI: https://doi.org/10.1080/21681163.2018.1427148
Abstract: Digital pathology has advanced substantially over the last decade with the adoption of slide scanners in pathology labs. The use of digital slides to analyse diseases at the microscopic level is both cost-effective and efficient. Identifying complex tumour patterns in digital slides is a challenging problem but holds significant importance for tumour burden assessment, grading and many other pathological assessments in cancer research. The use of convolutional neural networks (CNNs) to analyse such complex images has been well adopted in digital pathology. However, in recent years, the architecture of CNNs has altered with the introduction of inception modules, which have shown great promise for classification tasks. In this paper, we propose a modified 'transition' module which encourages generalisation in a deep learning framework with few training samples. In the transition module, filters of varying sizes are used to encourage class-specific filters at multiple spatial resolutions, followed by global average pooling. We demonstrate the performance of the transition module in AlexNet and ZFNet for classifying breast tumours in two independent datasets of scanned histology sections; the inclusion of the transition module in these CNNs improved performance.

{"title":"Integrated 3D Anatomical Model for Automatic Myocardial Segmentation in Cardiac CT Imagery.","authors":"N Dahiya, A Yezzi, M Piccinelli, E Garcia","doi":"10.1080/21681163.2019.1583607","DOIUrl":"https://doi.org/10.1080/21681163.2019.1583607","url":null,"abstract":"<p><p>Segmentation of epicardial and endocardial boundaries is a critical step in diagnosing cardiovascular function in heart patients. The manual tracing of organ contours in Computed Tomography Angiography (CTA) slices is subjective, time-consuming and impractical in clinical setting. We propose a novel multi-dimensional automatic edge detection algorithm based on shape priors and principal component analysis (PCA). We have developed a highly customized parametric model for implicit representations of segmenting curves (3D) for Left Ventricle (LV), Right Ventricle (RV), and Epicardium (Epi) used simultaneously to achieve myocardial segmentation. We have combined these representations in a region-based image modeling framework with high level constraints enabling the modeling of complex cardiac anatomical structures to automatically guide the segmentation of endo/epicardial boundaries. Test results on 30 short-axis CTA datasets show robust segmentation with error (mean ± std mm) of (1.46 ± 0.41), (2.06 ± 0.65), (2.88 ± 0.59) for LV, RV and Epi respectively.</p>","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/21681163.2019.1583607","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37503335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Simulation of surface strain in tibiofemoral cartilage during walking for the prediction of collagen fiber orientation
Authors: Milad Rakhsha, Colin R Smith, Antonio Recuero, Scott C E Brandon, Michael F Vignos, Darryl G Thelen, Dan Negrut
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization
Publication date: 2019-01-01
DOI: https://doi.org/10.1080/21681163.2018.1442751
Abstract: The collagen fibers in the superficial layer of tibiofemoral articular cartilage exhibit distinct patterns of orientation, revealed by split lines. In this study, we introduce a simulation framework to predict cartilage surface loading during walking and investigate whether split-line orientations correspond with principal strain directions in the cartilage surface. The two-step framework uses a multibody musculoskeletal model to predict tibiofemoral kinematics, which are then imposed on a deformable surface model to predict surface strains. The deformable surface model uses absolute nodal coordinate formulation (ANCF) shell elements to represent the articular surface and a system of spring-dampers and internal pressure to represent the underlying cartilage. Simulations were performed to predict surface strains due to osmotic pressure, loading induced by walking, and the combination of the two. Time-averaged, magnitude-weighted first principal strain directions agreed well with split-line maps from the literature for both the osmotic-pressure and combined cases. This result suggests there is indeed a connection between collagen fiber orientation and mechanical loading, and indicates the importance of accounting for the pre-strain in the cartilage surface due to osmotic pressure.
