{"title":"Adaptive neighborhood triplet loss: enhanced segmentation of dermoscopy datasets by mining pixel information.","authors":"Mohan Xu, Lena Wiese","doi":"10.1007/s11548-024-03241-9","DOIUrl":"https://doi.org/10.1007/s11548-024-03241-9","url":null,"abstract":"<p><strong>Purpose: </strong>The integration of deep learning in image segmentation technology markedly improves the automation capabilities of medical diagnostic systems, reducing the dependence on the clinical expertise of medical professionals. However, the accuracy of image segmentation is still impacted by various interference factors encountered during image acquisition.</p><p><strong>Methods: </strong>To address this challenge, this paper proposes a loss function designed to mine specific pixel information that changes dynamically during the training process. Based on the triplet concept, this dynamic change is leveraged to drive the predicted image boundaries closer to the real boundaries.</p><p><strong>Results: </strong>Extensive experiments on the PH2 and ISIC2017 dermoscopy datasets validate that the proposed loss function overcomes the limitations of traditional triplet loss methods in image segmentation applications. It not only enhances the Jaccard indices of neural networks by 2.42% and 2.21% on PH2 and ISIC2017, respectively, but also enables networks utilizing it to generally surpass those that do not in terms of segmentation performance.</p><p><strong>Conclusion: </strong>This work proposed a loss function that deeply mines the information of specific pixels without incurring additional training costs, significantly improving the automation of neural networks in image segmentation tasks. This loss function adapts to dermoscopic images of varying quality and demonstrates higher effectiveness and robustness than other boundary loss functions, making it suitable for image segmentation tasks across various neural networks.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141876608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VESCL: an open source 2D vessel contouring library.","authors":"S F Frisken, N Haouchine, D D Chlorogiannis, V Gopalakrishnan, A Cafaro, W T Wells, A J Golby, R Du","doi":"10.1007/s11548-024-03212-0","DOIUrl":"10.1007/s11548-024-03212-0","url":null,"abstract":"<p><strong>Purpose: </strong>VESCL (pronounced 'vessel') is a novel vessel contouring library for computer-assisted 2D vessel contouring and segmentation. VESCL facilitates manual vessel segmentation in 2D medical images to generate gold-standard datasets for training, testing, and validating automatic vessel segmentation.</p><p><strong>Methods: </strong>VESCL is an open-source C++ library designed for easy integration into medical image processing systems. VESCL provides an intuitive interface for drawing variable-width parametric curves along vessels in 2D images. It includes highly optimized localized filtering to automatically fit drawn curves to the nearest vessel centerline and automatically determine the varying vessel width along each curve. To support a variety of segmentation paradigms, VESCL can export multiple segmentation representations including binary segmentations, occupancy maps, and distance fields.</p><p><strong>Results: </strong>VESCL provides sub-pixel resolution for vessel centerlines and vessel widths. It is optimized to segment small vessels with single- or sub-pixel widths that are visible to the human eye but hard to segment automatically via conventional filters. When tested on neurovascular digital subtraction angiography (DSA), VESCL's intuitive hand-drawn input with automatic curve fitting increased the speed of fully manual segmentation by 22× over conventional methods and by 3× over the best publicly available computer-assisted manual segmentation method. Accuracy was shown to be within the range of inter-operator variability of gold standard manually segmented data from a publicly available dataset of neurovascular DSA images as measured using Dice scores. Preliminary tests showed similar improvements for segmenting DSA of coronary arteries and RGB images of retinal arteries.</p><p><strong>Conclusion: </strong>VESCL is an open-source C++ library for contouring vessels in 2D images which can be used to reduce the tedious, labor-intensive process of manually generating gold-standard segmentations for training, testing, and comparing automatic segmentation methods.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1627-1636"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141328066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segmentation of mediastinal lymph nodes in CT with anatomical priors.","authors":"Tejas Sudharshan Mathai, Bohan Liu, Ronald M Summers","doi":"10.1007/s11548-024-03165-4","DOIUrl":"10.1007/s11548-024-03165-4","url":null,"abstract":"<p><strong>Purpose: </strong>Lymph nodes (LNs) in the chest have a tendency to enlarge due to various pathologies, such as lung cancer or pneumonia. Clinicians routinely measure nodal size to monitor disease progression, confirm metastatic cancer, and assess treatment response. However, variations in their shapes and appearances make it cumbersome to identify LNs, which reside outside of most organs.</p><p><strong>Methods: </strong>We propose to segment LNs in the mediastinum by leveraging the anatomical priors of 28 different structures (e.g., lung, trachea, etc.) generated by the public TotalSegmentator tool. The CT volumes from 89 patients available in the public NIH CT Lymph Node dataset were used to train three 3D off-the-shelf nnUNet models to segment LNs. The public St. Olavs dataset containing 15 patients (out-of-training-distribution) was used to evaluate the segmentation performance.</p><p><strong>Results: </strong>For LNs with short axis diameter ≥ 8 mm, the 3D cascade nnUNet model obtained the highest Dice score of 67.9 ± 23.4 and lowest Hausdorff distance error of 22.8 ± 20.2. For LNs of all sizes, the Dice score was 58.7 ± 21.3, representing a ≥ 10% improvement over a recently published approach evaluated on the same test dataset.</p><p><strong>Conclusion: </strong>To our knowledge, we are the first to harness 28 distinct anatomical priors to segment mediastinal LNs, and our work can be extended to other nodal zones in the body. The proposed method has the potential for improved patient outcomes through the identification of enlarged nodes in initial staging CT scans.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1537-1544"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11329534/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140916595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing the accuracy of robotic bronchoscopy in localization & targeting of small pulmonary lesions.","authors":"Jessica Copeland, Mehida Rojas-Alexandre, Lilian Tsai, Franklin King, Nobuhiko Hata","doi":"10.1007/s11548-024-03152-9","DOIUrl":"10.1007/s11548-024-03152-9","url":null,"abstract":"<p><strong>Purpose: </strong>Considering the recent implementation of lung cancer screening guidelines, it is crucial that small pulmonary nodules are accurately diagnosed. There is a significant need for quick, precise, and minimally invasive biopsy methods, especially for patients with small lung lesions in the outer periphery. Robotic bronchoscopy (RB) has recently emerged as a novel solution. The purpose of this study was to evaluate the accuracy of RB compared to the existing standard, electromagnetic navigational bronchoscopy (EM-NB).</p><p><strong>Methods: </strong>A prospective, single-blinded, and randomized-controlled study was performed to compare the accuracy of RB to EM-NB in localizing and targeting pulmonary lesions in a porcine lung model. Four operators were tasked with navigating to four pulmonary targets in the outer periphery of a porcine lung, to which they were blinded, using both the RB and EM-NB systems. The dependent variable was accuracy. Accuracy was measured as a rate of success in lesion localization and targeting, the distance from the center of the pulmonary target, and by anatomic location. The independent variable was the navigation system; RB was compared to EM-NB using 1:1 randomization.</p><p><strong>Results: </strong>Of 75 attempts, 72 were successful in lesion localization and 60 were successful in lesion targeting. The success rate for lesion localization was 100% with RB and 91% with EM-NB. The success rate for lesion targeting was 93% with RB and 80% with EM-NB. RB demonstrated superior accuracy in reaching the distance from the center of the lesion, at 0.62 mm compared to EM-NB at 1.28 mm (p = 0.001). Accuracy was improved using RB compared to EM-NB for lesions in the LLL (p = 0.025), LUL (p < 0.001), and RUL (p < 0.001).</p><p><strong>Conclusion: </strong>Our findings support RB as a more accurate method of navigating and localizing small peripheral pulmonary targets when compared to standard EM-NB in a porcine lung model. This may be attributed to the ability of RB to reduce the substantial tissue displacement seen with standard EM-NB navigation. As the development and application of RB advances, so will the ability to accurately diagnose small peripheral lung cancer nodules, providing patients with early-stage lung cancer the best possible outcomes.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1505-1515"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141421761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI-assisted automatic MRI-based tongue volume evaluation in motor neuron disease (MND).","authors":"Ina Vernikouskaya, Hans-Peter Müller, Albert C Ludolph, Jan Kassubek, Volker Rasche","doi":"10.1007/s11548-024-03099-x","DOIUrl":"10.1007/s11548-024-03099-x","url":null,"abstract":"<p><strong>Purpose: </strong>Motor neuron disease (MND) causes damage to the upper and lower motor neurons including the motor cranial nerves, the latter resulting in bulbar involvement with atrophy of the tongue muscle. To measure tongue atrophy, an operator-independent automatic segmentation of the tongue is crucial. The aim of this study was to apply a convolutional neural network (CNN) to MRI data in order to determine the volume of the tongue.</p><p><strong>Methods: </strong>A single triplanar CNN of U-Net architecture trained on axial, coronal, and sagittal planes was used for the segmentation of the tongue in MRI scans of the head. The 3D volumes were processed slice-wise across the three orientations and the predictions were merged using different voting strategies. This approach was developed using MRI datasets from 20 patients with 'classical' spinal amyotrophic lateral sclerosis (ALS) and 20 healthy controls and, in a pilot study, applied to tongue volume quantification in 19 controls and 19 ALS patients with the variant progressive bulbar palsy (PBP).</p><p><strong>Results: </strong>Consensus models with softmax averaging and majority voting achieved the highest segmentation accuracy and outperformed predictions on single orientations and consensus models with union and unanimous voting. At the group level, reduction in tongue volume was not observed in classical spinal ALS, but was significant in the PBP group, as compared to controls.</p><p><strong>Conclusion: </strong>Utilizing a single U-Net trained on three orthogonal orientations with subsequent merging of the respective orientations in an optimized consensus model reduces the number of erroneous detections and improves the segmentation of the tongue. The CNN-based automatic segmentation allows for accurate quantification of the tongue volumes in all subjects. The application to the ALS variant PBP showed significant reduction of the tongue volume in these patients and opens the way for unbiased future longitudinal studies in diseases affecting tongue volume.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1579-1587"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11329588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140307730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using diffusion models to generate synthetic labeled data for medical image segmentation.","authors":"Daniel G Saragih, Atsuhiro Hibi, Pascal N Tyrrell","doi":"10.1007/s11548-024-03213-z","DOIUrl":"10.1007/s11548-024-03213-z","url":null,"abstract":"<p><strong>Purpose: </strong>Medical image analysis has become a prominent area where machine learning has been applied. However, high-quality, publicly available data are limited either due to patient privacy laws or the time and cost required for experts to annotate images. In this retrospective study, we designed and evaluated a pipeline to generate synthetic labeled polyp images for augmenting medical image segmentation models with the aim of reducing this data scarcity.</p><p><strong>Methods: </strong>We trained diffusion models on the HyperKvasir dataset, comprising 1000 images of polyps in the human GI tract from 2008 to 2016. Qualitative expert review, Fréchet Inception Distance (FID), and Multi-Scale Structural Similarity (MS-SSIM) were tested for evaluation. Additionally, various segmentation models were trained with the generated data and evaluated using Dice score (DS) and Intersection over Union (IoU).</p><p><strong>Results: </strong>Our pipeline produced images more akin to real polyp images based on FID scores. Segmentation model performance also showed improvements over GAN methods when trained entirely, or partially, with synthetic data, despite requiring less compute for training. Moreover, the improvement persists when tested on different datasets, showcasing the transferability of the generated images.</p><p><strong>Conclusions: </strong>The proposed pipeline produced realistic image and mask pairs which could reduce the need for manual data annotation when performing a machine learning task. We support this use case by showing that the methods proposed in this study enhanced segmentation model performance, as measured by Dice and IoU scores, when trained fully or partially on synthetic data.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1615-1625"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141428268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contraction assessment of abdominal muscles using automated segmentation designed for wearable ultrasound applications.","authors":"Hannah Strohm, Sven Rothluebbers, Luis Perotti, Oskar Stamm, Marc Fournelle, Juergen Jenne, Matthias Guenther","doi":"10.1007/s11548-024-03204-0","DOIUrl":"10.1007/s11548-024-03204-0","url":null,"abstract":"<p><strong>Purpose: </strong>Wearable ultrasound devices can be used to continuously monitor muscle activity. One possible application is to provide real-time feedback during physiotherapy, to show a patient whether an exercise is performed correctly. Algorithms that automatically analyze the data can eliminate the need for manual assessment and annotation and speed up evaluations, especially for real-time video sequences. They could even be used to present feedback in an understandable manner to patients in a home-use scenario. The following work investigates three deep-learning-based segmentation approaches for abdominal muscles in ultrasound videos during a segmental stabilizing exercise. The segmentations are used to automatically classify the contraction state of the muscles.</p><p><strong>Methods: </strong>The first approach employs a simple 2D network, while the remaining two integrate the time information from the videos either via additional tracking or directly into the network architecture. The contraction state is determined by comparing measures such as muscle thickness and center of mass between rest and exercise. A retrospective analysis is conducted, but a real-time scenario is also simulated, where classification is performed during exercise.</p><p><strong>Results: </strong>Using the proposed segmentation algorithms, 71% of the muscle states are classified correctly in the retrospective analysis, compared to 90% accuracy with manual reference segmentation. For the real-time approach, the majority of feedback given during exercise is correct whenever the retrospective analysis also reached the correct result.</p><p><strong>Conclusion: </strong>Both retrospective and real-time analysis prove to be feasible. While no substantial differences between the algorithms were observed regarding classification, the networks incorporating the time information showed temporally more consistent segmentations. Limitations of the approaches as well as reasons for failing cases in segmentation, classification and real-time assessment are discussed, and requirements regarding image quality and hardware design are derived.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1607-1614"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and evaluation of an AR-based thermal imaging system for planning reconstructive surgeries.","authors":"Michael Unger, Annika Hänel, Claire Chalopin, Dirk Halama","doi":"10.1007/s11548-024-03184-1","DOIUrl":"10.1007/s11548-024-03184-1","url":null,"abstract":"<p><strong>Introduction: </strong>Thermal imaging can be used for the non-invasive detection of blood vessels of the skin. However, mapping the results to the patient currently lacks user-friendliness. Augmented reality may provide a useful tool to superimpose thermal information on the patient.</p><p><strong>Methods: </strong>A system to support planning in reconstructive surgery using a thermal camera was designed. The obtained information was superimposed on the physical object using a Microsoft HoloLens. An RGB, depth, and thermal camera were combined to capture a scene of different modalities and reconstruct a virtual scene in real time. To register the different cameras and the AR device, an active calibration target was developed and evaluated. A Vuforia marker was used to register the hologram in the virtual space. The accuracy of the projected hologram was evaluated in a laboratory setting with participants by measuring the error between the physical object and the hologram.</p><p><strong>Results: </strong>The AR-based system was evaluated by 21 participants in a laboratory setting. The mean projection error is 10.3 ± 9.4 mm. The system is able to stream a three-dimensional scene with augmented thermal information in real time at 5 frames per second. The active calibration target can be used independently of the environment.</p><p><strong>Conclusion: </strong>The calibration target provides an easy-to-use method for the registration of cameras capturing the visible to long-infrared spectral range. The inside-out tracking of the HoloLens in combination with a Vuforia marker is not accurate enough for the intended clinical use.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1659-1666"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heart and great vessels segmentation in congenital heart disease via CNN and conditioned energy function postprocessing.","authors":"Jiaxuan Liu, Bolun Zeng, Xiaojun Chen","doi":"10.1007/s11548-024-03182-3","DOIUrl":"10.1007/s11548-024-03182-3","url":null,"abstract":"<p><strong>Purpose: </strong>The segmentation of the heart and great vessels in CT images of congenital heart disease (CHD) is critical for the clinical assessment of cardiac anomalies and the diagnosis of CHD. However, the diverse types and abnormalities inherent in CHD present significant challenges to comprehensive heart segmentation.</p><p><strong>Methods: </strong>We proposed a novel two-stage segmentation approach, integrating a Convolutional Neural Network (CNN) with a postprocessing method using a conditioned energy function for the pulmonary artery and aorta. The initial stage employs a CNN enhanced by a gated self-attention mechanism for the segmentation of five primary heart structures and two major vessels. Subsequently, the second stage utilizes a conditioned energy function specifically tailored to refine the segmentation of the pulmonary artery and aorta, ensuring vascular continuity.</p><p><strong>Results: </strong>Our method was evaluated on a public dataset including 110 3D CT volumes, encompassing 16 CHD variants. Compared to prevailing segmentation techniques (U-Net, V-Net, Unetr, dynUnet), our approach demonstrated improvements of 1.02, 1.04, and 1.41% in Dice Coefficient (DSC), Intersection over Union (IOU), and the 95th percentile Hausdorff Distance (HD95), respectively, for heart structure segmentation. For the two great vessels, the enhancements were 1.05, 1.07, and 1.42% in these metrics.</p><p><strong>Conclusion: </strong>The outcomes on the public dataset affirm the efficacy of our proposed segmentation method. Precise segmentation of the entire heart and great vessels can significantly aid in the diagnosis and treatment of CHD, underscoring the clinical relevance of our findings.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1597-1605"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141175775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design consideration on integration of mechanical intravascular ultrasound and electromagnetic tracking sensor for intravascular reconstruction.","authors":"Wenran Cai, Kazuaki Hara, Naoki Tomii, Etsuko Kobayashi, Takashi Ohya, Ichiro Sakuma","doi":"10.1007/s11548-024-03059-5","DOIUrl":"10.1007/s11548-024-03059-5","url":null,"abstract":"<p><strong>Purpose: </strong>Considering vessel deformation, endovascular navigation requires intraoperative geometric information. Mechanical intravascular ultrasound (IVUS) with an electromagnetic (EM) sensor can be used to reconstruct small-diameter blood vessels. However, the integration design should be evaluated based on the factors affecting the reconstruction error.</p><p><strong>Methods: </strong>The interference between the mechanical IVUS and EM sensor was measured in different relative positions. Two designs of the integrated catheter were evaluated by measuring the reconstruction errors using a rigid vascular phantom.</p><p><strong>Results: </strong>When the distance from the EM sensor to the field generator was 75 mm, the interference from mechanical IVUS to an EM sensor was negligible, with position and rotation errors less than 0.1 mm and 0.6°, respectively. The reconstructed vessel model for the proximal IVUS transducer had a smooth surface but an inaccurate shape at large curvature of the vascular phantom. When the distance to the field generator was 175 mm, the error increased significantly.</p><p><strong>Conclusion: </strong>Placing the IVUS transducer on the proximal side of the EM sensor is superior in terms of interference reduction but inferior in terms of mechanical stability compared to a distal transducer. The distal side is preferred due to better mechanical stability during catheter manipulation at larger curvature. With this configuration, surface reconstruction errors less than 1.7 mm (with RMS 0.57 mm) were achieved when the distance to the field generator was less than 175 mm.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1545-1554"},"PeriodicalIF":2.3,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139492882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}