Journal of Medical Imaging — Latest Publications

CMNet: deep learning model for colon polyp segmentation based on dual-branch structure.
IF 2.4
Journal of Medical Imaging Pub Date : 2024-03-01 Epub Date: 2024-03-23 DOI: 10.1117/1.JMI.11.2.024004
Xuguang Cao, Kefeng Fan, Cun Xu, Huilin Ma, Kaijie Jiao
{"title":"CMNet: deep learning model for colon polyp segmentation based on dual-branch structure.","authors":"Xuguang Cao, Kefeng Fan, Cun Xu, Huilin Ma, Kaijie Jiao","doi":"10.1117/1.JMI.11.2.024004","DOIUrl":"10.1117/1.JMI.11.2.024004","url":null,"abstract":"<p><strong>Purpose: </strong>Colon cancer is one of the top three diseases in gastrointestinal cancers, and colon polyps are an important trigger of colon cancer. Early diagnosis and removal of colon polyps can avoid the incidence of colon cancer. Currently, colon polyp removal surgery is mainly based on artificial-intelligence (AI) colonoscopy, supplemented by deep-learning technology to help doctors remove colon polyps. With the development of deep learning, the use of advanced AI technology to assist in medical diagnosis has become mainstream and can maximize the doctor's diagnostic time and help doctors to better formulate medical plans.</p><p><strong>Approach: </strong>We propose a deep-learning model for segmenting colon polyps. The model adopts a dual-branch structure, combines a convolutional neural network (CNN) with a transformer, and replaces ordinary convolution with deeply separable convolution based on ResNet; a stripe pooling module is introduced to obtain more effective information. The aggregated attention module (AAM) is proposed for high-dimensional semantic information, which effectively combines two different structures for the high-dimensional information fusion problem. Deep supervision and multi-scale training are added in the model training process to enhance the learning effect and generalization performance of the model.</p><p><strong>Results: </strong>The experimental results show that the proposed dual-branch structure is significantly better than the single-branch structure, and the model using the AAM has a significant performance improvement over the model not using the AAM. Our model leads 1.1% and 1.5% in mIoU and mDice, respectively, when compared with state-of-the-art models in a fivefold cross-validation on the Kvasir-SEG dataset.</p><p><strong>Conclusions: </strong>We propose and validate a deep learning model for segmenting colon polyps, using a dual-branch network structure. Our results demonstrate the feasibility of complementing traditional CNNs and transformer with each other. And we verified the feasibility of fusing different structures on high-dimensional semantics and successfully retained the high-dimensional information of different structures effectively.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960180/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140207951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
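As a concrete illustration of the dual-branch design described in the CMNet abstract above, here is a minimal PyTorch sketch that pairs a CNN branch built from depthwise separable convolutions with a transformer branch and fuses their features. The class names, layer sizes, and the simple 1x1-convolution fusion standing in for the paper's aggregated attention module are illustrative assumptions, not the authors' published implementation.

```python
# Minimal dual-branch (CNN + transformer) segmentation sketch -- not the authors' CMNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv (replaces ordinary conv)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


class DualBranchSegNet(nn.Module):
    """CNN branch for local detail, transformer branch for global context."""
    def __init__(self, in_ch=3, dim=64, num_classes=1):
        super().__init__()
        self.cnn_branch = nn.Sequential(
            DepthwiseSeparableConv(in_ch, dim),
            DepthwiseSeparableConv(dim, dim),
        )
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=8, stride=8)  # patch embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Stand-in for the paper's aggregated attention module: a 1x1 conv fusion.
        self.fuse = nn.Conv2d(2 * dim, dim, 1)
        self.head = nn.Conv2d(dim, num_classes, 1)

    def forward(self, x):
        local_feat = self.cnn_branch(x)                      # (B, dim, H, W)
        tokens = self.embed(x)                               # (B, dim, H/8, W/8)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)              # (B, h*w, dim)
        global_feat = self.transformer(seq).transpose(1, 2).reshape(b, c, h, w)
        global_feat = F.interpolate(global_feat, size=local_feat.shape[-2:],
                                    mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        return self.head(fused)                              # polyp logits


if __name__ == "__main__":
    model = DualBranchSegNet()
    mask_logits = model(torch.randn(1, 3, 128, 128))
    print(mask_logits.shape)  # torch.Size([1, 1, 128, 128])
```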
Task-based transferable deep-learning scatter correction in cone beam computed tomography: a simulation study.
IF 2.4
Journal of Medical Imaging Pub Date : 2024-03-01 Epub Date: 2024-03-23 DOI: 10.1117/1.JMI.11.2.024006
Juan P Cruz-Bastida, Fernando Moncada, Arnulfo Martínez-Dávalos, Mercedes Rodríguez-Villafuerte
{"title":"Task-based transferable deep-learning scatter correction in cone beam computed tomography: a simulation study.","authors":"Juan P Cruz-Bastida, Fernando Moncada, Arnulfo Martínez-Dávalos, Mercedes Rodríguez-Villafuerte","doi":"10.1117/1.JMI.11.2.024006","DOIUrl":"10.1117/1.JMI.11.2.024006","url":null,"abstract":"<p><strong>Purpose: </strong>X-ray scatter significantly affects the image quality of cone beam computed tomography (CBCT). Although convolutional neural networks (CNNs) have shown promise in correcting x-ray scatter, their effectiveness is hindered by two main challenges: the necessity for extensive datasets and the uncertainty regarding model generalizability. This study introduces a task-based paradigm to overcome these obstacles, enhancing the application of CNNs in scatter correction.</p><p><strong>Approach: </strong>Using a CNN with U-net architecture, the proposed methodology employs a two-stage training process for scatter correction in CBCT scans. Initially, the CNN is pre-trained on approximately 4000 image pairs from geometric phantom projections, then fine-tuned using transfer learning (TL) on 250 image pairs of anthropomorphic projections, enabling task-specific adaptations with minimal data. 2D scatter ratio (SR) maps from projection data were considered as CNN targets, and such maps were used to perform the scatter prediction. The fine-tuning process for specific imaging tasks, like head and neck imaging, involved simulating scans of an anthropomorphic phantom and pre-processing the data for CNN retraining.</p><p><strong>Results: </strong>For the pre-training stage, it was observed that SR predictions were quite accurate (<math><mrow><mi>SSIM</mi><mo>≥</mo><mn>0.9</mn></mrow></math>). The accuracy of SR predictions was further improved after TL, with a relatively short retraining time (<math><mrow><mo>≈</mo><mn>70</mn></mrow></math> times faster than pre-training) and using considerably fewer samples compared to the pre-training dataset (<math><mrow><mo>≈</mo><mn>12</mn></mrow></math> times smaller).</p><p><strong>Conclusions: </strong>A fast and low-cost methodology to generate task-specific CNN for scatter correction in CBCT was developed. CNN models trained with the proposed methodology were successful to correct x-ray scatter in anthropomorphic structures, unknown to the network, for simulated data.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10960584/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140207953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
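The two-stage training the abstract describes (broad pre-training, then small-sample transfer learning) can be sketched as follows. This is a self-contained PyTorch illustration with a tiny encoder-decoder standing in for the paper's U-net, random tensors standing in for projection/scatter-ratio pairs, and an encoder-freezing fine-tune as one common transfer-learning choice; all of these are assumptions rather than the authors' pipeline.

```python
# Two-stage training sketch: pre-train on phantom data, fine-tune on a small patient-like set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class TinyUNet(nn.Module):
    """Small encoder-decoder stand-in for the paper's U-net (architecture assumed)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear"),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train(model, loader, epochs, lr):
    """Supervised regression of scatter-ratio (SR) maps from projections."""
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for projection, sr_map in loader:
            opt.zero_grad()
            loss_fn(model(projection), sr_map).backward()
            opt.step()
    return model


# Dummy data standing in for projection / SR-map pairs (the paper uses ~4000 and ~250 pairs).
phantom_loader = DataLoader(TensorDataset(torch.rand(64, 1, 64, 64),
                                          torch.rand(64, 1, 64, 64)), batch_size=8)
patient_loader = DataLoader(TensorDataset(torch.rand(16, 1, 64, 64),
                                          torch.rand(16, 1, 64, 64)), batch_size=8)

# Stage 1: pre-training on geometric-phantom projections.
model = train(TinyUNet(), phantom_loader, epochs=5, lr=1e-3)

# Stage 2: transfer learning on the small anthropomorphic set. Freezing the encoder
# is one common TL choice; the authors may fine-tune differently.
for p in model.encoder.parameters():
    p.requires_grad = False
model = train(model, patient_loader, epochs=2, lr=1e-4)

# Scatter correction at inference, assuming SR = scatter / total measured signal
# (the paper's exact SR definition may differ).
projection = torch.rand(1, 1, 64, 64)
corrected = projection - model(projection) * projection
```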
Mobile infrared slit-light scanner for rapid eye disease screening.
IF 2.4
Journal of Medical Imaging Pub Date : 2024-03-01 Epub Date: 2024-04-10 DOI: 10.1117/1.JMI.11.2.026003
Neelam Kaushik, Parmanand Sharma, Noriko Himori, Takuro Matsumoto, Takehiro Miya, Toru Nakazawa
{"title":"Mobile infrared slit-light scanner for rapid eye disease screening.","authors":"Neelam Kaushik, Parmanand Sharma, Noriko Himori, Takuro Matsumoto, Takehiro Miya, Toru Nakazawa","doi":"10.1117/1.JMI.11.2.026003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.2.026003","url":null,"abstract":"<p><strong>Purpose: </strong>Timely detection and treatment of visual impairments and age-related eye diseases are essential for maintaining a longer, healthier life. However, the shortage of appropriate medical equipment often impedes early detection. We have developed a portable self-imaging slit-light device utilizing NIR light and a scanning mirror. The objective of our study is to assess the accuracy and compare the performance of our device with conventional nonportable slit-lamp microscopes and anterior segment optical coherence tomography (AS-OCT) for screening and remotely diagnosing eye diseases, such as cataracts and glaucoma, outside of an eye clinic.</p><p><strong>Approach: </strong>The NIR light provides an advantage as measurements are nonmydriatic and less traumatic for patients. A cross-sectional study involving Japanese adults was conducted. Cataract evaluation was performed using photographs captured by the device. Van-Herick grading was assessed by the ratio of peripheral anterior chamber depth to peripheral corneal thickness, in addition to the iridocorneal angle using Image J software.</p><p><strong>Results: </strong>The correlation coefficient between values obtained by AS-OCT, and our fabricated portable scanning slit-light device was notably high. The results indicate that our portable device is equally reliable as the conventional nonportable slit-lamp microscope and AS-OCT for screening and evaluating eye diseases.</p><p><strong>Conclusions: </strong>Our fabricated device matches the functionality of the traditional slit lamp, offering a cost-effective and portable solution. Ideal for remote locations, healthcare facilities, or areas affected by disasters, our scanning slit-light device can provide easy access to initial eye examinations and supports digital eye healthcare initiatives.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003872/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140870690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
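The quantitative pieces of this study's approach are a Van Herick-style ratio (peripheral anterior chamber depth divided by peripheral corneal thickness) and a correlation of the device's values against AS-OCT. A small NumPy sketch of both calculations is below; the measurement values are invented for illustration, and the study itself performed its measurements in ImageJ.

```python
import numpy as np


def van_herick_ratio(ac_depth_mm, corneal_thickness_mm):
    """Ratio of peripheral anterior chamber depth to peripheral corneal thickness."""
    return ac_depth_mm / corneal_thickness_mm


# Hypothetical paired measurements (mm) from the portable device and AS-OCT.
device_acd = np.array([0.42, 0.15, 0.60, 0.28, 0.51])
device_ct = np.array([0.55, 0.52, 0.58, 0.54, 0.56])
asoct_acd = np.array([0.44, 0.17, 0.58, 0.30, 0.49])
asoct_ct = np.array([0.56, 0.53, 0.57, 0.55, 0.55])

device_ratio = van_herick_ratio(device_acd, device_ct)
asoct_ratio = van_herick_ratio(asoct_acd, asoct_ct)

# Pearson correlation between the two devices' ratios (the abstract reports a
# notably high correlation between the portable scanner and AS-OCT).
r = np.corrcoef(device_ratio, asoct_ratio)[0, 1]
print(f"device vs AS-OCT ratios: r = {r:.3f}")
```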
Comparison study of intraoperative surface acquisition methods on registration accuracy for soft-tissue surgical navigation.
IF 1.9
Journal of Medical Imaging Pub Date : 2024-03-01 Epub Date: 2024-03-04 DOI: 10.1117/1.JMI.11.2.025001
Bowen Xiang, Jon S Heiselman, Winona L Richey, Michael I D'Angelica, Alice Wei, T Peter Kingham, Frankangel Servin, Kyvia Pereira, Sunil K Geevarghese, William R Jarnagin, Michael I Miga
{"title":"Comparison study of intraoperative surface acquisition methods on registration accuracy for soft-tissue surgical navigation.","authors":"Bowen Xiang, Jon S Heiselman, Winona L Richey, Michael I D'Angelica, Alice Wei, T Peter Kingham, Frankangel Servin, Kyvia Pereira, Sunil K Geevarghese, William R Jarnagin, Michael I Miga","doi":"10.1117/1.JMI.11.2.025001","DOIUrl":"10.1117/1.JMI.11.2.025001","url":null,"abstract":"<p><strong>Purpose: </strong>To study the difference between rigid registration and nonrigid registration using two forms of digitization (contact and noncontact) in human <i>in vivo</i> liver surgery.</p><p><strong>Approach: </strong>A Conoprobe device attachment and sterilization process was developed to enable prospective noncontact intraoperative acquisition of organ surface data in the operating room (OR). The noncontact Conoprobe digitization method was compared against stylus-based acquisition in the context of image-to-physical registration for image-guided surgical navigation. Data from <math><mrow><mi>n</mi><mo>=</mo><mn>10</mn></mrow></math> patients undergoing liver resection were analyzed under an Institutional Review Board-approved study at Memorial Sloan Kettering Cancer Center. Organ surface coverage of each surface acquisition method was compared. Registration accuracies resulting from the acquisition techniques were compared for (1) rigid registration method (RRM), (2) model-based nonrigid registration method (NRM) using surface data only, and (3) NRM with one subsurface feature (vena cava) from tracked intraoperative ultrasound (NRM-VC). Novel vessel centerline and tumor targets were segmented and compared to their registered preoperative counterparts for accuracy validation.</p><p><strong>Results: </strong>Surface data coverage collected by stylus and Conoprobe were <math><mrow><mn>24.6</mn><mo>%</mo><mo>±</mo><mn>6.4</mn><mo>%</mo></mrow></math> and <math><mrow><mn>19.6</mn><mo>%</mo><mo>±</mo><mn>5.0</mn><mo>%</mo></mrow></math>, respectively. The average difference between stylus data and Conoprobe data using NRM was <math><mrow><mo>-</mo><mn>1.05</mn><mtext>  </mtext><mi>mm</mi></mrow></math> and using NRM-VC was <math><mrow><mo>-</mo><mn>1.42</mn><mtext>  </mtext><mi>mm</mi></mrow></math>, indicating the registrations to Conoprobe data performed worse than to stylus data with both NRM approaches. However, using the stylus and Conoprobe acquisition methods led to significant improvement of NRM-VC over RRM by average differences of 4.48 and 3.66 mm, respectively.</p><p><strong>Conclusion: </strong>The first use of a sterile-field amenable Conoprobe surface acquisition strategy in the OR is reported for open liver surgery. Under clinical conditions, the nonrigid registration significantly outperformed standard-of-care rigid registration, and acquisition by contact-based stylus and noncontact-based Conoprobe produced similar registration results. 
The accuracy benefits of noncontact surface acquisition with a Conoprobe are likely obscured by inferior data coverage and intrinsic noise within acquisition systems.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":1.9,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10911768/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
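The baseline in this study is a standard rigid (rotation plus translation) image-to-physical registration. For readers unfamiliar with that step, here is a minimal NumPy sketch of rigid point-set alignment via the Kabsch/SVD solution for point sets with known correspondences. This is a generic illustration under assumed synthetic data; the authors' actual pipeline (correspondence search over salient features, and the model-based nonrigid method) is considerably more involved.

```python
import numpy as np


def rigid_register(source, target):
    """Least-squares rigid transform (R, t) aligning source -> target.
    Both inputs are (N, 3) arrays of corresponding points (Kabsch / SVD solution)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t


# Illustrative check: recover a known rotation/translation from noisy synthetic surface points.
rng = np.random.default_rng(0)
preop_surface = rng.uniform(-50, 50, size=(200, 3))            # mm, synthetic
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])
intraop_surface = preop_surface @ R_true.T + t_true + rng.normal(0, 0.5, (200, 3))

R, t = rigid_register(preop_surface, intraop_surface)
aligned = preop_surface @ R.T + t
rmse = np.sqrt(np.mean(np.sum((aligned - intraop_surface) ** 2, axis=1)))
print(f"post-registration RMSE: {rmse:.2f} mm")
```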
Estimation of radiographic joint space of the trapeziometacarpal joint with computed tomographic validation.
IF 2.4
Journal of Medical Imaging Pub Date : 2024-03-01 Epub Date: 2024-03-04 DOI: 10.1117/1.JMI.11.2.024001
David Jordan, John Elfar, Chian K Kwoh, Zong-Ming Li
{"title":"Estimation of radiographic joint space of the trapeziometacarpal joint with computed tomographic validation.","authors":"David Jordan, John Elfar, Chian K Kwoh, Zong-Ming Li","doi":"10.1117/1.JMI.11.2.024001","DOIUrl":"10.1117/1.JMI.11.2.024001","url":null,"abstract":"<p><strong>Purpose: </strong>Joint space width (JSW) is a common metric used to evaluate joint structure on plain radiographs. For the hand, quantitative techniques are available for evaluation of the JSW of finger joints; however, such techniques have been difficult to establish for the trapeziometacarpal (TMC) joint. This study aimed to develop a validated method for measuring the radiographic joint space of the healthy TMC joint.</p><p><strong>Approach: </strong>Computed tomographic scans were taken of 15 cadaveric hands. The location of a JSW analysis region on the articular surface of the first metacarpal was established in 3D space and standardized in a 2D projection. The standardized region was applied to simulated radiographic images. A correction factor was defined as the ratio of the CT-based and radiograph-based joint space measurements. Leave-one-out validation was used to correct the radiograph-based measurements. A t-test was used to evaluate the difference between CT-based and corrected radiograph-based measurements (<math><mrow><mi>α</mi><mo>=</mo><mn>0.05</mn></mrow></math>).</p><p><strong>Results: </strong>The CT-based and radiograph-based measurements of JSW were <math><mrow><mn>3.61</mn><mo>±</mo><mn>0.72</mn><mtext>  </mtext><mi>mm</mi></mrow></math> and <math><mrow><mn>2.18</mn><mo>±</mo><mn>0.40</mn><mtext>  </mtext><mi>mm</mi></mrow></math>, respectively. The correction factor for radiograph-based joint space was <math><mrow><mn>1.69</mn><mo>±</mo><mn>0.41</mn></mrow></math>. Before correction, the difference between the CT-based and radiograph-based joint space was 1.43 mm [95% CI: <math><mrow><mn>0.99</mn><mo>-</mo><mn>1.86</mn><mtext>  </mtext><mi>mm</mi></mrow></math>; <math><mrow><mi>p</mi><mo><</mo><mn>0.001</mn></mrow></math>]. After correction, the difference was <math><mrow><mo>-</mo><mn>0.11</mn><mtext>  </mtext><mi>mm</mi></mrow></math> [95% CI: <math><mrow><mo>-</mo><mn>0.63</mn><mo>-</mo><mn>0.41</mn><mtext>  </mtext><mi>mm</mi></mrow></math>; <math><mrow><mi>p</mi><mo>=</mo><mn>0.669</mn></mrow></math>].</p><p><strong>Conclusions: </strong>Corrected measurements of radiographic TMC JSW agreed well with CT-measured JSW. With <i>in-vivo</i> validation, the developed methodology has potential for automated and accurate radiographic measurement of TMC JSW.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10911767/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
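The correction here is a simple scale factor: radiographic JSW reads smaller than CT-based JSW, so each radiographic measurement is multiplied by the CT-to-radiograph ratio estimated with leave-one-out validation. The snippet below is only a back-of-the-envelope check using the group means quoted in the abstract, not the paper's per-specimen analysis.

```python
# Back-of-the-envelope check using the group means reported in the abstract.
jsw_ct = 3.61             # mean CT-based TMC joint space width, mm
jsw_xray = 2.18           # mean radiograph-based JSW, mm
reported_factor = 1.69    # mean per-specimen correction factor from the paper

ratio_of_means = jsw_ct / jsw_xray          # ~1.66, close to the reported 1.69
corrected = reported_factor * jsw_xray      # ~3.68 mm after correction
print(f"ratio of means: {ratio_of_means:.2f}")
print(f"corrected radiographic JSW: {corrected:.2f} mm (CT-based mean: {jsw_ct:.2f} mm)")
# The residual here (CT minus corrected, about -0.07 mm) is close to the paper's
# leave-one-out result of -0.11 mm; the gap reflects averaging per-specimen
# factors rather than taking a single ratio of group means.
```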
Dual-energy computed tomography imaging with megavoltage and kilovoltage X-ray spectra.
IF 2.4
Journal of Medical Imaging Pub Date : 2024-03-01 Epub Date: 2024-03-04 DOI: 10.1117/1.JMI.11.2.023501
Giavanna Jadick, Geneva Schlafly, Patrick J La Rivière
{"title":"Dual-energy computed tomography imaging with megavoltage and kilovoltage X-ray spectra.","authors":"Giavanna Jadick, Geneva Schlafly, Patrick J La Rivière","doi":"10.1117/1.JMI.11.2.023501","DOIUrl":"10.1117/1.JMI.11.2.023501","url":null,"abstract":"<p><strong>Purpose: </strong>Single-energy computed tomography (CT) often suffers from poor contrast yet remains critical for effective radiotherapy treatment. Modern therapy systems are often equipped with both megavoltage (MV) and kilovoltage (kV) X-ray sources and thus already possess hardware for dual-energy (DE) CT. There is unexplored potential for enhanced image contrast using MV-kV DE-CT in radiotherapy contexts.</p><p><strong>Approach: </strong>A single-line integral toy model was designed for computing basis material signal-to-noise ratio (SNR) using estimation theory. Five dose-matched spectra (3 kV, 2 MV) and three variables were considered: spectral combination, spectral dose allocation, and object material composition. The single-line model was extended to a simulated CT acquisition of an anthropomorphic phantom with and without a metal implant. Basis material sinograms were computed and synthesized into virtual monoenergetic images (VMIs). MV-kV and kV-kV VMIs were compared with single-energy images.</p><p><strong>Results: </strong>The 80 kV-140 kV pair typically yielded the best SNRs, but for bone thicknesses <math><mrow><mo>></mo><mn>8</mn><mtext>  </mtext><mi>cm</mi></mrow></math>, the detunedMV-80 kV pair surpassed it. Peak MV-kV SNR was achieved with <math><mrow><mo>∼</mo><mn>90</mn><mo>%</mo></mrow></math> dose allocated to the MV spectrum. In CT simulations of the pelvis with a steel implant, MV-kV VMIs yielded a higher contrast-to-noise ratio (CNR) than single-energy CT and kV-kV DE-CT. Without steel, the MV-kV VMIs produced higher contrast but lower CNR than single-energy CT.</p><p><strong>Conclusions: </strong>This work analyzes MV-kV DE-CT imaging and assesses its potential advantages. The technique may be used for metal artifact correction and generation of VMIs with higher native contrast than single-energy CT. Improved denoising is generally necessary for greater CNR without metal.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10910563/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
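Once the basis material images are available, synthesizing a virtual monoenergetic image is a pixelwise weighted sum of the basis maps using each material's attenuation coefficient at the chosen energy. The NumPy sketch below shows only that synthesis step on made-up basis maps with placeholder attenuation values of roughly the right magnitude; it is not the authors' estimation-theory SNR model or their decomposition code.

```python
import numpy as np


def synthesize_vmi(basis_water, basis_bone, mu_water, mu_bone):
    """Virtual monoenergetic image as a weighted sum of basis-material images.
    basis_* are unitless volume-fraction maps; mu_* are linear attenuation
    coefficients (1/cm) of each basis material at the chosen energy."""
    return basis_water * mu_water + basis_bone * mu_bone


# Made-up 128x128 basis maps: a water disk with a small bone insert.
yy, xx = np.mgrid[-64:64, -64:64]
basis_water = (xx**2 + yy**2 < 60**2).astype(float)
basis_bone = ((xx - 20)**2 + yy**2 < 10**2).astype(float)
basis_water -= basis_bone          # the bone region is bone, not water

# Placeholder coefficients near 70 keV (1/cm); real values would come from
# tabulated attenuation data (e.g., NIST) for the chosen monoenergetic energy.
vmi_70kev = synthesize_vmi(basis_water, basis_bone, mu_water=0.19, mu_bone=0.55)
print(vmi_70kev.shape, float(vmi_70kev.max()))
```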
Graph convolutional networks for automated intracranial artery labeling
IF 2.4
Journal of Medical Imaging Pub Date : 2024-02-15 DOI: 10.1117/1.jmi.11.1.014007
I. Vos, Y. Ruigrok, Ishaan Bhat, K. Timmins, B. Velthuis, Hugo J. Kuijf
{"title":"Graph convolutional networks for automated intracranial artery labeling","authors":"I. Vos, Y. Ruigrok, Ishaan Bhat, K. Timmins, B. Velthuis, Hugo J. Kuijf","doi":"10.1117/1.jmi.11.1.014007","DOIUrl":"https://doi.org/10.1117/1.jmi.11.1.014007","url":null,"abstract":"","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139835081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of an augmented reality navigational guidance platform for percutaneous procedures in a cadaver model
IF 2.4
Journal of Medical Imaging Pub Date : 2024-02-15 DOI: 10.1117/1.jmi.11.6.062602
Gaurav Gadodia, Michael Evans, C. Weunski, Amy Ho, Adam Cargill, Charles Martin
{"title":"Evaluation of an augmented reality navigational guidance platform for percutaneous procedures in a cadaver model","authors":"Gaurav Gadodia, Michael Evans, C. Weunski, Amy Ho, Adam Cargill, Charles Martin","doi":"10.1117/1.jmi.11.6.062602","DOIUrl":"https://doi.org/10.1117/1.jmi.11.6.062602","url":null,"abstract":"","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139835937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0