Learning to Match 2D Keypoints Across Preoperative MR and Intraoperative Ultrasound

Hassan Rasheed, Reuben Dorent, Maximilian Fehrentz, Tina Kapur, William M. Wells III, Alexandra Golby, Sarah Frisken, Julia A. Schnabel, Nazim Haouchine

arXiv:2409.08169 · arXiv - CS - Computer Vision and Pattern Recognition · Published 2024-09-12 · Citations: 0
Abstract
We propose a texture-invariant 2D keypoint descriptor specifically designed for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images. We introduce a matching-by-synthesis strategy, in which intraoperative US images are synthesized from MR images while accounting for multiple MR modalities and for intraoperative US variability. We build our training set by enforcing keypoint localization across all images, then train a patient-specific descriptor network that learns texture-invariant discriminative features in a supervised contrastive manner, yielding robust keypoint descriptors. Our experiments on real cases with ground truth demonstrate the effectiveness of the proposed approach, which outperforms state-of-the-art methods and achieves 80.35% matching precision on average.
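To make the two key ingredients of the abstract concrete, the sketch below shows a generic supervised contrastive (InfoNCE-style) loss over paired MR/US keypoint descriptors, and a nearest-neighbor matching-precision metric of the kind reported above. This is a minimal NumPy illustration, not the authors' implementation: the function names, the temperature value, and the cosine-similarity matching rule are all assumptions.

```python
import numpy as np

def supervised_contrastive_loss(desc_mr, desc_us, temperature=0.07):
    """InfoNCE-style supervised contrastive loss over keypoint descriptors.

    desc_mr, desc_us: (N, D) arrays; row i of each array is assumed to
    describe the same anatomical keypoint (a positive pair), and every
    other row serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = desc_mr / np.linalg.norm(desc_mr, axis=1, keepdims=True)
    b = desc_us / np.linalg.norm(desc_us, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal (the true match) as the target.
    return float(-np.mean(np.diag(log_prob)))

def matching_precision(desc_mr, desc_us, true_match):
    """Fraction of MR keypoints whose nearest US descriptor is the true match."""
    a = desc_mr / np.linalg.norm(desc_mr, axis=1, keepdims=True)
    b = desc_us / np.linalg.norm(desc_us, axis=1, keepdims=True)
    predicted = np.argmax(a @ b.T, axis=1)  # nearest US neighbor per MR keypoint
    return float(np.mean(predicted == true_match))
```

Training with such a loss pulls a keypoint's MR and synthesized-US descriptors together while pushing apart descriptors of different keypoints, which is what makes nearest-neighbor matching at test time meaningful.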