S. Souipas, Anh M Nguyen, Stephen Laws, Brian Davies, F. Rodriguez y Baena
{"title":"SimPS-Net:手术工具的同步姿态和分割网络","authors":"S. Souipas, Anh M Nguyen, Stephen Laws, Brian Davies, F. Rodriguez y Baena","doi":"10.31256/hsmr2023.36","DOIUrl":null,"url":null,"abstract":"Image-based detection and localisation of surgical tools has received significant attention due to the development of rele- vant deep learning techniques, along with recent upgrades in computational capabilities. Although not as accurate as optical trackers [1], image-based methods are easy to deploy, and require no surgical tool redesign to accommodate trackable markers, which could be beneficial when it comes to cheaper, “off-the-shelf” tools, such as scalpels and scissors. In the operating room however, these techniques suffer from drawbacks due to the presence of highly reflective or featureless materials, but also occlusions, such as smoke and blood. Furthermore, networks often utilise tool 3D models (e.g. CAD data), not only for the purpose of point correspon- dence, but also for pose regression. The aforementioned “off- the-shelf” tools are scarcely accompanied by such prior 3D structure data. Ultimately, in addition to the above hindrances, estimating 3D pose using a monocular camera setup, poses a challenge in itself due to the lack of depth information. Con- sidering these limitations, we present SimPS-Net, a network capable of both detection and 3D pose estimation of standard surgical tools using a single RGB camera.","PeriodicalId":129686,"journal":{"name":"Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SimPS-Net: Simultaneous Pose & Segmentation Network of Surgical Tools\",\"authors\":\"S. Souipas, Anh M Nguyen, Stephen Laws, Brian Davies, F. Rodriguez y Baena\",\"doi\":\"10.31256/hsmr2023.36\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Image-based detection and localisation of surgical tools has received significant attention due to the development of rele- vant deep learning techniques, along with recent upgrades in computational capabilities. Although not as accurate as optical trackers [1], image-based methods are easy to deploy, and require no surgical tool redesign to accommodate trackable markers, which could be beneficial when it comes to cheaper, “off-the-shelf” tools, such as scalpels and scissors. In the operating room however, these techniques suffer from drawbacks due to the presence of highly reflective or featureless materials, but also occlusions, such as smoke and blood. Furthermore, networks often utilise tool 3D models (e.g. CAD data), not only for the purpose of point correspon- dence, but also for pose regression. The aforementioned “off- the-shelf” tools are scarcely accompanied by such prior 3D structure data. Ultimately, in addition to the above hindrances, estimating 3D pose using a monocular camera setup, poses a challenge in itself due to the lack of depth information. 
Con- sidering these limitations, we present SimPS-Net, a network capable of both detection and 3D pose estimation of standard surgical tools using a single RGB camera.\",\"PeriodicalId\":129686,\"journal\":{\"name\":\"Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023\",\"volume\":\"2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.31256/hsmr2023.36\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.31256/hsmr2023.36","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SimPS-Net: Simultaneous Pose & Segmentation Network of Surgical Tools
Image-based detection and localisation of surgical tools have received significant attention due to the development of relevant deep learning techniques, along with recent upgrades in computational capabilities. Although not as accurate as optical trackers [1], image-based methods are easy to deploy and require no surgical tool redesign to accommodate trackable markers, which could be beneficial for cheaper, “off-the-shelf” tools such as scalpels and scissors. In the operating room, however, these techniques suffer from drawbacks due to the presence of highly reflective or featureless materials, as well as occlusions such as smoke and blood. Furthermore, networks often utilise tool 3D models (e.g. CAD data), not only for point correspondence but also for pose regression; the aforementioned “off-the-shelf” tools are scarcely accompanied by such prior 3D structure data. Ultimately, in addition to the above hindrances, estimating 3D pose with a monocular camera setup is a challenge in itself due to the lack of depth information. Considering these limitations, we present SimPS-Net, a network capable of both detection and 3D pose estimation of standard surgical tools using a single RGB camera.
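The abstract does not describe the SimPS-Net architecture itself, but the stated capability (segmentation plus 3D pose estimation from a single RGB frame) is the kind of task typically handled by a shared encoder feeding two task-specific heads. The following PyTorch sketch is purely illustrative of that general pattern: the ResNet-18 backbone, the head layouts, and the 6-DoF pose parametrisation are assumptions made for exposition, not the authors' design.

```python
# Hypothetical two-headed network for joint tool segmentation and monocular
# 6-DoF pose regression. Backbone, head shapes, and pose parametrisation are
# illustrative assumptions; they are NOT taken from the SimPS-Net paper.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoHeadToolNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared convolutional backbone (ResNet-18 chosen here for brevity).
        backbone = models.resnet18(weights=None)
        # Drop the average-pool and fully connected layers, keeping the
        # convolutional feature extractor: output is [B, 512, H/32, W/32].
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Segmentation head: 1x1 conv to per-class logits, then upsample
        # back to the input resolution for a per-pixel tool mask.
        self.seg_head = nn.Sequential(
            nn.Conv2d(512, num_classes, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )
        # Pose head: regress 6 values (3 translation + 3 rotation parameters)
        # from globally pooled features. Real systems often prefer richer
        # rotation representations (e.g. quaternions or a 6D encoding).
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(512, 6),
        )

    def forward(self, rgb: torch.Tensor):
        feats = self.encoder(rgb)          # shared features for both tasks
        seg_logits = self.seg_head(feats)  # [B, num_classes, H, W]
        pose = self.pose_head(feats)       # [B, 6] = (tx, ty, tz, rx, ry, rz)
        return seg_logits, pose


# Usage: a single RGB frame in, a segmentation map and a 6-DoF pose out.
net = TwoHeadToolNet()
frame = torch.randn(1, 3, 224, 224)
masks, pose = net(frame)
```

Sharing one encoder between the two heads is the usual motivation for "simultaneous" designs: the pose head can exploit the same features that localise the tool, and no depth sensor or per-tool CAD model is required at inference, only the single RGB input the abstract describes.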