Flakë Bajraktari, Kathrin Fleissner, Peter P. Pott
Current Directions in Biomedical Engineering, 2023-09-01. DOI: 10.1515/cdbme-2023-1150
A comparison of two CNN-based instrument detection approaches for automated surgical assistance systems
Abstract The shortage of operating room technicians has led to a growing demand for automated systems in the OR to maintain the quality of care. Robotic scrub nurse (RSN) systems, which perform tasks such as handling instruments and documenting the surgery, are increasingly being developed. While research has focused on detecting instruments in the hands of surgical staff or on recognizing surgical phases, detecting instruments on the instrument tray remains underexplored. This study therefore proposes and evaluates two distinct methodologies for instrument detection on the OR table using the deep learning approaches YOLOv5 and Mask R-CNN. The performance of the two approaches was evaluated on 18 YOLOv5 models and 12 Mask R-CNN models, differing mainly in model size. Two sets of instruments were used to assess the generalizability of the models. The results show a mean average precision (mAP) of 0.978 for YOLOv5 and 0.846 for Mask R-CNN on the test dataset comprising three classes. mAPs of 0.874 and 0.707, respectively, were computed for the test dataset comprising six classes. The study compares the performance of two suitable approaches for instrument detection on the instrument tray in the OR to support the development of RSN systems.
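The abstract reports its results as mean average precision (mAP), the standard metric for object detectors such as YOLOv5 and Mask R-CNN. As background, the sketch below shows the two building blocks of that metric: intersection-over-union (IoU) for matching predicted boxes to ground truth, and per-class average precision (AP) as the area under the interpolated precision-recall curve; mAP is then the mean of AP over all classes. This is a minimal, generic illustration (VOC-style all-point interpolation), not the authors' evaluation code, and the function names are hypothetical.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(recalls, precisions):
    """Area under the interpolated precision-recall curve (all-point interpolation).

    `recalls` and `precisions` are parallel arrays sorted by descending
    detection confidence; AP for one class, mAP = mean of AP over classes.
    """
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Interpolate: make the precision envelope monotonically decreasing.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the rectangular areas where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

A detector that finds every instrument with no false positives (precision 1.0 at every recall level) scores AP = 1.0; lower scores, such as the 0.707 reported for Mask R-CNN on six classes, reflect missed or misclassified instruments on the tray.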