Design and validation of a navigation system of multimodal medical images for neurosurgery based on mixed reality

Zeyang Zhou, Zhiyong Yang, Shan Jiang, Tao Zhu, Shixing Ma, Yuhua Li, Jie Zhuo

Visual Informatics, 7(2), pp. 64–71, June 2023. DOI: 10.1016/j.visinf.2023.05.003
Citations: 0
Abstract
Purpose:
This paper develops a navigation system based on mixed reality that displays multimodal medical images in an immersive environment and helps surgeons precisely locate the target area and the surrounding critical tissues.
Methods:
In this system, medical images are processed so that they display properly in mixed reality. High-quality models of cerebral vessels and nerve fibers are reconstructed with appropriate colors and exported to the mixed reality environment. Multimodal images and models are registered and fused, and their key information is extracted. The processed images are then fused with the real patient in a common coordinate system to guide the surgery.
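The paper does not publish its registration algorithm, but aligning preoperative image space with the patient's physical space via paired fiducial points is commonly done with point-based rigid registration (the Kabsch/Horn method). The following is a minimal illustrative sketch under that assumption; the function names and the use of NumPy are ours, not the authors':

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t mapping src points onto dst
    (Kabsch method: centroid removal + SVD of the cross-covariance)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def registration_error(src, dst, R, t):
    """Mean and standard deviation of per-fiducial residual distances,
    the kind of statistics the phantom experiments report."""
    residuals = np.linalg.norm((R @ np.asarray(src, float).T).T + t - dst, axis=1)
    return residuals.mean(), residuals.std()
```

With exact correspondences the residuals are zero; in practice, localization noise on the fiducials produces the millimeter-scale errors reported below.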
Results:
The multimodal image system was designed and validated. In phantom experiments, the average preoperative registration error is 1.003 mm with a standard deviation of 0.096 mm, and the average proportion of well-registered areas is 94.9%. In patient experiments, the participating surgeons generally indicated that the system performs well and has strong application prospects for neurosurgery.
Conclusion:
This article proposes a mixed-reality-based navigation system for multimodal medical images in neurosurgery. Compared with other navigation methods, it helps surgeons locate the target area and the surrounding critical tissues more precisely and rapidly.