BioLumin: An Immersive Mixed Reality Experience for Interactive Microscopic Visualization and Biomedical Research Annotation
Aviv Elor, Steve Whittaker, Sri Kurniawan, Sam Michael
ACM Transactions on Computing for Healthcare 3(1), pp. 1–28. Published July 18, 2022. DOI: 10.1145/3548777 (https://doi.org/10.1145/3548777)
Many recent breakthroughs in medical diagnostics and drug discovery arise from applying machine learning algorithms to large-scale data sets. However, a significant obstacle to such approaches is that they depend on high-quality annotations generated by domain experts. This study develops and evaluates BioLumin, a novel immersive mixed reality environment that lets users virtually shrink down to the microscopic level to navigate and annotate 3D reconstructed images. We discuss how domain experts were consulted in specifying a pipeline for automatic reconstruction of biological models in mixed reality environments, which drove the design of a 3D user interface (3DUI) system, and we explore whether such a system allows non-experts to annotate complex medical data accurately. To examine the usability and feasibility of BioLumin, we evaluated our prototype through a multi-stage mixed-method approach: three domain experts first offered expert reviews, and nineteen non-expert users then performed representative annotation tasks in a controlled setting. The results indicated that the mixed reality system was learnable and that non-experts could generate high-quality 3D annotations after a short training session. Lastly, we discuss design considerations for future tools like BioLumin in medical and broader scientific contexts.