{"title":"A multi-modal interface for an interactive simulated vascular reconstruction system","authors":"E. Zudilova-Seinstra, P. Sloot, R. Belleman","doi":"10.1109/ICMI.2002.1167013","DOIUrl":null,"url":null,"abstract":"This paper is devoted to multi-modal interface design and implementation of a simulated vascular reconstruction system. It provides multi-modal interaction methods such as speech recognition, hand gestures, direct manipulation of virtual 3D objects and measurement tools. The main challenge is that no general interface scenario in existence today can satisfy all the users of the system (radiologists, vascular surgeons, medical students, etc.). The potential users of the system can vary by their skills, expertise level, habits and psycho-motional characteristics. To make a multimodal interface user-friendly is a crucial issue. In this paper we introduce an approach to develop such an efficient, user-friendly multi-modal interaction system. We focus on adaptive interaction as a possible solution to address the variety of end-users. Based on a user model, the adaptive user interface identifies each individual by means of a set of criteria and generates a customized exploration environment.","PeriodicalId":208377,"journal":{"name":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","volume":"135 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Fourth IEEE International Conference on Multimodal Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMI.2002.1167013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 28
Abstract
This paper is devoted to the design and implementation of a multi-modal interface for a simulated vascular reconstruction system. The system provides multi-modal interaction methods such as speech recognition, hand gestures, direct manipulation of virtual 3D objects, and measurement tools. The main challenge is that no general interface scenario in existence today can satisfy all users of the system (radiologists, vascular surgeons, medical students, etc.). Potential users of the system vary in their skills, expertise level, habits, and psycho-motional characteristics. Making a multi-modal interface user-friendly is therefore a crucial issue. In this paper, we introduce an approach to developing such an efficient, user-friendly multi-modal interaction system. We focus on adaptive interaction as a possible solution to addressing the variety of end-users. Based on a user model, the adaptive user interface identifies each individual by means of a set of criteria and generates a customized exploration environment.
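
To make the idea of user-model-driven adaptation concrete, the sketch below shows one way a set of criteria could be mapped to a customized exploration environment. It is a minimal, hypothetical illustration only: the criteria (role, expertise level, speech preference), names, and thresholds are assumptions for exposition and are not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical user model: the criteria below (role, expertise, preferred
# modality) are illustrative assumptions, not the paper's actual model.
@dataclass
class UserModel:
    role: str             # e.g. "radiologist", "vascular surgeon", "student"
    expertise: int        # 1 (novice) .. 5 (expert)
    prefers_speech: bool  # whether the user wants speech commands enabled

def build_exploration_environment(user: UserModel) -> dict:
    """Derive a customized interface configuration from the user model."""
    return {
        "speech_commands": user.prefers_speech,
        "hand_gestures": True,
        # Clinical roles get measurement tools by default in this sketch.
        "measurement_tools": user.role in ("radiologist", "vascular surgeon"),
        # Novices get guided menus; experts rely on direct 3D manipulation.
        "guided_menus": user.expertise < 3,
    }

if __name__ == "__main__":
    student = UserModel(role="student", expertise=1, prefers_speech=True)
    print(build_exploration_environment(student))
```

A rule-based mapping like this is only one possible realization; the paper's adaptive interface could equally be driven by learned or statistical user models.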