IUI Companion · Pub Date: 2020-03-17 · DOI: 10.1145/3379336.3381506
L. Murthy
{"title":"Multimodal Interaction for Real and Virtual Environments","authors":"L. Murthy","doi":"10.1145/3379336.3381506","DOIUrl":"https://doi.org/10.1145/3379336.3381506","url":null,"abstract":"Multimodal interfaces can leverage information from multiple modalities to provide robust and error-free interaction. Early multimodal interfaces demonstrated the feasibility of building such systems but focused on specific applications. A key challenge in building adaptive systems is the lack of techniques for input data fusion. In this direction, we have developed a multimodal head and eye gaze interface and evaluated it in two scenarios. In the aviation scenario, our interface significantly reduced task time and perceived cognitive load compared with the existing interface. We have also studied the effect of various output conditions on users' performance in a Virtual Reality (VR) task. Further, we are extending our proposed interface to include additional modalities and building novel haptic and multimodal output systems for VR.","PeriodicalId":350807,"journal":{"name":"IUI Companion","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128611407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IUI Companion · Pub Date: 2017-03-07 · DOI: 10.1145/3030024.3038285
Myat Su Yin
{"title":"Automated Formative Feedback in a Virtual Reality (VR) Dental Surgery Simulator","authors":"Myat Su Yin","doi":"10.1145/3030024.3038285","DOIUrl":"https://doi.org/10.1145/3030024.3038285","url":null,"abstract":"Fine motor skill is indispensable for a dentist. As in many other medical fields of study, the traditional surgical master-apprentice model is widely adopted in dental education. Recently, virtual reality (VR) simulators have been employed as supplementary components of the traditional skill-training curriculum, and numerous dental VR systems have been developed academically and commercially. However, the full promise of such systems has yet to be realized due to the lack of sufficient support for formative feedback. Without such a mechanism, evaluation still demands the dedicated time of experts in scarce supply. To fill the gap in formative assessment using VR simulators for skill training in dentistry, we propose a framework to objectively assess surgical skill and generate feedback automatically. The core concept of the framework is to generate feedback by correlating each error in the outcome with the portion of the procedure responsible for it. Assessment of the outcome and the procedure, pedagogical models, and multiple modalities for providing feedback are the integral components of this research.","PeriodicalId":350807,"journal":{"name":"IUI Companion","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114670884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
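The framework's core idea, tracing an outcome error back to the portion of the procedure responsible for it, can be sketched as a segmentation problem: given a recorded tool trajectory and a predicate marking the outcome-error region, find the time spans that feedback should point at. The representation below is a hypothetical simplification, not the paper's actual data model:

```python
def segments_causing_error(trajectory, in_error_region):
    """Return contiguous (t_start, t_end) spans of a recorded trajectory
    whose samples fall inside an outcome-error region.

    trajectory: iterable of (t, x, y, z) tool-tip samples in time order.
    in_error_region: predicate on (x, y, z), e.g. "inside over-cut volume".
    """
    segments = []
    start = prev_t = None
    for t, x, y, z in trajectory:
        if in_error_region(x, y, z):
            if start is None:       # entering the error region: open a span
                start = t
            prev_t = t
        elif start is not None:     # leaving the region: close the span
            segments.append((start, prev_t))
            start = None
    if start is not None:           # trajectory ended inside the region
        segments.append((start, prev_t))
    return segments
```

Each returned span identifies a stretch of the procedure that formative feedback (e.g. a replay highlight) could attach to.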
IUI Companion · Pub Date: 2017-03-07 · DOI: 10.1145/3030024.3038291
Cecilia di Sciascio
{"title":"Advanced User Interfaces and Hybrid Recommendations for Exploratory Search","authors":"Cecilia di Sciascio","doi":"10.1145/3030024.3038291","DOIUrl":"https://doi.org/10.1145/3030024.3038291","url":null,"abstract":"Exploring large volumes of data for learning or investigative purposes is often regarded as exploratory search. Rather than plain question answering, exploratory search is an iterative process of information seeking and sensemaking. My focus is the development of an interactive intelligent tool that assists the search task, and the study of user behavior and experience. More specifically, I combine recommender systems with advanced user interfaces to maximize what Hearst calls \"recognition over recall\".","PeriodicalId":350807,"journal":{"name":"IUI Companion","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125076487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
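A hybrid recommender of the kind described is often realized as a score-level blend of a content-based signal and a feedback-driven signal surfaced in the interface. The sketch below is an assumption about one simple way to combine them, not the thesis's actual method:

```python
def hybrid_score(content_scores, feedback_scores, alpha=0.5):
    """Linear blend of two per-document relevance signals.

    content_scores: {doc_id: score} from e.g. keyword/content matching.
    feedback_scores: {doc_id: score} from e.g. user relevance feedback.
    alpha: weight of the content signal; 1 - alpha weights the feedback.
    """
    docs = set(content_scores) | set(feedback_scores)
    return {d: alpha * content_scores.get(d, 0.0)
               + (1.0 - alpha) * feedback_scores.get(d, 0.0)
            for d in docs}
```

Ranking by the blended score lets the interface surface candidates the user can recognize rather than having to recall and re-query for them.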
IUI Companion · Pub Date: 2014-02-24 · DOI: 10.1145/2559184.2559197
Satoshi Sagara, Masakazu Higuchi, T. Komuro
{"title":"Multi-finger AR typing interface for mobile devices","authors":"Satoshi Sagara, Masakazu Higuchi, T. Komuro","doi":"10.1145/2559184.2559197","DOIUrl":"https://doi.org/10.1145/2559184.2559197","url":null,"abstract":"In this paper, we propose a user interface that enables multi-finger typing in the space behind a mobile device. Using augmented reality (AR) technology, a virtual keyboard is superimposed on the rear camera image, and the hand region of the camera image is superimposed on top of that, which makes it possible to perform input operations as if a real keyboard were present. The system recognizes only key-pressing actions and does not track the hand or fingers, which enables stable recognition and multi-finger input. Furthermore, typing is possible anywhere on a plane or in the air. A demonstration using an experimental device showed that multi-finger input using a virtual keyboard displayed on the screen was realized.","PeriodicalId":350807,"journal":{"name":"IUI Companion","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124237410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
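The press-only recognition scheme can be sketched as a depth test, a key fires when a fingertip crosses the virtual key plane, plus a lookup from image coordinates to a key. Everything below (layout, key size, coordinate convention) is an illustrative assumption rather than the paper's implementation:

```python
KEY_W, KEY_H = 40, 40  # virtual key size in pixels (illustrative)
LAYOUT = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(u, v):
    """Map an image-plane coordinate to a key of a hypothetical QWERTY
    layout anchored at the image origin; None if outside the keyboard."""
    row, col = v // KEY_H, u // KEY_W
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[row]):
        return LAYOUT[row][col]
    return None

def detect_press(prev_z, curr_z, plane_z, u, v):
    """Fire a key event when the fingertip depth crosses the virtual key
    plane from the near side; no hand or finger model is required."""
    if prev_z < plane_z <= curr_z:
        return key_at(u, v)
    return None
```

Because only the crossing event matters, several fingertips can trigger keys independently in the same frame, which is what enables stable multi-finger input.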
IUI Companion · Pub Date: 1900-01-01 · DOI: 10.1145/3581754.3584112
Lukas Günthermann
{"title":"Application for Doctoral Consortium IUI 2023","authors":"Lukas Günthermann","doi":"10.1145/3581754.3584112","DOIUrl":"https://doi.org/10.1145/3581754.3584112","url":null,"abstract":"1 COVER LETTER Dear Sir or Madam, My name is Lukas; I am a PhD student in the Wearable Technologies Lab at the University of Sussex under the supervision of Prof. Daniel Roggen. My second supervisor is Ivor Simpson, and the lab is currently overseen by Phil Birch. My work concerns annotation assistance for sensor recordings from wearable devices, enabling easier creation of annotated datasets for human activity recognition. In particular, I am investigating a tool for annotating time-series data, which utilises machine learning to support the user in their work. Since augmenting the user experience with artificial intelligence techniques matches the scope of IUI very well, I am keen to present my research at the conference and receive valuable feedback. Since I only started developing the annotation tool this summer, there is still a lot of room for change.","PeriodicalId":350807,"journal":{"name":"IUI Companion","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129762747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}