Title: LUI: A multimodal, intelligent interface for large displays
Authors: V. Parthiban, Ashley Jieun Lee
DOI: 10.1145/3359997.3365743 (https://doi.org/10.1145/3359997.3365743)
Published: 2019-11-14, Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry
Citations: 2
Abstract
On large screen displays, conventional keyboard and mouse input is difficult to use because small mouse movements do not scale well with the size of the display or of individual on-screen elements. We propose LUI, or Large User Interface, which increases the range of dynamic surface area of interactions possible on such a display. Our model leverages real-time continuous feedback from free-handed gestures and voice to control extensible applications such as photos, videos, and 3D models. Using a single stereo camera and a voice assistant, LUI does not require exhaustive calibration or a multitude of sensors to operate, and it can be easily installed and deployed on any large screen surface. In a user study, participants found LUI efficient and easy to learn with minimal instruction, and preferred it to more conventional interfaces. This multimodal interface can also be deployed in augmented or virtual reality spaces and on autonomous vehicle displays.