{"title":"AmbientLetter: Letter Presentation Method for Discreet Notification of Unknown Spelling when Handwriting","authors":"Xaver Tomihiro Toyozaki, Keita Watanabe","doi":"10.1145/3266037.3266093","DOIUrl":"https://doi.org/10.1145/3266037.3266093","url":null,"abstract":"We propose a technique to support writing activity in a confidential manner with a pen-based device. Autocorrect and predictive conversion do not work when writing by hand, and looking up unknown spelling is sometimes embarrassing. Therefore, we propose AmbientLetter which seamlessly and discretely presents the forgotten spelling to the user in scenarios where handwriting is necessary. In this work, we describe the system structure and the technique used to conceal the user\"s getting the information.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116521380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Head Pose Classification by using Body-Conducted Sound","authors":"Ryo Kamoshida, K. Takemura","doi":"10.1145/3266037.3266094","DOIUrl":"https://doi.org/10.1145/3266037.3266094","url":null,"abstract":"Vibrations generated by human activity have been used for recognizing human behavior and developing user interfaces; however, it is difficult to estimate static poses that do not generate a vibration. This can be solved using active acoustic sensing; however, this method is not suitable for emitting some vibrations around the head in terms of the influence of audition. Therefore, we propose a method for estimating head poses using body-conducted sound naturally and regularly generated in the human body. The support vector classification recognizes vertical and horizontal directions of the head, and we confirmed the feasibility of the proposed method through experiments.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"32 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114461303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual Switch for Gaze Selection","authors":"Jooyeon Lee, Jong-Seok Lee","doi":"10.1145/3266037.3266107","DOIUrl":"https://doi.org/10.1145/3266037.3266107","url":null,"abstract":"One of the main drawbacks of the fixation-based gaze interfaces is that they are unable to distinguish top-down attention (or selection, a gaze with a purpose) from stimulus driven bottom-up attention (or navigation, a stare without any intentions) without time durations or unnatural eye movements. We found that using the bistable image called the Necker's cube as a button user interface (UI) helps to remedy the limitation. When users switch two rivaling percepts of the Necker's cube at will, unique eye movements are triggered and these characteristics can be used to indicate a button press or a selecting action. In this paper, we introduce (1) the cognitive phenomenon called \"percept switch\" for gaze interaction, and (2) propose \"perceptual switch\" or the Necker's cube user interface (UI) which uses \"percept switch\" as the indication of a selection. Our preliminary experiment confirms that perceptual switch can be used to distinguish voluntary gaze selection from random navigation, and discusses that the visual elements of the Necker's cube such as size and biased visual cues could be adjusted for the optimal use of individual users.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128389168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trans-scale Playground: An Immersive Visual Telexistence System for Human Adaptation","authors":"Satoshi Hashizume, Akira Ishii, Kenta Suzuki, Kazuki Takazawa, Yoichi Ochiai","doi":"10.1145/3266037.3266103","DOIUrl":"https://doi.org/10.1145/3266037.3266103","url":null,"abstract":"In this paper, we present a novel telexistence system and design methods for telexistence studies to explore spatialscale deconstruction. There have been studies on the experience of dwarf-sized or giant-sized telepresence have been conducted over a period of many years. In this study, we discuss the scale of movements, image transformation, technical components of telepresence robots, and user experiences of telexistence-based spatial transformations. We implemented two types of telepresence robots with an omnidirectional stereo camera setup for a spatial trans-scale experience, wheeled robots, and quadcopters. These telepresence robots provide users with a trans-scale experience for a distance ranging from 15 cm to 30 m. We conducted user studies for different camera positions on robots and for different image transformation method.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114070464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Demonstration of VRSpinning: Exploring the Design Space of a 1D Rotation Platform to Increase the Perception of Self-Motion in VR","authors":"Thomas Dreja, Michael Rietzler, Teresa Hirzle, Jan Gugenheimer, Julian Frommel, E. Rukzio","doi":"10.1145/3266037.3271645","DOIUrl":"https://doi.org/10.1145/3266037.3271645","url":null,"abstract":"In this demonstration we introduce VRSpinning, a seated locomotion approach based around stimulating the user's vestibular system using a rotational impulse to induce the perception of linear self-motion. Currently, most approaches for locomotion in VR use either concepts like teleportation for traveling longer distances or present a virtual motion that creates a visual-vestibular conflict, which is assumed to cause simulator sickness. With our platform we evaluated two designs for using the rotation of a motorized swivel chair to alleviate this, wiggle and impulse. Our evaluation showed that impulse, using short rotation bursts matched with the visual acceleration, can significantly reduce simulator sickness and increase the perception of self-motion compared to no physical motion.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132301020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing Interactive Behaviours Beyond the Desktop","authors":"David Ledo","doi":"10.1145/3266037.3266132","DOIUrl":"https://doi.org/10.1145/3266037.3266132","url":null,"abstract":"As interactions move beyond the desktop, interactive behaviours (effects of actions as they happen, or once they happen) are becoming increasingly complex. This complexity is due to the variety of forms that objects might take, and the different inputs and sensors capturing information, and the ability to create nuanced responses to those inputs. Current interaction design tools do not support much of this rich behaviour authoring. In my work I create prototyping tools that examine ways in which designers can create interactive behaviours. Thus far, I have created two prototyping tools: Pineal and Astral, which examine how to create physical forms based on a smart object's behaviour, and how to reuse existing desktop infrastructures to author different kinds of interactive behaviour. I also contribute conceptual elements, such as how to create smart objects using mobile devices, their sensors and outputs, instead of using custom electronic circuits, as well as devising evaluation strategies used in HCI toolkit research which directly informs my approach to evaluating my tools.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124051954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DynamicSlide: Reference-based Interaction Techniques for Slide-based Lecture Videos","authors":"Hyeungshik Jung, Hijung Valentina Shin, Juho Kim","doi":"10.1145/3266037.3266089","DOIUrl":"https://doi.org/10.1145/3266037.3266089","url":null,"abstract":"Presentation slides play an important role in online lecture videos. Slides convey the main points of the lecture visually, while the instructor's narration adds detailed verbal explanations to each item in the slide. We call the link between a slide item and the corresponding part of the narration a reference. In order to assess the feasibility of reference-based interaction techniques for watching videos, we introduce DynamicSlide, a video processing system that automatically extracts references from slide-based lecture videos and a video player. The system incorporates a set of reference-based techniques: emphasizing the current item in the slide that is being explained, enabling item-based navigation, and enabling item-based note-taking. Our pipeline correctly finds 79% of the references in a set of five videos with 141 references. Results from a user study suggest that DynamicSlide's features improve the learner's video browsing and navigation experience.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124089818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowd-AI Systems for Non-Visual Information Access in the Real World","authors":"Anhong Guo","doi":"10.1145/3266037.3266133","DOIUrl":"https://doi.org/10.1145/3266037.3266133","url":null,"abstract":"The world is full of information, interfaces and environments that are inaccessible to blind people. When navigating indoors, blind people are often unaware of key visual information, such as posters, signs, and exit doors. When accessing specific interfaces, blind people cannot independently do so without at least first learning their layout and labeling them with sighted assistance. My work investigates interactive systems that integrates computer vision, on-demand crowdsourcing, and wearables to amplify the abilities of blind people, offering solutions for real-time environment and interface navigation. My work provides more options for blind people to access information and increases their freedom in navigating the world.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130253586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aalto Interface Metrics (AIM): A Service and Codebase for Computational GUI Evaluation","authors":"Antti Oulasvirta, Samuli De Pascale, Janin Koch, T. Langerak, Jussi P. P. Jokinen, Kashyap Todi, Markku Laine, Manoj Kristhombuge, Yuxi Zhu, Aliaksei Miniukovich, G. Palmas, T. Weinkauf","doi":"10.1145/3266037.3266087","DOIUrl":"https://doi.org/10.1145/3266037.3266087","url":null,"abstract":"Aalto Interface Metrics (AIM) pools several empirically validated models and metrics of user perception and attention into an easy-to-use online service for the evaluation of graphical user interface (GUI) designs. Users input a GUI design via URL, and select from a list of 17 different metrics covering aspects ranging from visual clutter to visual learnability. AIM presents detailed breakdowns, visualizations, and statistical comparisons, enabling designers and practitioners to detect shortcomings and possible improvements. The web service and code repository are available at interfacemetrics.aalto.fi.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131683089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing Inherent Interactions on Wearable Devices","authors":"Teng Han","doi":"10.1145/3266037.3266130","DOIUrl":"https://doi.org/10.1145/3266037.3266130","url":null,"abstract":"Wearable devices are becoming important computing devices to personal users. They have shown promising applications in multiple domains. However, designing interactions on smartwears remains challenging as the miniature sized formfactors limit both its input and output space. My thesis research proposes a new paradigm of Inherent Interaction on smartwears, with the idea of seeking interaction opportunities from users daily activities. This is to help bridging the gap between novel smartwear interactions and real-life experiences shared among users. This report introduces the concept of Inherent Interaction with my previous and current explorations in the category.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122557954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}