{"title":"A Stretch-Flexible Textile Multitouch Sensor for User Input on Inflatable Membrane Structures & Non-Planar Surfaces","authors":"Kristian Gohlke, E. Hornecker","doi":"10.1145/3266037.3271647","DOIUrl":"https://doi.org/10.1145/3266037.3271647","url":null,"abstract":"We present a textile sensor, capable of detecting multi-touch and multi-pressure input on non-planar surfaces and demonstrate how such sensors can be fabricated and integrated into pressure stabilized membrane envelopes (i.e. inflatables). Our sensor design is both stretchable and flexible/bendable and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach for basic signal acquisition from such sensors and how they can be leveraged to measure internal air-pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114094738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigation into Natural Gestures Using EMG for \"SuperNatural\" Interaction in VR","authors":"Chloe Eghtebas, Sandro Weber, G. Klinker","doi":"10.1145/3266037.3266115","DOIUrl":"https://doi.org/10.1145/3266037.3266115","url":null,"abstract":"Can natural interaction requirements be fulfilled while still harnessing the \"supernatural\" fantasy of Virtual Reality (VR)? In this work we used off the shelf Electromyogram (EMG) sensors as an input device which can afford natural gestures to preform the \"supernatural\" task of growing your arm in VR. We recorded 18 participants preforming a simple retrieval task in two phases; an initial and a learning phase where the stretch arm was disabled and enabled respectively. The results show that the gestures used in the initial phase are different than the main gestures used to retrieve an object in our system and that the times taken to complete the learning phase are highly variable across participants.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115968162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable Haptic Device that Presents the Haptics Sensation Corresponding to Three Fingers on the Forearm","authors":"Taha K. Moriyama, Takuto Nakamura, Hiyoyuki Kajimoto","doi":"10.1145/3266037.3271633","DOIUrl":"https://doi.org/10.1145/3266037.3271633","url":null,"abstract":"In this demonstration, as an attempt of a new haptic presentation method for objects in virtual reality (VR) environment, we show a device that presents the haptic sensation of the fingertip on the forearm, not on the fingertip. This device adopts a five-bar linkage mechanism and it is possible to present the strength, direction of force. Compared with a fingertip mounted type displays, it is possible to address the issues of their weight and size which hinder the free movement of fingers. We have confirmed that the experiences in the VR environment is improved compared with without haptics cues situation regardless of without presenting haptics information directly to the fingertip.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"9 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124943693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction","authors":"Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio","doi":"10.1145/3266037.3266119","DOIUrl":"https://doi.org/10.1145/3266037.3266119","url":null,"abstract":"Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs) given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extend we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of \"gazed-at\" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121349969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Watch User Interfaces: Collaboration Between Electro-Mechanical Components and Analog Materials","authors":"A. Olwal","doi":"10.1145/3266037.3271650","DOIUrl":"https://doi.org/10.1145/3266037.3271650","url":null,"abstract":"We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"152 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114048673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
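As a rough illustration of the positioning logic that micro-stepper control of a watch hand implies, here is a hedged Python sketch converting a target hand angle into a relative step count, rotating clockwise only as a mechanical hand would; the step resolution and the driver call are hypothetical, not the paper's hardware API:

# Hedged sketch: position a watch hand driven by a micro-stepper motor.
STEPS_PER_REV = 360  # assumed steps per full revolution of the hand

class HandController:
    def __init__(self):
        self.position = 0  # current position in steps from 12 o'clock

    def move_to_angle(self, degrees: float) -> int:
        """Rotate the hand clockwise to an absolute angle; returns steps sent."""
        target = round(degrees % 360 / 360 * STEPS_PER_REV)
        delta = (target - self.position) % STEPS_PER_REV  # clockwise-only motion
        self.send_steps(delta)
        self.position = target
        return delta

    def send_steps(self, n: int) -> None:
        # Placeholder for the motor driver; real hardware would pulse here.
        print(f"stepping {n} steps")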
{"title":"reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper","authors":"Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, H. Ishii","doi":"10.1145/3266037.3266109","DOIUrl":"https://doi.org/10.1145/3266037.3266109","url":null,"abstract":"We present a tangible memory notebook--reMi--that records the ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate synchronized motions with the sounds. Computer-mediated communication interfaces have allowed us to share, record and recall memories easily through visual records. However, those digital visual-cues that are trapped behind the device's 2D screen are not the only means to recall a memory we experienced with more than the sense of vision. To develop a new way to store, recall and share a memory, we investigate how tangible motion of a paper that represents sound can enhance the \"reminiscence\".","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131768946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Post-literate Programming: Linking Discussion and Code in Software Development Teams","authors":"Soya Park, Amy X. Zhang, David R Karger","doi":"10.1145/3266037.3266098","DOIUrl":"https://doi.org/10.1145/3266037.3266098","url":null,"abstract":"The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134261814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pop-up Robotics: Facilitating HRI in Public Spaces","authors":"Swapna Joshi, S. Šabanović","doi":"10.1145/3266037.3266125","DOIUrl":"https://doi.org/10.1145/3266037.3266125","url":null,"abstract":"Human-Robot Interaction (HRI) research in public spaces often encounters delays and restrictions due to several factors, including the need for sophisticated technology, regulatory approvals, and public or community support. To remedy these concerns, we suggest HRI can apply the core philosophy of Tactical Urbanism, a concept from urban planning, to catalyze HRI in public spaces, provide community feedback and information on the feasibility of future implementations of robots in the public, and also create social impact and forge connections with the community while spreading awareness about robots as a public resource. As a case study, we share tactics used and strategies followed to conduct a pop-up style study of 'A robotic mailbox to support and raise awareness about homelessness.' We discuss benefits and challenges of the pop-up approach and recommend using it to enable the social studies of HRI not only to match but to precede, the fast-paced technological advancement and deployment of robots.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114293828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation","authors":"Yoonjeong Cha, Sungu Nam, M. Yi, Jaeseung Jeong, Woontack Woo","doi":"10.1145/3266037.3266086","DOIUrl":"https://doi.org/10.1145/3266037.3266086","url":null,"abstract":"Augmented collaboration in a shared house design scenario has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load for augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in our system in two different versions with HoloLens. To investigate whether user perceptions of the two different versions differ, we conducted an experiment with 18 participants (9 pairs) and conducted a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123842323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engagement Learning: Expanding Visual Knowledge by Engaging Online Participants","authors":"Ranjay Krishna, Donsuk Lee, Li Fei-Fei, Michael S. Bernstein","doi":"10.1145/3266037.3266110","DOIUrl":"https://doi.org/10.1145/3266037.3266110","url":null,"abstract":"Most artificial intelligence (AI) systems to date have focused entirely on performance, and rarely if at all on their social interactions with people and how to balance the AIs' goals against their human collaborators'. Learning quickly from interactions with people poses both social challenges and is unresolved technically. In this paper, we introduce engagement learning: a training approach that learns to trade off what the AI needs---the knowledge value of a label to the AI---against what people are interested to engage with---the engagement value of the label. We realize our goal with ELIA (Engagement Learning Interaction Agent), a conversational AI agent who's goal is to learn new facts about the visual world by asking engaging questions of people about the photos they upload to social media. Our current deployment of ELIA on Instagram receives a response rate of 26%.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126447734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}