{"title":"Masque","authors":"Chi Wang, Da-Yuan Huang, Shuo-wen Hsu, Chu-En Hou, Yeu-Luen Chiu, Ruei-Che Chang, Jo-Yu Lo, Bing-Yu Chen","doi":"10.1145/3332165.3347898","DOIUrl":"https://doi.org/10.1145/3332165.3347898","url":null,"abstract":"We propose integrating an array of skin stretch modules with a head-mounted display (HMD) to provide two-dimensional skin stretch feedback on the user's face. Skin stretch has been found effective in inducing the perception of force (e.g. weight or inertia) and in enabling directional haptic cues. However, its potential as an HMD output for virtual reality (VR) remains to be exploited. Our exploratory study first investigated the design of shear tactors. Based on our results, we implemented Masque, an HMD prototype actuating six shear tactors positioned on the HMD's face interface. A comfort study was conducted to ensure that the skin stretches generated by Masque are acceptable to all participants. Two subsequent perception studies examined the minimum changes in skin stretch distance and stretch angle that participants can detect. The results informed the design of our haptic profiles and prototype applications. Finally, a user evaluation indicates that participants welcomed Masque and regarded skin stretch feedback as a worthwhile addition to HMD output.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122351331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpringFit","authors":"T. Roumen, Jotaro Shigeyama, J. Rudolph, Felix Grzelka, Patrick Baudisch","doi":"10.1145/3332165.3347930","DOIUrl":"https://doi.org/10.1145/3332165.3347930","url":null,"abstract":"Joints are crucial to laser cutting as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such \"press fit\" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material (\"kerf\"), as well as to changes in stiffness that occur when switching to a different material. We present a software tool called SpringFit that resolves this problem by replacing the problematic press-fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used SpringFit to convert 14 models downloaded from the web.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122923383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 9A: Walking, Jumping, Roaming","authors":"Sean Follmer","doi":"10.1145/3368385","DOIUrl":"https://doi.org/10.1145/3368385","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"10 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132508504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skin-On Interfaces: A Bio-Driven Approach for Artificial Skin Design to Cover Interactive Devices","authors":"M. Teyssier, G. Bailly, C. Pelachaud, É. Lecolinet, A. Conn, A. Roudaut","doi":"10.1145/3332165.3347943","DOIUrl":"https://doi.org/10.1145/3332165.3347943","url":null,"abstract":"We propose a paradigm called Skin-On interfaces, in which interactive devices have their own (artificial) skin, thus enabling new forms of input gestures for end-users (e.g. twist, scratch). Our work explores the design space of Skin-On interfaces by following a bio-driven approach: (1) From a sensory point of view, we study how to reproduce the look and feel of the human skin through three user studies; (2) From a gestural point of view, we explore how gestures naturally performed on skin can be transposed to Skin-On interfaces; (3) From a technical point of view, we explore and discuss different ways of fabricating interfaces that mimic human skin sensitivity and can recognize the gestures observed in the previous study; (4) We assemble the insights of our three exploratory facets to implement a series of Skin-On interfaces, and we also contribute a toolkit that enables easy reproduction and fabrication.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128359788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proxino: Enabling Prototyping of Virtual Circuits with Physical Proxies","authors":"Te-Yen Wu, Jun Gong, T. Seyed, Xing-Dong Yang","doi":"10.1145/3332165.3347938","DOIUrl":"https://doi.org/10.1145/3332165.3347938","url":null,"abstract":"We propose blending the virtual and physical worlds for prototyping circuits using physical proxies. With physical proxies, real-world components (e.g. a motor, or light sensor) can be used with a virtual counterpart for a circuit designed in software. We demonstrate this concept in Proxino, and elucidate the new scenarios it enables for makers, such as remote collaboration with physically distributed electronics components. In a system evaluation, we compared our hybrid system and its output with fully physical circuit designs and observed minimal differences. We then present the results of an informal study with 9 users, where we gathered feedback on the effectiveness of our system in different working conditions (with a desktop, using a mobile device, and with a remote collaborator). We conclude by sharing the lessons learned from our system and discuss directions for future research that blends physical and virtual prototyping for electronic circuits.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"26 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125680492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PseudoBend: Producing Haptic Illusions of Stretching, Bending, and Twisting Using Grain Vibrations","authors":"Seongkook Heo, Jaeyeon Lee, Daniel J. Wigdor","doi":"10.1145/3332165.3347941","DOIUrl":"https://doi.org/10.1145/3332165.3347941","url":null,"abstract":"We present PseudoBend, a haptic feedback technique that creates the illusion that a rigid device is being stretched, bent, or twisted. The method uses a single 6-DOF force sensor and a vibrotactile actuator to render grain vibrations to simulate the vibrations produced during object deformation based on the changes in force or torque exerted on a device. Because this method does not require any moving parts aside from the vibrotactile actuator, devices designed using this method can be small and lightweight. Psychophysical studies conducted using a prototype that implements this method confirmed that the method could be used to successfully create the illusion of deformation and could also change users' perception of stiffness by changing the virtual stiffness parameters.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128337199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 4A: VR Headsets","authors":"S. DiVerdi","doi":"10.1145/3368375","DOIUrl":"https://doi.org/10.1145/3368375","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129238871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pull-Ups: Enhancing Suspension Activities in Virtual Reality with Body-Scale Kinesthetic Force Feedback","authors":"Yuan-Syun Ye, Hsin-Yu Chen, Liwei Chan","doi":"10.1145/3332165.3347874","DOIUrl":"https://doi.org/10.1145/3332165.3347874","url":null,"abstract":"We present Pull-Ups, a suspension kit that can suggest a range of body postures and thus enables various exercise styles, with users perceiving kinesthetic force feedback by suspending their weight with arm exertion during the interaction. Pull-Ups actuates the user's body, moving it up to 15 cm by pulling his or her hands using a pair of pneumatic artificial muscle groups. Our studies identified the discernible levels of kinesthetic force feedback, which were then exploited to design the feedback in three physical activities: kitesurfing, paragliding, and space invader. Our final study on user experiences suggested that a passive suspension kit alone added substantially to users' perceptions of realism and enjoyment (all above neutral) through passive physical support, while sufficient active feedback can enhance them further. In addition, we found that both passive and active feedback of the suspension kit significantly reduced motion sickness in simulated kitesurfing and paragliding compared to when no suspension kit (and thus no feedback) was provided. This work suggests that a passive suspension kit is cost-effective as a home exercise kit, while active feedback can further enhance the user experience, though at the cost of additional installation (e.g., an air compressor in our prototype).","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121692151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ShapeBots: Shape-changing Swarm Robots","authors":"R. Suzuki, C. Zheng, Y. Kakehi, Tom Yeh, E. Do, M. Gross, Daniel Leithinger","doi":"10.1145/3332165.3347911","DOIUrl":"https://doi.org/10.1145/3332165.3347911","url":null,"abstract":"We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions. The modular design of each actuator enables various shapes and geometries of self-transformation. We illustrate potential application scenarios and discuss how this type of interface opens up possibilities for the future of ubiquitous and distributed shape-changing interfaces.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127446214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations","authors":"Toby Jia-Jun Li, Marissa Radensky, Justin Jia, Kirielle Singarajah, Tom Michael Mitchell, B. Myers","doi":"10.1145/3332165.3347899","DOIUrl":"https://doi.org/10.1145/3332165.3347899","url":null,"abstract":"Natural language programming is a promising approach to enable end users to instruct new tasks for intelligent agents. However, our formative study found that end users often use unclear, ambiguous, or vague concepts when instructing tasks in natural language, especially when specifying conditionals. Existing systems have limited support for letting the user teach agents new concepts or explain unclear concepts. In this paper, we describe a new multi-modal, domain-independent approach that combines natural language programming and programming-by-demonstration to allow users to first naturally describe tasks and associated conditions at a high level, and then collaborate with the agent to recursively resolve any ambiguities or vagueness through conversations and demonstrations. Users can also define new procedures and concepts by demonstrating and referring to contents within GUIs of existing mobile apps. We demonstrate this approach in PUMICE, an end-user programmable agent. A lab study with 10 users demonstrated its usability.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133544266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}