{"title":"Ohmic-Sticker: Force-to-Motion Type Input Device that Extends Capacitive Touch Surface","authors":"Kaori Ikematsu, M. Fukumoto, I. Siio","doi":"10.1145/3332165.3347903","DOIUrl":"https://doi.org/10.1145/3332165.3347903","url":null,"abstract":"We propose \"Ohmic-Sticker'', a novel force-to-motion type input device to extend capacitive touch surfaces. It realizes various types of force-sensitive inputs by simply attaching on to commercial touchpads or touchscreens. A simple force-sensitive-resistor (FSR)-based structure enables thin (less than 2 mm) form factors and battery-less operation. The applied force vector is detected as the leakage current from the corresponding touch surface electrodes by using Ohmic-Touch technology. Ohmic-Sticker can be used for adding force-sensitive interactions to touch surfaces, such as analog push buttons, TrackPoint-like devices, and full 6 DoF controllers for navigating virtual spaces. In this paper, we report a series of investigations on the design requirements of Ohmic-Sticker and some prototypes.We also evaluate the performance of Ohmic-Sticker as a pointing device.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122026951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 6A: Fabrication","authors":"M. Annett","doi":"10.1145/3368379","DOIUrl":"https://doi.org/10.1145/3368379","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128353495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is this Real?: Generating Synthetic Data that Looks Real","authors":"M. Mannino, A. Abouzeid","doi":"10.1145/3332165.3347866","DOIUrl":"https://doi.org/10.1145/3332165.3347866","url":null,"abstract":"Synner is a tool that helps users generate real-looking synthetic data by visually and declaratively specifying the properties of the dataset such as each field's statistical distribution, its domain, and its relationship to other fields. It provides instant feedback on every user interaction by updating multiple visualizations of the generated dataset and even suggests data generation specifications from a few user examples and interactions. Synner visually communicates the inherent randomness of statistical data generation. Our evaluation of Synner demonstrates its effectiveness at generating realistic data when compared with Mockaroo, a popular data generation tool, and with hired developers who coded data generation scripts for a fee.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"3 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126050653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond the Input Stream: Making Text Entry Evaluations More Flexible with Transcription Sequences","authors":"M. Zhang, J. Wobbrock","doi":"10.1145/3332165.3347922","DOIUrl":"https://doi.org/10.1145/3332165.3347922","url":null,"abstract":"Method-independent text entry evaluation tools are often used to conduct text entry experiments and compute performance metrics, like words per minute and error rates. The input stream paradigm of Soukoreff & MacKenzie (2001, 2003) still remains prevalent, which presents a string for transcription and uses a strictly serial character representation for encoding the text entry process. Although an advance over prior paradigms, the input stream paradigm is unable to support many modern text entry features. To address these limitations, we present transcription sequences: for each new input, a snapshot of the entire transcribed string unto that point is captured. By comparing adjacent strings within a transcription sequence, we can compute all prior metrics, reduce artificial constraints on text entry evaluations, and introduce new metrics. We conducted a study with 18 participants who typed 1620 phrases using a laptop keyboard, on-screen keyboard, and smartphone keyboard using features such as auto-correction, word prediction, and copy/paste. We also evaluated non-keyboard methods Dasher, gesture typing, and T9. Our results show that modern text entry methods and features can be accommodated, prior metrics can be correctly computed, and new metrics can reveal insights. We validated our algorithms using ground truth based on cursor positioning, confirming 100% accuracy. We also provide a new tool, TextTest++, to facilitate web-based evaluations.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126070129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HairBrush for Immersive Data-Driven Hair Modeling","authors":"Jun Xing, Koki Nagano, Weikai Chen, Haotian Xu, Li-Yi Wei, Yajie Zhao, Jingwan Lu, Byungmoon Kim, Hao Li","doi":"10.1145/3332165.3347876","DOIUrl":"https://doi.org/10.1145/3332165.3347876","url":null,"abstract":"While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and efforts, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring and the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, they are inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained from a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improve the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116688501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SWISH: A Shifting-Weight Interface of Simulated Hydrodynamics for Haptic Perception of Virtual Fluid Vessels","authors":"Shahabedin Sagheb, F. Liu, A. Bahremand, Assegid Kidané, R. Likamwa","doi":"10.1145/3332165.3347870","DOIUrl":"https://doi.org/10.1145/3332165.3347870","url":null,"abstract":"Current VR/AR systems are unable to reproduce the physical sensation of fluid vessels, due to the shifting nature of fluid motion. To this end, we introduce SWISH, an ungrounded mixed-reality interface, capable of affording the users a realistic haptic sensation of fluid behaviors in vessels. The chief mechanism behind SWISH is in the use of virtual reality tracking and motor actuation to actively relocate the center of gravity of a handheld vessel, emulating the moving center of gravity of a handheld vessel that contains fluid. In addition to solving challenges related to reliable and efficient motor actuation, our SWISH designs place an emphasis on reproducibility, scalability, and availability to the maker culture. Our virtual-to-physical coupling uses Nvidia Flex's Unity integration for virtual fluid dynamics with a 3D printed augmented vessel containing a motorized mechanical actuation system. To evaluate the effectiveness and perceptual efficacy of SWISH, we conduct a user study with 24 participants, 7 vessel actions, and 2 virtual fluid viscosities in a virtual reality environment. In all cases, the users on average reported that the SWISH bucket generates accurate tactile sensations for the fluid behavior. This opens the potential for multi-modal interactions with programmable fluids in virtual environments for chemistry education, worker training, and immersive entertainment.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131336076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive 360-Degree Glasses-Free Tabletop 3D Display","authors":"Motohiro Makiguchi, Daisuke Sakamoto, H. Takada, Kengo Honda, T. Ono","doi":"10.1145/3332165.3347948","DOIUrl":"https://doi.org/10.1145/3332165.3347948","url":null,"abstract":"We present an interactive 360-degree tabletop display system for collaborative work around a round table. Users are able to see 3D objects on the tabletop display anywhere around the table without 3D glasses. The system uses a visual perceptual mechanism for smooth motion parallax in the horizontal direction with fewer projectors than previous works. A 360-degree camera mounted above the table and image recognition software detects users' positions around the table and the heights of their faces (eyes) as they move around the table in real-time. Those mechanics help display correct vertical and horizontal direction motion parallax for different users simultaneously. Our system also has a user interaction function with a tablet device that manipulates 3D objects displayed on the table. These functions support collaborative work and communication between users. We implemented a prototype system and demonstrated the collaborative features of the 360-degree tabletop display system.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133995353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 1B: Software and Hardware Development","authors":"Elena L. Glassman","doi":"10.1145/3368370","DOIUrl":"https://doi.org/10.1145/3368370","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123064072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 3A: Soft, Silky, Stretchy","authors":"A. Olwal","doi":"10.1145/3368373","DOIUrl":"https://doi.org/10.1145/3368373","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121664236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TilePoP","authors":"Shan-Yuan Teng, Cheng-Lung Lin, C. Chiang, Tzu-Sheng Kuo, Liwei Chan, Da-Yuan Huang, Bing-Yu Chen","doi":"10.1145/3332165.3347958","DOIUrl":"https://doi.org/10.1145/3332165.3347958","url":null,"abstract":"We present TilePoP, a new type of pneumatically-actuated interface deployed as floor tiles which dynamically pop up by inflating into large shapes constructing proxy objects for whole-body interactions in Virtual Reality. TilePoP consists of a 2D array of stacked cube-shaped airbags designed with specific folding structures, enabling each airbag to be inflated into a physical proxy and then deflated down back to its original tile shape when not in use. TilePoP is capable of providing haptic feedback for the whole body and can even support human body weight. Thus, it allows new interaction possibilities in VR. Herein, the design and implementation of TilePoP are described in detail along with demonstrations of its applications and the results of a preliminary user evaluation conducted to understand the users' experience with TilePoP.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122091081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}