{"title":"A Match Made in Heaven: Streaming Real-time Imagery from a Lightfield Camera to a Lightfield Display","authors":"Alex N. Hornstein, Evan Moore, Kai-han Chang","doi":"10.1145/3332167.3356899","DOIUrl":"https://doi.org/10.1145/3332167.3356899","url":null,"abstract":"We are demonstrating a novel design of a live lightfield camera capturing a scene and showing the captured lightfield in realtime in a lightfield display. The simple, distributed design of the camera allows for low-cost construction of an array of 2D cameras that captures high quality, artifact-free imagery of the most challenging of subjects. This camera takes advantage of the natural duality of outside-in lightfield cameras with inside-out lightfield displays, letting us render complex lightfield imagery with a minimum of processing power.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125016165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Approach to Studying Sleep in Autonomous Vehicles: Simulating the Waking Situation","authors":"Won Kim, Seungjun Kim","doi":"10.1145/3332167.3357098","DOIUrl":"https://doi.org/10.1145/3332167.3357098","url":null,"abstract":"In this paper, we present a novel methodology for simulating the physical and cognitive demands that individuals experience when waking from sleep. Better understanding this scenario has significant implications for research in Autonomous Vehicles (AV), where prior research has shown that many drivers would like to sleep while the vehicle is in operation. Our experiment setup replicates the waking situation in two ways: (1) Subjects wear a sleep shade (physical demand) for 3 sessions (5min, 8min, and 11min) in randomly assigned order, after which (2) they view a screen (cognitive demand) that fades from blurry to clear over a 10s-timeframe. We compared subjects' experiences in-study to the physical and cognitive conditions they experience when waking in real life. Our experiment setup was highly rated in effectiveness and appropriateness for alternating sleeping situation. Findings will be utilized as scenario design in future AV studies and can be adopted in other fields, as well.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"292 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114949100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Focused Object using Smooth Pursuit Eye Movements and Interest Points in the Real World","authors":"Yuto Tamura, K. Takemura","doi":"10.1145/3332167.3357102","DOIUrl":"https://doi.org/10.1145/3332167.3357102","url":null,"abstract":"User calibration is a significant problem in eye-based interaction. To overcome this, several solutions, such as the calibration-free method and implicit user calibration, have been proposed. Pursuits-based interaction is another such solution that has been studied for public screens and virtual reality. It has been applied to select graphical user interfaces (GUIs) because the movements in a GUI can be designed in advance. Smooth pursuit eye movements (smooth pursuits) occur when a user looks at objects in the physical space as well and thus, we propose a method to identify the focused object by using smooth pursuits in the real world. We attempted to determine the focused objects without prior information under several conditions by using the pursuits-based approach and confirmed the feasibility and limitations of the proposed method through experimental evaluations.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122341028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Universal Evaluation of Image Annotation Interfaces","authors":"Andrew M. Vernier, Jean Y. Song, Edward Sun, A. Kench, Walter S. Lasecki","doi":"10.1145/3332167.3357122","DOIUrl":"https://doi.org/10.1145/3332167.3357122","url":null,"abstract":"To guide the design of interactive image annotation systems that generalize to new domains and applications, we need ways to evaluate the capabilities of new annotation tools across a range of different types of image, content, and task domains. In this work, we introduce Corsica, a test harness for image an- notation tools that uses calibration images to evaluate a tool's capabilities on general image properties and task requirements. Corsica is comprised of sets of three key components: 1) synthesized images with visual elements that are not domain- specific, 2) target microtasks that connects the visual elements and tools for evaluation, and 3) ground truth data for each mi- crotask and visual element pair. By introducing a specification for calibration images and microtasks, we aim to create an evolving repository that allows the community to propose new evaluation challenges. Our work aims to help facilitate the robust verification of image annotation tools and techniques.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122559458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending AR Interaction through 3D Printed Tangible Interfaces in an Urban Planning Context","authors":"Marla Narazani, Chloe Eghtebas, G. Klinker, Sarah L. Jenney, Michael Mühlhaus, F. Petzold","doi":"10.1145/3332167.3356891","DOIUrl":"https://doi.org/10.1145/3332167.3356891","url":null,"abstract":"Embedding conductive material into 3D printed objects enables non-interactive objects to become tangible without the need to attach additional components. We present a novel use for such touch-sensitive objects in an augmented reality (AR) setting and explore the use of gestures for enabling different types of interaction with digital and physical content. In our demonstration, the setting is an urban planning scenario. The multi-material 3D printed buildings consist of thin layers of white plastic filament and a conductive wireframe to enable touch gestures. Attendees can either interact with the physical model or with the mobile AR interface for selecting, adding or deleting buildings.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125979806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pre-screen: Assisting Material Screening in Early-stage of Video Editing","authors":"Qian Zhu, Shuai Ma, Cuixia Ma","doi":"10.1145/3332167.3357112","DOIUrl":"https://doi.org/10.1145/3332167.3357112","url":null,"abstract":"Video editing is a difficult task for both professionals and amateur editors. One of the biggest reasons is that screening useful clips from raw material in the early stage of editing is too much time-consuming and laborious. To better understand current difficulties faced by users in editing task, we first conduct a pilot study involving a survey and an interview among 20 participants. Based on the results, we then design Pre-screen, a novel tool to provide users with both global-view and detailed-view video analysis as well as material screening features based on intelligent video processing, analysis and visualization methods. User study shows that Pre-screen can not only effectively help users screen and arrange raw video material to save much more time than a widely used video editing tool in video editing tasks, but also inspire and satisfy users.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132038885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voice Input Interface Failures and Frustration: Developer and User Perspectives","authors":"Shiyoh Goetsu, T. Sakai","doi":"10.1145/3332167.3357103","DOIUrl":"https://doi.org/10.1145/3332167.3357103","url":null,"abstract":"We identify different types of failures in a voice user interface application, from both developer and user perspectives. Our preliminary experiment suggests that user-perceived Pattern Match Failure may have a strong negative effect on user frustration; based on this result, we conduct power analysis to obtain more conclusive results in a future experiment.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lucid Fabrication","authors":"Rundong Tian","doi":"10.1145/3332167.3356881","DOIUrl":"https://doi.org/10.1145/3332167.3356881","url":null,"abstract":"Advances in digital fabrication have created new capabilities and simultaneously reinforced outdated workflows. In my thesis work, I primarily explore alternative workflows for digital fabrication that introduce new capabilities and interactions. Methodologically, I build fabrication systems spanning mechanical design, electronics, and software in order to examine these ideas in specific detail. In this paper, I introduce related work and frame it within the historical context of digital fabrication, and discuss my previous and ongoing work.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116192133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing Accessible 3D Design for the Blind and Visually-Impaired via Tactile Shape Displays","authors":"A. Siu","doi":"10.1145/3332167.3356875","DOIUrl":"https://doi.org/10.1145/3332167.3356875","url":null,"abstract":"Affordable rapid 3D printing technologies have become key enablers of the Maker Movement by giving individuals the ability to create physical finished products. However, existing computer-aided design (CAD) tools that allow authoring and editing of 3D models are mostly visually reliant and limit access to people with blindness and visual impairment (BVI). In this paper I address three areas of research that I will conduct as part of my PhD dissertation towards bridging a gap between blind and sighted makers. My dissertation aims to create an accessible 3D design and printing workflow for BVI people through the use of 2.5D tactile displays, and to rigorously understand how BVI people use the workflow in the context of perception, interaction, and learning.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129458279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demo of AuraRing: Precise Electromagnetic Finger Tracking","authors":"Farshid Salemi Parizi, Eric Whitmire, Alvin Cao, Tianke Li, Shwetak N. Patel","doi":"10.1145/3332167.3356893","DOIUrl":"https://doi.org/10.1145/3332167.3356893","url":null,"abstract":"We present AuraRing, a wearable electromagnetic tracking system for fine-grained finger movement. The hardware consists of a ring with an embedded electromagnetic transmitter coil and a wristband with multiple sensor coils. By measuring the magnetic fields at different points around the wrist, AuraRing estimates the five degree-of-freedom pose of the finger. AuraRing is trained only on simulated data and requires no runtime supervised training, ensuring user and session independence. AuraRing has a resolution of 0.1 mm and a dynamic accuracy of 4.4 mm, as measured through a user evaluation with optical ground truth. The ring is completely self-contained and consumes just 2.3 mW of power.","PeriodicalId":322598,"journal":{"name":"Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132843158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}