Insu Kim, Keunwoo Park, Youngwoo Yoon, and Geehyuk Lee. "Touch180." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266091

Abstract: We present Touch180, a computer-vision-based system for identifying fingers on a mobile touchscreen using a fisheye camera and a deep learning algorithm. As a proof of concept, this paper focuses on robust, highly accurate finger identification. We generated a new dataset for the Touch180 configuration, named Fisheye180, and trained a convolutional neural network (CNN) that uses touch locations as auxiliary inputs. With this dataset and algorithm, finger identification achieves 98.56% accuracy with a VGG16 model. Our study serves as a stepping stone for finger identification on mobile touchscreens.
Shio Miyafuji, Soichiro Toyohara, Toshiki Sato, and H. Koike. "DisplayBowl." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266114

Abstract: We introduce DisplayBowl, a bowl-shaped hemispherical display for showing omnidirectional images. DisplayBowl offers three ways of viewing an omnidirectional image. Users can observe the image by looking at it from above; they can see it from a first-person viewpoint by looking into the inside of the hemispherical surface from diagonally above; and, by observing both the inside and the outside of the surface at the same time from obliquely above, they can view it from a pseudo third-person viewpoint, as if watching a drone obliquely from behind. These viewing modes address a problem of conventional displays such as flat screens and head-mounted displays: pilots controlling a remote vehicle such as a drone cannot notice what happens behind it.
Ryo Shirai, Yuichi Itoh, Shori Ueda, and T. Onoye. "OptRod." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3271639

Abstract: In this demonstration, we propose OptRod, which constructs interactive surfaces with multiple functions and flexible shapes using projected images. A PC generates images as control signals and projects them onto the bottoms of OptRods with a projector or LCD. Each OptRod receives the light and converts its brightness into a control signal for the attached output device. Using multiple OptRods, the PC can operate many output devices simultaneously without any signal lines. Moreover, surfaces of various shapes can be arranged easily by combining multiple OptRods. OptRod supports a variety of functions through replaceable device units.
Pascal E. Fortin, Elisabeth Sulmont, and J. Cooperstock. "SweatSponse." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266084

Abstract: Today's smartphone notification systems cannot determine whether a notification has been perceived without explicit interaction from the user. When the system incorrectly assumes that a notification has not been perceived, it may repeat it redundantly, disrupting the user (e.g., a phone that keeps ringing); when it incorrectly assumes that a notification was perceived and therefore fails to repeat it, the notification is missed altogether (e.g., a text message). We introduce SweatSponse, a feedback loop that uses skin conductance responses (SCRs) to infer the perception of smartphone notifications just after their presentation. Early results from a laboratory study suggest that notifications induce SCRs and that these could be used to better infer notification perception in real time.
K. Zempo, Yuichi Mashiba, Takayuki Kawamura, N. Kuratomo, and H. E. B. Salih. "Phonoscape." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266120

Abstract: In this paper, we develop an auditory display method that improves the comprehension of photographs, intended for support systems for people with visual impairment. The auralization method combines object recognition, auditory iconization, and stereophonic techniques. Experiments confirmed improved intelligibility and discriminability compared to an image-to-speech reading-machine method.
Nivedita Arora and G. Abowd. "ZEUSSS." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266108

Abstract: ZEUSSS (Zero Energy Ubiquitous Sound Sensing Surface) allows physical objects and surfaces to be instrumented with a thin, self-sustainable material that provides acoustic sensing and communication capabilities. We built a prototype ZEUSSS tag using minimal hardware and flexible electronic components, extending our original self-sustaining SATURN microphone with a printed, flexible antenna that supports passive communication via analog backscatter. ZEUSSS gives objects ubiquitous, wire-free, battery-free, audio-based context sensing, interaction, and surveillance capabilities.
D. Rompapas, C. Sandor, Alexander Plopski, Daniel Saakes, Dong Hyeok Yun, Takafumi Taketomi, and H. Kato. "HoloRoyale." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3271637

Abstract: Recent years have seen an explosion of Augmented Reality (AR) experiences for consumers. These experiences can be classified by the scale of the interactive area (room vs. city/global scale) or by the fidelity of the experience (high vs. low) [4]. Experiences that target large areas, such as campus or world scale [6], [7], commonly offer only rudimentary interactions with the physical world and suffer from registration errors and jitter; we classify these as large scale and low fidelity. On the other hand, various room-sized experiences [5], [8] feature realistic interaction of virtual content with the real world; we classify these as small scale and high fidelity.
M. Mackeprang, Johann Strama, Gerold Schneider, Philipp Kuhnz, J. Benjamin, and Claudia Müller-Birn. "Kaleidoscope." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266106

Abstract: Evaluating and selecting ideas is a critical and time-consuming step in collaborative ideation, making computational support for this task a desirable research goal. However, existing automatic approaches to idea selection might eliminate valuable ideas. In this work, we combine automatic approaches with human sensemaking. Kaleidoscope is an exploratory data-analytics tool based on semantic technologies that supports users in interactively exploring and annotating existing ideas. We present Kaleidoscope's key design principles and, based on qualitative feedback collected on a prototype, identify potential improvements and describe future work.
Tongda Xu, Dinglu Wang, and Xiaohui You. "Mindgame." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3266083

Abstract: This paper presents Mindgame, a neurofeedback mindfulness system optimized with reinforcement learning. To avoid the potential bias and difficulty of hand-designing the mapping between neural signals and output, we adopt a trial-and-error learning method to discover the preferred mapping. In a pilot study, we assessed the effectiveness of Mindgame in modulating participants' EEG alpha band; all participants' alpha bands changed in the desired direction.
Junichi Yamaoka, K. Nozawa, S. Asada, Ryuma Niiyama, Yoshihiro Kawahara, and Y. Kakehi. "AccordionFab." In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, October 11, 2018. https://doi.org/10.1145/3266037.3271636

Abstract: In this paper, we propose a method for creating 3D inflatable objects by laminating plastic layers. AccordionFab is a fabrication method with which users can rapidly prototype multi-layered inflatable structures on a common laser cutter. Our key finding is that the two uppermost sheets of a stack of plastic sheets can be selectively welded by defocusing the laser and inserting heat-resistant paper below the desired welding layer. As contributions, we investigated the optimal lens-to-workpiece distance for cutting and welding, developed an attachment that supports the welding process, devised a mechanism for changing the thickness and bending angle of multi-layered objects, and created simulation software. Using these techniques, users can create various prototypes, such as personal furniture that fits the user's body and packing containers that fit their contents.