{"title":"PickRing: seamless interaction through pick-up detection","authors":"Katrin Wolf, Jonas Willaredt","doi":"10.1145/2735711.2735792","DOIUrl":"https://doi.org/10.1145/2735711.2735792","url":null,"abstract":"We are frequently switching between devices, and currently we have to unlock most of them. Ideally such devices should be seamlessly accessible and not require an unlock action. We introduce PickRing, a wearable sensor that allows seamless interaction with devices through predicting the intention to interact with them through the device's pick-up detection. A cross-correlation between the ring and the device's motion is used as basis for identifying the intention of device usage. In an experiment, we found that the pick-up detection using PickRing cost neither additional effort nor time when comparing it with the pure pick-up action, while it has more hedonic qualities and is rated to be more attractive than a standard smartphone technique. Thus, PickRing can reduce the overhead in using device through seamlessly activating mobile and ubiquitous computers.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129746236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive instant replay: sharing sports experience using 360-degrees spherical images and haptic sensation based on the coupled body motion","authors":"Yusuke Mizushina, Wataru Fujiwara, Tomoaki Sudou, C. Fernando, K. Minamizawa, S. Tachi","doi":"10.1145/2735711.2735778","DOIUrl":"https://doi.org/10.1145/2735711.2735778","url":null,"abstract":"We propose \"Interactive Instant Replay\" system that the user can experience previously recorded sports play with 360-degrees spherical images and haptic sensation. The user wears a HMD, holds a Haptic Racket and experience the first person sports play scene with his own coupled body motion. The system proposed in this paper could be integrated with existing television broadcasting data that can be used in large sports events such as 2020 Olympic, to experience the same sports play experience at home.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130272913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable text input interface using touch typing skills","authors":"Kazuya Murao","doi":"10.1145/2735711.2735779","DOIUrl":"https://doi.org/10.1145/2735711.2735779","url":null,"abstract":"A lot of systems and devices for text input in wearable computing environment have been proposed and released thus far, while these are not commonly used due to drawbacks such as slow input speed, long training period, low usability, and low wearability. This paper proposes a wearable text input device using touch typing skills that would have been acquired for full-size keyboard. Users who have touch typing skills can input texts without training.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130288336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How much do you read?: counting the number of words a user reads using electrooculography","authors":"K. Kunze, Katsutoshi Masai, Yuji Uema, M. Inami","doi":"10.1145/2735711.2735832","DOIUrl":"https://doi.org/10.1145/2735711.2735832","url":null,"abstract":"We read to acquire knowledge. Reading is a common activity performed in transit and while sitting, for example during commuting to work or at home on the couch. Although reading is associated with high vocabulary skills and even with increased critical thinking, we still know very little about effective reading habits. In this paper, we argue that the first step to understanding reading habits in real life we need to quantify them with affordable and unobtrusive technology. Towards this goal, we present a system to track how many words a user reads using electrooculography sensors. Compared to previous work, we use active electrodes with a novel on-body placement optimized for both integration into glasses (or head-worn eyewear etc) and for reading detection. Using this system, we present an algorithm capable of estimating the words read by a user, evaluate it in an user independent approach over experiments with 6 users over 4 different devices (8\" and 9\" tablet, paper, laptop screen). 
We achieve an error rate as low as 7% (based on eye motions alone) for the word count estimation (std = 0.5%).","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126594764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a novel finger exoskeleton with a sliding six-bar joint mechanism","authors":"Mahasak Surakijboworn, Wittaya Wannasuphoprasit","doi":"10.1145/2735711.2735837","DOIUrl":"https://doi.org/10.1145/2735711.2735837","url":null,"abstract":"The objective of the paper is to propose a novel design of a finger exoskeleton. The merit of the work is that the proposed mechanism is expected to eliminate interference and translational force on a finger. The design consists of 3 identical joint mechanisms which, for each, adopts a six-bar RCM as an equivalent revolute joint incorporating with 2 prismatic joints to form a close-chain structure with a finger joint. Cable and hose transmission is designed to reduce burden from prospective driving modules. As a result, the prototype coherently follows finger movement throughout full range of motion for every size of fingers. This prototype is a part of the research that will be used in hand rehabilitation.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130694071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unconscious learning of speech sounds using mismatch negativity neurofeedback","authors":"Ming Chang, H. Iizuka, Y. Naruse, H. Ando, T. Maeda","doi":"10.1145/2735711.2735827","DOIUrl":"https://doi.org/10.1145/2735711.2735827","url":null,"abstract":"Learning the speech sounds of a foreign language is difficult for adults, and often requires significant training and attention. For example, native Japanese speakers are usually unable to differentiate between the \"l\" and \"r\" sounds in English; thus, words like \"light\" and \"right\" are hardly discriminated. We previously showed that the discrimination ability for similar pure tones can be improved unconsciously using neurofeedback (NF) training with mismatch negativity (MMN), but it is not clear whether it can improve discrimination of the speech sounds of words. We examined whether MMN Neurofeedback is effective in helping native Japanese speakers discriminate 'light' and 'right' in English. Participants seemed to unconsciously improve significantly in speech sound discrimination through NF training without attention to the auditory stimuli or awareness of what was to be learnt. Individual word sound recognition also improved significantly. Furthermore, our results indicate a lasting effect of NF training.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115827858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PukuPuCam: a recording system from third-person view in scuba diving","authors":"Masaharu Hirose, Yuta Sugiura, K. Minamizawa, M. Inami","doi":"10.1145/2735711.2735813","DOIUrl":"https://doi.org/10.1145/2735711.2735813","url":null,"abstract":"In this paper, we propose \"PukuPuCam\" system, an apparatus to record one's diving experience from a third-person view, allowing the user to recall the experience at a later time. \"PukuPuCam\" continuously captures the center of the user's view point, by attaching a floating camera to the user's body using a string. With this simple technique, it is possible to maintain the same viewpoint regardless of the diving speed or the underwater waves. Therefore, user can dive naturally without being conscious about the camera. The main aim of this system is to enhance the diving experiences by recording user's unconscious behaviour and interactions with the surrounding environment.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126174446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extracting users' intended nuances from their expressed movements: in quadruple movements","authors":"T. Komatsu, Chihaya Kuwahara","doi":"10.1145/2735711.2735799","DOIUrl":"https://doi.org/10.1145/2735711.2735799","url":null,"abstract":"We propose a method for extracting users' intended nuances from their expressed quadruple movements. Specifically, this method can quantify such nuances as a four dimensional vector representation {sharpness, softness, dynamics, largeness}. We then show an example of a music application based on this method that changes the volume of assigned music tracks in accordance with each attribute of the vector representation extracted from their quadruple movements like a music conductor.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131757998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A wearable stimulation device for sharing and augmenting kinesthetic feedback","authors":"Jun Nishida, Kanako Takahashi, Kenji Suzuki","doi":"10.1145/2735711.2735775","DOIUrl":"https://doi.org/10.1145/2735711.2735775","url":null,"abstract":"In this paper, we introduce a wearable stimulation device that is capable of simultaneously achieving functional electrical stimulation (FES) and the measurement of electromyogram (EMG) signals. We also propose dynamically adjustable frequency stimulation over a wide range of frequencies (1-150Hz), which allows the EMG-triggered FES device to be used in various scenarios. The developed prototype can be used not only as social playware for facilitating touch communications but also as a tool for virtual experiences such as hand tremors in Parkinson's disease, and an assistive tool for sports training. The methodology, preliminarily experiments, and potential applications are described in this paper.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130337198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A mobile augmented reality system to enhance live sporting events","authors":"Samantha Bielli, C. G. Harris","doi":"10.1145/2735711.2735836","DOIUrl":"https://doi.org/10.1145/2735711.2735836","url":null,"abstract":"Sporting events broadcast on television or through the internet are often supplemented with statistics and background information on each player. This information is typically only available for sporting events followed by a large number of spectators. Here we describe an Android-based augmented reality (AR) tool built on the Tesseract API that can store and provide augmented information about each participant in nearly any sporting event. This AR tool provides for a more engaging spectator experience for viewing professional and amateur events alike. We also describe the preliminary field tests we have conducted, some identified limitations of our approach, and how we plan to address each in future work.","PeriodicalId":246615,"journal":{"name":"Proceedings of the 6th Augmented Human International Conference","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114699152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}