Simon Olberding, Kian Peen Yeo, Suranga Nanayakkara, Jürgen Steimle
AugmentedForearm: exploring the design space of a display-enhanced forearm
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459239
Abstract: Recent technical advances allow traditional wristwatches to be equipped with high processing power. Not only do they allow for glancing at the time, but they also allow users to interact with digital information. However, the display space is very limited. Extending the screen to cover the entire forearm is promising: it allows the display to be worn similarly to a wristwatch while providing a large display surface. In this paper we present the design space of a display-augmented forearm, focusing on two specific properties of the forearm: its hybrid nature as a private and a public display surface, and the way clothing influences information display. We show a wearable prototypical implementation along with interactions that instantiate the design space: sleeve-store, sleeve-zoom, public forearm display, and interactive tattoo.

Christopher-Eyk Hrabia, Katrin Wolf, Mathias Wilhelm
Whole hand modeling using 8 wearable sensors: biomechanics for hand pose prediction
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459241
Abstract: Although data gloves allow the human hand to be modeled, they can reduce usability: they cover the entire hand, limiting the sense of touch and restricting hand dexterity. Since modeling the whole hand has many advantages (e.g., for complex gesture detection), we aim to model the whole hand while keeping the hand's natural degrees of freedom (DOF) and tactile sensibility as intact as possible, still allowing manual tasks like grasping tools and devices. We therefore attach motion sensor boards (accelerometer, magnetometer, and gyroscope) to the hand. In a user study we found a biomechanical dependence between the joint angles of the fingertip-close joint (DIP) and the palm-close joint (PIP) of DIP = 0.88 PIP for all four fingers (SD = 0.10, R² = 0.77). This allows the data glove to be reduced to 8 sensor boards for modeling the whole hand: one per finger, three for the thumb, and one on the back of the hand as an orientation baseline. Although we also found a joint-flexing relationship for the thumb, we retained 3 sensor units there because the relationship varied more (R² = 0.59). Our hand model could potentially serve rich hand-model-based gestural interaction, as it covers all 26 DOF of the human hand.

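The biomechanical relation above (DIP = 0.88 PIP) is what lets one sensor board per finger suffice: the DIP angle is inferred rather than measured. A minimal sketch of that inference, with hypothetical function names and angles in degrees:

```python
# Sketch of the reported joint coupling: given a measured PIP flexion angle,
# estimate the DIP angle as DIP = 0.88 * PIP. Function names and the data
# layout are illustrative assumptions, not the paper's implementation.

DIP_PIP_RATIO = 0.88  # reported mean ratio across the four fingers (SD = 0.10)

def estimate_dip(pip_angle_deg: float) -> float:
    """Estimate the DIP joint angle from the PIP joint angle (degrees)."""
    return DIP_PIP_RATIO * pip_angle_deg

def finger_pose(mcp_deg: float, pip_deg: float) -> dict:
    """Flexion state of one finger from two measured joints; DIP is inferred,
    so a single sensor board per finger is enough."""
    return {"MCP": mcp_deg, "PIP": pip_deg, "DIP": estimate_dip(pip_deg)}

print(finger_pose(30.0, 45.0))  # DIP inferred as 0.88 * 45 ≈ 39.6
```

The same idea did not transfer cleanly to the thumb (R² = 0.59 per the abstract), which is why the authors keep three physical sensor units there instead of inferring.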
Ionut Damian, M. Obaid, Felix Kistler, E. André
Augmented reality using a 3D motion capturing suit
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459277
Abstract: In this paper, we propose an approach that immerses the user in an Augmented Reality (AR) environment using an inertial motion capturing suit and a head-mounted display. The proposed approach allows full-body interaction with the AR environment in real time and does not require any markers or cameras.

Yuzuko Utsumi, Yuya Kato, K. Kunze, M. Iwamura, K. Kise
Who are you?: A wearable face recognition system to support human memory
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459262
Abstract: Have you ever been unable to remember the name of a person you are meeting again? To avoid such an awkward situation, it would help to have a system that discreetly tells you the person's name. In this paper, we propose a wearable real-time face recognition system to support human memory. The contributions of our work are as follows: (1) We discuss the design and implementation details of a wearable system capable of augmenting human memory through vision-based real-time face recognition. (2) We propose a two-step, coarse-to-fine recognition approach to bring execution time within the socially acceptable limit of 900 ms. (3) In experiments, we evaluate computational time and recognition rate. The proposed system recognized a face in 238 ms, with a cumulative recognition rate of 93.3% at rank 10. Computational time with the coarse-to-fine search was 668 ms less than without it, showing that the proposed system can recognize faces in real time.

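The two-step search described above can be illustrated in miniature: a cheap coarse comparison shortlists gallery candidates, and only the shortlist pays for a full comparison. Everything below (feature dimensions, shortlist size, synthetic data) is an assumption for illustration, not the paper's implementation:

```python
# Toy coarse-to-fine nearest-neighbor search: the coarse pass compares only a
# few feature dimensions to prune the gallery; the fine pass ranks the
# survivors with the full descriptor.
import numpy as np

def coarse_to_fine_match(query, gallery, coarse_dims=8, shortlist=5):
    """Return gallery indices ranked by fine distance, searched coarse-first."""
    # Coarse pass: cheap distance on the first few dimensions only.
    coarse_d = np.linalg.norm(gallery[:, :coarse_dims] - query[:coarse_dims], axis=1)
    candidates = np.argsort(coarse_d)[:shortlist]
    # Fine pass: full-dimensional distance, restricted to the shortlist.
    fine_d = np.linalg.norm(gallery[candidates] - query, axis=1)
    return candidates[np.argsort(fine_d)]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 64))                    # 100 enrolled faces, 64-dim features
query = gallery[42] + rng.normal(scale=0.01, size=64)   # noisy probe of identity 42
print(coarse_to_fine_match(query, gallery)[0])          # index of the true identity: 42
```

The saving comes from the fine pass touching only `shortlist` rows instead of the whole gallery, which mirrors how the paper's coarse-to-fine search cuts 668 ms off the full search.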
S. Yoshida, T. Tanikawa, Sho Sakurai, M. Hirose, Takuji Narumi
Manipulation of an emotional experience by real-time deformed facial feedback
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459243
Abstract: The main goals of this paper are to assess the efficacy of computer-generated emotion and to establish a method for integrating emotional experience. The internal mechanisms by which a stimulus evokes an emotion have not been clarified, so there are few reliable techniques for evoking an intended emotion by reproducing this process. In cognitive science, however, altering a bodily response has been shown to unconsciously generate emotions. We therefore hypothesized that emotional experience could be manipulated by having people recognize pseudo-generated facial expressions as changes in their own facial expressions, and we built an emotion-evoking system based on the facial feedback hypothesis. Our results suggest that this system can manipulate an emotional state via visual feedback from artificial facial expressions.

Narihiro Nishimura, Taku Hachisu, Michi Sato, S. Fukushima, H. Kajimoto
Evaluation of a tactile device for augmentation of audiovisual experiences with a pseudo heartbeat
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459282
Abstract: The impression viewers form of characters strongly affects their opinion of audiovisual media such as movies, television, and video games. In particular, when we feel affection toward characters, we sometimes go so far as to identify with them, leading to deep immersion in the content. Content technology that can steer affective feelings toward characters could therefore create an immersive environment. We propose a device that fosters the user's affection by enhancing their positive feelings toward characters in media content. Previous studies have shown that emotional or physiological states can be altered by the visual and auditory presentation of false heartbeats [1, 2, 3]. However, if these techniques are applied to audiovisual media such as movies, television, or video games, the audio and visual heartbeat cues may interfere with and pollute the audiovisual content.

Yasutoshi Makino, T. Maeno
Paired vibratory stimulation for haptic feedback
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459245
Abstract: In this paper, we present a haptic feedback method named "Paired Vibratory Stimulation." We use two vibrators: one attached to the device and the other to the fingernail. When the two vibrators are driven at different but close frequencies, a beat vibration occurs only when the finger touches the device. A person can feel the beat vibration even when each original vibration alone is hard to perceive, so the system can deliver a vibratory sensation only at the contact area. This is well suited to haptic feedback on a handheld mobile device: the sensation arises only at the contact area, not in the holding hand or at the fingernail. The method also applies to skin-interface systems; some researchers have recently proposed systems that use the surface of the human skin as an input medium, and our method is well suited to providing vibratory haptic feedback in that setting. Our experimental results show that Paired Vibratory Stimulation can be realized and applied to a human skin interface system.

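The physical principle behind the method is the beat phenomenon: two superimposed vibrations at close frequencies f1 and f2 produce an amplitude envelope that pulses at |f1 - f2| Hz, and that slow pulsing is perceivable where the fast carriers alone are not. A toy sketch with illustrative frequencies (not values from the paper):

```python
# Superimposing two close-frequency sinusoids: the sum pulses at the beat
# rate |f1 - f2|. The frequencies and sample rate are illustrative choices.
import numpy as np

def beat_frequency(f1_hz: float, f2_hz: float) -> float:
    """Perceived pulse rate of two superimposed close-frequency vibrations."""
    return abs(f1_hz - f2_hz)

fs = 8000                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of signal
f1, f2 = 200.0, 205.0          # e.g., device vibrator and fingernail vibrator
combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The envelope of `combined` peaks once per beat period, i.e. 5 times here.
print(beat_frequency(f1, f2))  # 5.0
```

Since the two carriers only superimpose at the point of contact, the beat (and hence the perceivable sensation) is confined to wherever the finger touches the device, which is the core of the technique.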
Sinziana Mazilu, Ulf Blanke, D. Roggen, G. Tröster, Eran Gazit, Jeffrey M. Hausdorff
Engineers meet clinicians: augmenting Parkinson's disease patients to gather information for gait rehabilitation
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459257
Abstract: Many people with Parkinson's disease suffer from freezing of gait, a debilitating temporary inability to continue walking. Rehabilitation with wearable technology is promising, but state-of-the-art approaches have difficulty providing the needed bio-feedback with sufficiently low latency and high accuracy, as they rely solely on the crude analysis of movement patterns that commercial motion sensors allow. Yet the medical literature hints at more sophisticated approaches. In this work we present a first step to address this with a rich multimodal approach combining physical and physiological sensors. We present experimental recordings conducted on 18 patients with 35 motion sensors and 3 physiological sensors, collecting 23 hours of data, and we provide best practices for robust data collection that takes into account the requirements of real-world patients. Finally, we show evidence from a user questionnaire that the system is minimally invasive, and that a multimodal view can leverage cross-modal correlations for detection, or even prediction, of gait-freeze episodes.

K. Horita, H. Sasaki, H. Koike, Kris Kitani
Experiencing the ball's POV for ballistic sports
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459258
Abstract: We place a small wireless camera inside an American football to capture the ball's point of view during flight and augment a spectator's experience of the game. To this end, we propose a robust video synthesis algorithm that leverages the unique constraints of fast-spinning cameras to obtain a stabilized bird's-eye point-of-view video clip. Our algorithm uses a coarse-to-fine image homography computation technique to progressively register images. We then optimize an energy function defined over pixel-wise color similarity and distance to image borders to find optimal image seams for creating panoramic composite images. Our results show that we can generate realistic videos from a camera spinning at speeds of up to 600 RPM.

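Progressive registration of a spinning camera can be sketched by composing per-frame homographies into a single mapping back to a reference frame. The synthetic in-plane rotations below stand in for the homographies the paper estimates from images; the function names are illustrative:

```python
# Chaining pairwise homographies: if H_i maps frame i+1 into frame i, then
# the product H_0 @ H_1 @ ... maps the last frame into frame 0's coordinates.
# A purely spinning camera is modeled here as in-plane rotation homographies.
import numpy as np

def rotation_homography(theta):
    """Pure in-plane rotation as a 3x3 homography (a spinning camera)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def compose_to_reference(pairwise):
    """Compose pairwise homographies into one mapping to the reference frame."""
    H = np.eye(3)
    for Hi in pairwise:
        H = H @ Hi
    return H

# Nine frames of a camera spinning 10 degrees per frame step:
step = np.deg2rad(10.0)
pairwise = [rotation_homography(step)] * 9
H_total = compose_to_reference(pairwise)

# Nine composed 10-degree steps equal one 90-degree rotation.
assert np.allclose(H_total, rotation_homography(np.deg2rad(90.0)))
```

Real footage adds the hard parts the paper addresses (estimating each H_i coarse-to-fine from blurred, fast-moving frames, and choosing seams for the panorama), but the composition step is the backbone of mapping every frame into one stabilized view.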
Eduardo Velloso, A. Bulling, Hans-Werner Gellersen, Wallace Ugulino, H. Fuks
Qualitative activity recognition of weight lifting exercises
Augmented Human International Conference (AH '13), 2013-03-07. DOI: 10.1145/2459236.2459256
Abstract: Research on activity recognition has traditionally focused on discriminating between different activities, i.e. predicting which activity was performed at a specific point in time. The quality of executing an activity, the "how well", has so far received little attention, even though it potentially provides useful information for a wide variety of applications. In this work we define quality of execution and investigate three aspects of qualitative activity recognition: specifying correct execution, detecting execution mistakes, and providing feedback to the user. We illustrate our approach on the problem of qualitatively assessing, and providing feedback on, weight lifting exercises. In two user studies we test a sensor-based and a model-based approach to qualitative activity recognition. Our results underline the potential of model-based assessment and the positive impact of real-time user feedback on the quality of execution.

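One minimal reading of the model-based assessment above: encode correct execution as allowed ranges for key measurements, then flag each repetition that leaves those ranges. The exercise specification, joint names, and thresholds below are invented for illustration; the paper's models are richer:

```python
# Toy qualitative assessment of one biceps-curl repetition: the "model" is a
# spec of allowed elbow-angle extremes, and mistakes are range violations.
# All names and numbers here are illustrative assumptions.

CORRECT_CURL = {
    "elbow_min_deg": (0, 20),     # full extension should bottom out in this range
    "elbow_max_deg": (140, 170),  # full flexion should peak in this range
}

def assess_repetition(elbow_trace, spec=CORRECT_CURL):
    """Return the list of detected mistakes for one repetition (empty if clean)."""
    mistakes = []
    lo, hi = min(elbow_trace), max(elbow_trace)
    if not spec["elbow_min_deg"][0] <= lo <= spec["elbow_min_deg"][1]:
        mistakes.append("incomplete extension")
    if not spec["elbow_max_deg"][0] <= hi <= spec["elbow_max_deg"][1]:
        mistakes.append("incomplete flexion")
    return mistakes

print(assess_repetition([10, 60, 150, 60, 12]))  # []
print(assess_repetition([40, 80, 120, 90, 45]))  # ['incomplete extension', 'incomplete flexion']
```

Returning named mistakes rather than a bare pass/fail is what makes real-time feedback possible: the user can be told what to correct, which matches the feedback aspect the abstract highlights.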