{"title":"Flying sports assistant: external visual imagery representation for sports training","authors":"Keita Higuchi, Tetsuro Shimada, J. Rekimoto","doi":"10.1145/1959826.1959833","DOIUrl":"https://doi.org/10.1145/1959826.1959833","url":null,"abstract":"Mental imagery is a quasi-perceptual experience emerging from past experiences. In sports psychology, mental imagery is used to improve athletes' cognition and motivation. Eminent athletes often create their mental imagery as if they themselves are the external observers; such ability plays an important role in sport training and performance. Mental image visualization refers to the representation of external vision containing one's own self from the perspective of others. However, without technological support, it is difficult to obtain accurate external visual imagery during sports. In this paper, we have proposed a system that has an aerial vehicle (a quadcopter) to capture athletes' external visual imagery. The proposed system integrates various sensor data to autonomously track the target athlete and compute camera angle and position. The athlete can see the captured image in realtime through a head mounted display, or more recently through a hand-held device. We have applied this system to support soccer and other sports and discussed how the proposed system can be used during training.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123369191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart glasses linking real live and social network's contacts by face recognition","authors":"Martin Kurze, Axel Roselius","doi":"10.1145/1959826.1959857","DOIUrl":"https://doi.org/10.1145/1959826.1959857","url":null,"abstract":"Imagine you participate in a big meeting with several people remotely known to you. You remember their faces but not their names. This is where \"Smart Glasses\" supports you: Smart Glasses consist of a (wearable) display, a tiny camera, some local processing power and an uplink to a backend service. The current implementation is based on Android and runs on smartphones, early research prototypes with different types of wearable displays have been evaluated as well. The system executes face detection and face tracking locally on the device (e.g. smartphone) and then links to the service running in the cloud to perform the actual face recognition based on the user's personal contact list (gallery). Recognized and identified persons are then displayed with names and latest social network activities.\u0000 The approach is directed towards an AR ecosystem for mobile use. Therefore, open interfaces on the device are provided as well as to the service backend. We intend to take today's location based AR systems one step further towards computer vision based AR to really fit the needs of today's and tomorrow's users.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123165701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training support system for violin bowing","authors":"Yuuki Tanjo, J. Ogawa, S. Ito, R. Sakamoto, Ichiro Umata, H. Ando","doi":"10.1145/1959826.1959863","DOIUrl":"https://doi.org/10.1145/1959826.1959863","url":null,"abstract":"The purpose of this paper is to propose a multimodal data viewer for teaching the violin. There are many studies on motor skills with multimodal data captured from motion capture systems. Using normal motion capture data alone, however, it is difficult to give explanations when experts teach their skills to beginners. For example, not only the motion of the right arm and wrist but also shifting the pressure on the strings with the bow is a critical skill to master when playing the violin. The shifting pressures can be obtained by strain gauge sensors. In this paper, we propose a system designed to provide training support with multimodal data by using composed visualizing motion data and other sensor data such as a strain gauge. As an example, we show a teaching violin support system and experiment data.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131163425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FlexTorque, FlexTensor, and HapticEye: exoskeleton haptic interfaces for augmented interaction","authors":"D. Tsetserukou","doi":"10.1145/1959826.1959859","DOIUrl":"https://doi.org/10.1145/1959826.1959859","url":null,"abstract":"In order to realize haptic interaction (e.g., holding, pushing, and contacting the object) in virtual environment and mediated haptic communication with human beings (e.g., handshaking), the force feedback is required. Recently there has been a substantial need and interest in haptic displays, which can provide realistic and high fidelity physical interaction in virtual environment. The aim of our research is to implement wearable haptic displays for presentation of realistic feedback (kinesthetic stimulus) to the human arm. We developed wearable devices FlexTorque and FlexTensor that induce forces to the human arm and do not require holding any additional haptic interfaces in the human hand. It is a new technology for Virtual Reality that allows user to explore surroundings freely. The concept of Karate (empty hand) Haptics proposed by us is opposite to conventional interfaces (e.g., Wii Remote, SensAble's PHANTOM, SPIDAR [1]) that require holding haptic interface in the hand, restricting thus the motion of the fingers in midair. The HapticEye interface allows the blind person to explore the unknown environment in a natural and effective manner. The wearer can literally see the environment by hand.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134461513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The emotional economy for the augmented human","authors":"J. Seigneur","doi":"10.1145/1959826.1959850","DOIUrl":"https://doi.org/10.1145/1959826.1959850","url":null,"abstract":"Happiness research findings are increasingly being taken into account in standard economics. However, most findings are based on a posteriori surveys trying to infer how happy people have been. In this paper, we argue that the advances in wearable computing, especially brain-computer interfaces, can lead to realtime measurements of happiness. We then propose a new kind of economy model where people pay depending on the emotions they have experienced. We have combined current commercial-on-the-shelf software and hardware components to create a proof-of-concept of the model.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"485 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132194506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing the sports prosthetic leg","authors":"S. Yamanaka, Yuki Tsuji, M. Higaki, Hideka Suzuki","doi":"10.1145/1959826.1959841","DOIUrl":"https://doi.org/10.1145/1959826.1959841","url":null,"abstract":"From a prosthesis hidden under clothing to a one comes on spotlight. Our common recognition is changing through sports. For amputee's more beautiful form in running, we've developed prostheses specially focused on usability, exterior, and safety. Here we'd like to introduce how we've designed the prosthesis for lower limb, knee joints and air stabilizer for the carbon fiber foot.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126323511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Homunculus: the vehicle as augmented clothes","authors":"Yoichi Ochiai, Keisuke Toyoshima","doi":"10.1145/1959826.1959829","DOIUrl":"https://doi.org/10.1145/1959826.1959829","url":null,"abstract":"In this paper we propose to add a new system with valuable functionalities to vehicles. We call it \"Homunculus\". It is based on a new concept of interactions between humans and vehicles. It promotes and augments nonverbal communicability of humans in the vehicles.\u0000 It is difficult to communicate with the drivers in the vehicles by eye contact, hand gestures or touching behavior. Our \"Homunculus\" is a system to solve these problems. The instruments of \"Homunculus\" are composed of three system modules. The First is Robotic Eyes System which is a set of robotic eyes that follows drivers eye movements & head rotations. The Second is Projection System which shows drivers hand gestures on the road. The Third is Haptic Communication System which consists of IR Distance Sensors Array on the vehicle and Vibration motors attached to the driver. It gives drivers the haptic sense to approaching objects to the vehicle. These three Systems are set on vehicle's hood or side.\u0000 We propose the situation that humans and vehicles can be unified as one unit by Homunculus. This system works as a middleman for communications between men and vehicles, people in other cars, or even people just walking the street. We suggest the new relationship of men and their vehicles could be like men and their clothes.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125580414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"View sharing system for motion transmission","authors":"Daisuke Kondo, Keitaro Kurosaki, H. Iizuka, H. Ando, T. Maeda","doi":"10.1145/1959826.1959852","DOIUrl":"https://doi.org/10.1145/1959826.1959852","url":null,"abstract":"We are developing 'view sharing' system for supporting a remote corporative work. The view sharing is constructed from the video-see-through head mounted displays (VST-HMD) and motion trackers. This system allows two users in remote places to share their first-person views each other. The users can share what the other user is seeing, and furthermore the users can correspond their spatial perception, motion and head movement. By sharing those sensations, the non-verbal skills can be transmitted from skilled person to the non-skilled person. Using this system expert in remote place can instruct the non-skilled person to improve task performance.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115985000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ClippingLight: a method for easy snapshots with projection viewfinder and tilt-based zoom control","authors":"Yasuhiro Kajiwara, Keisuke Tajimi, K. Uemura, Nobuchika Sakata, S. Nishida","doi":"10.1145/1959826.1959840","DOIUrl":"https://doi.org/10.1145/1959826.1959840","url":null,"abstract":"In this paper, we present a novel method to take photos with a hand-held camera. Cameras are being used for new purposes in our daily lives these days, such as to augment human memory or scan visual markers (e.g. QR-codes) and opportunities to take snapshots are increasing. However, taking snapshots with today's hand-held camera is troublesome, because its viewfinder forces the user to see the real space through itself, and it requires complicated operation to control zoom levels and press a shutter-release button at the same time. Therefore, we propose ClippingLight that is a combination method of Projection Viewfinder and tilt-based zoom control. It enables to take snapshots with low effort. We implement this method using a prototype of real-world projection camera. We conducted user study to confirm the effect of CippingLight in situations to take photos one after another. As a result, we found that ClippingLight is more comfortable and requires lower effort than today's typical camera when a user takes a photo quickly.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117258455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Vection field\" for pedestrian traffic control","authors":"Hiromi Yoshikawa, Taku Hachisu, S. Fukushima, M. Furukawa, H. Kajimoto","doi":"10.1145/1959826.1959845","DOIUrl":"https://doi.org/10.1145/1959826.1959845","url":null,"abstract":"Visual signs and audio cues are commonly used for pedestrian control in the field of general traffic research. Because pedestrians need to first acquire and then recognize such cues, time delays invariably occur between cognition and action. To better cope with this issue of delays, wearable devices have been proposed to control pedestrians more intuitively. However, the attaching and removing of the devices can be cumbersome and impractical. In this study, we propose a new visual navigation method for pedestrians using a \"Vection Field\" in which the optical flow is presented on the ground. The optical flow is presented using a lenticular lens, a passive optical element that generates a visual stimulus based on a pedestrian's movement without an electrical power supply. In this paper we present a design for the fundamental visual stimulus and evaluate the principle of our proposed method for directional navigation. Results revealed that the optical-flow of a stripe and random-dot pattern displaced pedestrian pathways significantly, and that implementation with a lenticular lens is feasible.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121400577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}