{"title":"KUSUGURI: a shared tactile interface for bidirectional tickling","authors":"M. Furukawa, H. Kajimoto, S. Tachi","doi":"10.1145/2160125.2160134","DOIUrl":"https://doi.org/10.1145/2160125.2160134","url":null,"abstract":"Tickling, a nonverbal form of communication, can provide entertainment. Therefore, tickling is a desirable addition as content as a remote communication method. However, tickling is difficult to realize because it requires both body contact as well as bidirectionality. In this paper, we propose a method of \"Shared Tactile Interface\" which allows sharing of a body part with another user at a distance. The interface has three features: direct contact, transfer of the tickling sensation, and bidirectionality. The first allows users to view another person's finger as if it is directly contacting the user's own palm and moving on the user's palm. The second feature delivers a vibration to the user's palm which generates an illusion and perception of a tickling sensation. The third feature enables bidirectional tickling because one user can also tickle the other user's palm in the same manner. We built prototypes based on this design method, and evaluated the proposed method through two technical exhibitions. The users were able to tickle each other, which confirmed that the design method \"Shared Tactile Interface\" works as expected. 
However, we found issues especially regarding the reliability of the tickling sensation.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128648280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A pointing method using accelerometers for graphical user interfaces","authors":"Tatsuya Horie, T. Terada, Takuya Katayama, M. Tsukamoto","doi":"10.1145/2160125.2160137","DOIUrl":"https://doi.org/10.1145/2160125.2160137","url":null,"abstract":"Graphical User Interfaces (GUIs) are widely used and pointing devices are required to operate most of them. We have proposed Xangle, a pointing method using two accelerometers for wearable computing environments. The cursor is positioned at the intersection of two straight lines, which are synchronized with the angles of the accelerometers at fingers. However, Xangle is difficult to be used in daily-life, when the user frequently changes which part of the body they point with. Therefore, we propose a method of changing the body parts used for pointing according to the situation. Additionally, we proposed a method to accelerate the pointer and a method to layout menu items for Xangle since these methods are suitable for using GUI in wearable computing environments. We confirmed that the proposed method was effective from the results of evaluations.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127870551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realtime sonification of the center of gravity for skiing","authors":"S. Hasegawa, Seiichiro Ishijima, Fumihiro Kato, Hironori Mitake, Makoto Sato","doi":"10.1145/2160125.2160136","DOIUrl":"https://doi.org/10.1145/2160125.2160136","url":null,"abstract":"Control of body position is important in skiing. During turn, novice skiers often lean back and lose their control. Leaning back is natural reaction for people. They arc afraid of the slope or speed. We develop a device to provide realtime sonification feedback of the center of gravity of the skier. The device guides the position of skier. A preliminary experiment shows possibility of improvements that the user become to be able to control their position immediately and even to overcome the afraid of slope and speed.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114598325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmenting on-road perception: enabling smart and social driving with sensor fusion and cooperative localization","authors":"C. Wang, Jennifer Healey, Meiyuan Zhao","doi":"10.1145/2160125.2160146","DOIUrl":"https://doi.org/10.1145/2160125.2160146","url":null,"abstract":"In many ways the car is the most common human augmentation: it increases our speed, renders us more powerful and enables us to reach distances that are otherwise impossible. In this paper, we show how advanced localization systems enable yet another dimension of human augmentation: allowing the driver to visually perceive data streams from other cars. These data streams may contain social messages from other drivers such as \"Follow Me\" or warnings from the sensor systems of the other cars themselves such as \"Distracted Driver!\" We describe both the technical work in progress that makes this system possible as well as the future vision of how this technology will enable smart and social driving through M2M communication with other vehicles that are encountered ad hoc on the road.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129035425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invisible feet under the vehicle","authors":"Yoichi Ochiai, Keisuke Toyoshima","doi":"10.1145/2160125.2160156","DOIUrl":"https://doi.org/10.1145/2160125.2160156","url":null,"abstract":"When we drive a car, we have many blind spots. The information from the outside is almost limited to vision and sound. We have a vision that the driver and the car unified and moves as one[1] to face the problem of the gap of the information between the outside of the car and inside. We call the unity of driver and the car Homunculus which makes communication with the outside of the vehicle.\u0000 With this concept, we developed a new haptic system. Our system assigns the sense of driver's foot to the bottom of a car. It connects nine vibration motors on a grid to the nine IR distance sensors on a grid. If users use this system, they can feel something passed through the bottom of a car, a bump and so on with feeling like a sole of hid foot was touched. It is like a invisible foot(Figure1) is sticked out bottom of the cars.\u0000 We applied our prototype to several cases in driving and found several interesting points on this. We would discuss on these points on this paper.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"255 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128774724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Usability of video-overlaying SSVEP based BCIs","authors":"C. Kapeller, C. Hintermüller, C. Guger","doi":"10.1145/2160125.2160151","DOIUrl":"https://doi.org/10.1145/2160125.2160151","url":null,"abstract":"This work investigates the usability of an steady-state visual evoked potentials (SSVEP) based brain-computer interface (BCI) with on-screen stimulation. The BCI controls were displayed with an underlying feedback video. Each control had a unique flashing frequency. For classification a combination of minimum energy (ME) and linear discriminant analysis (LDA) was used. Two experiments showed that the use of overlaying controls is possible, but also decreasing the performance.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115830499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmentation of obstacle sensation by enhancing low frequency component for horror game background sound","authors":"Shuyang Zhao, Taku Hachisu, Asuka Ishii, Y. Kuniyasu, H. Kajimoto","doi":"10.1145/2160125.2160149","DOIUrl":"https://doi.org/10.1145/2160125.2160149","url":null,"abstract":"Computer games provide users with a mental stimulation that the real world cannot. Especially, horror games are a popular category. Current horror games can provide the user with a visible ghost and the stereo background sound to thrill the user. Inspired by obstacle sense - the ability of blind people localizing themselves only with hearing, a novel method to augment the sense of existence in the game background sound is proposed in this paper. We found that an effective sense can be created by decreasing high frequency component and increasing low frequency component simultaneously.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116599656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented control of an avatar using an SSVEP based BCI","authors":"C. Kapeller, C. Hintermüller, C. Guger","doi":"10.1145/2160125.2160152","DOIUrl":"https://doi.org/10.1145/2160125.2160152","url":null,"abstract":"The demonstration shows the usage of an EEG-based brain-computer interface (BCI) for the real-time control of an avatar in World of Warcraft. Visitors can test the installation during the conference after about 5 minutes of training time. World of Warcraft is a common Massively Multiplayer Online Role-Playing Game (MMORPG) in which the player controls an avatar in a virtual environment.\u0000 The user has to wear newly developed dry EEG electrodes which are connected to a biosignal amplifier. Then the data is transmitted to a computer to perform the real-time analysis of the EEG data. The BCI system is using steady-state visual evoked potentials (SSVEPs) as control signal. Therefore the system shows different icons flickering with different frequencies. If the user focuses now on one of the icons the flickering frequency is visible in the EEG data and can be extracted with frequency analysis algorithms.\u0000 In order to control an avatar in World of Warcraft it is necessary to have 4 control icons that are analyzed in real-time. Three icons are necessary to turn left or right or to move forward. Additionally a 4th icon is required to perform certain actions like grasping objects, attacking other objects....like shown in Figure 1. The visual stimulation took place via a 60Hz LCD-display with flickering frequencies of 15, 12, 10 and 8.57Hz in combination with an underlying video.\u0000 To visualize the flickering controls a BCI-Overlay library based on OpenGL was implemented, which can be used by any graphics application. It provides the possibility to generate BCI controls within a virtual reality environment or as overlays in combination with video sequences\u0000 Figure 2 shows the components of the complete system. 
The user is connected with 8 EEG electrodes to the BCI system that is running under Windows and MATLAB. The BCI system uses the minimum energy algorithm and a linear discriminant analysis to determine if the user is looking at one of the icons or if the user is not attending.\u0000 Via a UDP communication channel the BCI system is controlling the BCI-Overlay module that generates the 4 flickering icons around the WoW User Interface. If the BCI system detects a certain command it is transmitted to the game controller which generates the corresponding WoW command. This is straight forward for the left, right and move forward commands, but more complicated for the action command. Action commands are context dependant and the controller has to select certain possible actions. Finally the command is transmitted to WoW and the avatar performs the action.\u0000 This allows the user to play WoW with the BCI system only by thought.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124589418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Looming silhouette: an approaching visual stimulus device for pedestrians to avoid collisions","authors":"Maki Yokoyama, Yu Okano, Michi Sato, S. Fukushima, M. Furukawa, H. Kajimoto","doi":"10.1145/2160125.2160148","DOIUrl":"https://doi.org/10.1145/2160125.2160148","url":null,"abstract":"We are exposed daily to the risk of collision at numerous blind intersections. To avoid the risk of collision, we propose a system that elicits an \"approaching sensation\" by presenting a visual stimulus. Possible factors for the approaching sensation are the \"expansion\" and \"motion\" of a silhouette. We compared the effects of these two factors on the approaching sensation and found that to elicit an approaching sensation, the expansion factor is important, and the motion factor has a certain effect in alarming pedestrians. On the base of this result, we produced a system that presents an expanding and moving silhouette of an approaching pedestrian to the pedestrians user.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123303677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantifying Japanese onomatopoeias: toward augmenting creative activities with onomatopoeias","authors":"T. Komatsu","doi":"10.1145/2160125.2160140","DOIUrl":"https://doi.org/10.1145/2160125.2160140","url":null,"abstract":"Onomatopoeias are used when one cannot describe certain phenomena or events literally in the Japanese language, and it is said that one's ambiguous and intuitive feelings are embedded in these onomatopoeias. Therefore, an interface system that can use onomatopoeia as input information could comprehend such users' feelings, and moreover, this system would contribute to augmenting creative activities such as with computer graphics, music, choreography, and so on. The purpose of this study is to propose an objective quantification method for onomatopoeias in the form of an expression vector to be applied to an interface system in order to augment various creative activities.","PeriodicalId":407457,"journal":{"name":"International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114218327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}